2309.10250
Stabilizer-free polygonal and polyhedral virtual elements
Stabilizer-free $P_k$ virtual elements are constructed on polygonal and polyhedral meshes. Here the interpolating space is the space of continuous $P_k$ polynomials on a triangular-subdivision of each polygon, or a tetrahedral-subdivision of each polyhedron. With such an accurate and proper interpolation, the stabilizer of the virtual elements is eliminated while the system is kept positive-definite. We show that the stabilizer-free virtual elements converge at the optimal order in 2D and 3D. Numerical examples are computed, validating the theory.
Yanping Lin, Mo Mu, Shangyou Zhang
2023-09-19T02:07:17Z
http://arxiv.org/abs/2309.10250v1
# Stabilizer-free polygonal and polyhedral virtual elements ###### Abstract. Stabilizer-free \(P_{k}\) virtual elements are constructed on polygonal and polyhedral meshes. Here the interpolating space is the space of continuous \(P_{k}\) polynomials on a triangular-subdivision of each polygon, or a tetrahedral-subdivision of each polyhedron. With such an accurate and proper interpolation, the stabilizer of the virtual elements is eliminated while the system is kept positive-definite. We show that the stabilizer-free virtual elements converge at the optimal order in 2D and 3D. Numerical examples are computed, validating the theory. Key words and phrases: virtual element, stabilizer free, elliptic equation, Hsieh-Clough-Tocher macro-triangle, triangular mesh. 2010 Mathematics Subject Classification: 65N15, 65N30. Yanping Lin is supported in part by HKSAR GRF 15302922 and polyu-CAS joint Lab. Mo Mu is supported in part by Hong Kong RGC CERG HKUST16301218. ## 1. Introduction Let \(\Omega\) be a bounded polygonal (in 2D) or polyhedral (in 3D) domain with Lipschitz boundary \(\partial\Omega\). We consider the Poisson equation \[\begin{cases}-\Delta u=f&\text{ in }\Omega,\\ u=0&\text{ on }\partial\Omega,\end{cases} \tag{1.1}\] where \(f\in L^{2}(\Omega)\). The weak formulation reads: Find \(u\in H^{1}_{0}(\Omega)\) such that \[(\nabla u,\nabla v)=(f,v)\quad\forall v\in H^{1}_{0}(\Omega). \tag{1.2}\] Let \(\mathcal{T}_{h}\) be a polygonal (in 2D) or polyhedral (in 3D) mesh of \(\Omega\), with the set of edges \(\mathcal{E}_{h}\) and, in 3D, the set of face-polygons \(\mathcal{F}_{h}\). For \(k\geq 1\), the virtual element space is defined as \[\begin{cases}\tilde{V}_{h}=\{v\in H^{1}_{0}(\Omega):\tilde{v}\in\mathbb{B}_{k}(\mathcal{E}_{h}),\Delta\tilde{v}|_{K}\in P_{k-2}(K)\}\text{ in 2D, or as}\\ \tilde{V}_{h}=\{v\in H^{1}_{0}(\Omega):\tilde{v}\in\mathbb{B}_{k}(\mathcal{E}_{h});\Delta_{F}\tilde{v}|_{F}\in P_{k-2}(F),\ F\in\mathcal{F}_{h};\\ \Delta\tilde{v}|_{K}\in P_{k-2}(K)\}\end{cases} \tag{1.4}\] in 3D, where \(P_{-1}=\{0\}\), \(\mathbb{B}_{k}(\mathcal{E}_{h})=\{v\in C^{0}(\mathcal{E}_{h}):v|_{e}\in P_{k}(e)\ \forall e\subset\mathcal{E}_{h}\}\), and \(\Delta_{F}\) is the 2D Laplacian on the flat polygon \(F\). In computation, the interpolated virtual finite element space on \(\mathcal{T}_{h}\) is defined by \[V_{h}=\{v_{h}=\Pi^{\nabla}_{h}\tilde{v}\ :\ v_{h}|_{K}\in\mathbb{V}_{k}(K),\ K\in\mathcal{T}_{h};\ \tilde{v}\in\tilde{V}_{h}\}, \tag{1.5}\] where \(\mathbb{V}_{k}(K)=P_{k}(K)\) for the standard virtual elements (and to be defined below in (2.1) for the new virtual element method), and \(v_{h}=\Pi^{\nabla}_{h}\tilde{v}\) is the local \(H^{1}\)-projection: \[\begin{cases}(\nabla(v_{h}-\tilde{v}),\nabla w_{h})_{K}=0&\forall w_{h}=\Pi^{\nabla}_{h}\tilde{w}\in\mathbb{V}_{k}(K),\\ \langle v_{h}-\tilde{v},w_{h}\rangle_{\partial K}=0&\forall w_{h}\in P_{k}(K). 
\end{cases}\] The stabilizer-free virtual element equation reads: Find \(u_{h}=\Pi^{\nabla}_{h}\tilde{u}\in V_{h}\) such that \[(\nabla u_{h},\nabla v_{h})_{h}=(f,v_{h})\quad\forall\tilde{v}\in\tilde{V}_{h},\ v_{h}=\Pi^{\nabla}_{h}\tilde{v}, \tag{1.6}\] where \((\nabla u_{h},\nabla v_{h})_{h}=\sum_{K\in\mathcal{T}_{h}}(\nabla u_{h},\nabla v _{h})_{K}\). In 3D, to find the value of \(\tilde{v}\) inside a face-polygon, we use the moments \(\int_{F}\tilde{v}_{h}p_{k-2}dS\) instead of the surface Laplacian values \(\Delta_{F}\tilde{v}_{h}\in P_{k-2}(F)\), as the latter uniquely determines \(\tilde{v}_{h}\) and consequently uniquely determines the \(P_{k-2}\) moments of \(\tilde{v}_{h}\) on \(F\). Because the dimension of \(V_{h}\) is less than that of \(\tilde{V}_{h}\) (equal only when \(k=1\) on triangular and tetrahedral meshes), the bilinear form in (1.6) is not positive-definite and the equation does not have a unique solution. Thus a discrete stabilizer must be added to the equation (1.6) if the interpolation space \(\mathbb{V}_{k}(K)\) is defined to be \(P_{k}(K)\). With a stabilizer, many degrees of freedom do not fully contribute their approximation power as they are averaged into a smaller dimensional vector space. To be a stabilizer free virtual element, the interpolation space must have at least no less degrees of freedom on each element. But raising the polynomial degree of \(\mathbb{V}_{k}(K)\) in (1.5) does not work. It works only for \(P_{1}\) virtual elements in 2D with special treatment, cf. [6, 7], where \(\mathbb{V}_{k}(K)=P_{k+l(n)}(K)\) and \(n\) is the number of edges of \(K\), in the virtual element space (1.5). Another stabilizer-free method for \(k=1\) is proposed in [8] that \(\mathbb{V}_{k}(K)=P_{k}(K)\cup H_{l}(K)\), where \(H_{l}(K)\) is the set of 2D harmonic polynomials of degree \(l\) or less, and \(l\) depends on the number of edges of \(K\). This is an excellent idea because the \(H_{l}\) harmonic polynomials may help to gather all boundary edge values while not destroying the gradient approximation, as harmonic polynomials have vanishing Laplacian. The same idea has been implemented in some other finite elements [1, 28, 29]. But the method of [8] is also for 2D \(P_{1}\) polygonal elements, as it is shown numerically not working for \(k>3\) in [35]. We propose to use macro-triangles or macro-tetrahedrons \(C^{0}\)-\(P_{k}\) spaces as the interpolation space \(\mathbb{V}_{k}(K)\) in (1.4). This method was first used in [35] for \(P_{k}\) triangular virtual elements only. In [35], each triangle \(K\) is split into three triangles by connecting its barycenter with the three vertices. \(K\) is called a Hsieh-Clough-Tocher macro-triangle [14, 29, 34, 58, 59]. In this work, we extend the method to polygonal and polyhedral virtual elements. It turns out that the triangular virtual element would be the most complicated case, as we have to introduce a new point in the subdivision in order to get a sufficiently large dimensional vector space. For most polygons and polyhedrons we can subdivide them into triangles and tetrahedrons respectively without adding any new point, when we have enough face-edge and face-polygon degrees of freedom. A different interpolation space \(\mathbb{V}_{k}\) changes the quadrature rule for computing \((\nabla u_{h},\nabla v_{h})=(\nabla\Pi_{h}^{\nabla}\tilde{u},\nabla\Pi_{h}^{ \nabla}\tilde{v})\). Such an accurate local interpolation does not increase the computational cost, once the local stiffness matrix is generated. 
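For illustration, here is a minimal Python sketch (not the authors' code) of the local projection \(\Pi^{\nabla}_{h}\) described above, for the lowest-order case \(k=1\) on a polygon subdivided by connecting an interior point to its vertices. The helper names `p1_stiffness` and `pi_nabla_p1` are ours; for simplicity the interior point is always taken as the vertex centroid (the paper avoids a new point whenever the polygon can be triangulated from its own vertices), and we use the fact that for \(k=1\) the virtual function is harmonic in \(K\), so the interior equation of the projection has a vanishing right-hand side.

```python
import numpy as np

def p1_stiffness(tri):
    # Local P1 stiffness matrix on one triangle; tri is a 3x2 array of vertex coordinates.
    B = np.array([tri[1] - tri[0], tri[2] - tri[0]]).T          # 2x2 edge matrix
    area = 0.5 * abs(np.linalg.det(B))
    # Columns of G are the constant gradients of the three barycentric basis functions.
    G = np.linalg.inv(B).T @ np.array([[-1.0, 1.0, 0.0],
                                       [-1.0, 0.0, 1.0]])
    return area * (G.T @ G)

def pi_nabla_p1(vertices, boundary_values):
    """H1-projection of a k=1 virtual function onto V_1(K), for a polygon K that is
    subdivided by connecting its vertex centroid to the vertices (illustrative helper)."""
    vertices = np.asarray(vertices, dtype=float)
    g = np.asarray(boundary_values, dtype=float)
    n = len(vertices)
    center = vertices.mean(axis=0)                  # the added interior point
    pts = np.vstack([vertices, center])             # nodes 0..n-1: polygon vertices, node n: interior
    A = np.zeros((n + 1, n + 1))
    for i in range(n):                              # sub-triangle (v_i, v_{i+1}, center)
        idx = [i, (i + 1) % n, n]
        A[np.ix_(idx, idx)] += p1_stiffness(pts[idx])
    # v_h equals tilde v at the boundary nodes; the single interior value comes from
    # (grad(v_h - tilde v), grad phi_interior)_K = 0.  For k = 1 the virtual function is
    # harmonic in K, so (grad tilde v, grad phi_interior)_K = 0 and the right side vanishes.
    v_center = -(A[n, :n] @ g) / A[n, n]
    return np.append(g, v_center)                   # nodal values of v_h on the macro-triangulation

# Lemma 2.1 in action: a global P1 function is reproduced exactly (regular pentagon).
t = 2 * np.pi * np.arange(5) / 5
verts = np.column_stack([np.cos(t), np.sin(t)])
vals = verts @ np.array([1.0, 2.0])                 # tilde v(x, y) = x + 2y at the vertices
vh = pi_nabla_p1(verts, vals)
print(vh[-1], verts.mean(axis=0) @ np.array([1.0, 2.0]))   # the two values agree
```

The same assembly pattern extends to \(P_{k}\) Lagrange bases on the sub-triangles and, in 3D, to sub-tetrahedrons.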
On the other hand, eliminating the stabilizer may reduce computational cost, and may improve the condition number of the resulting linear system. More importantly, a stabilizer-free method may fully utilize every degree of freedom in the discrete approximation. Thus stabilizer-free methods may result in superconvergence. The stabilizer is eliminated in the weak Galerkin finite element method [2, 18, 19, 23, 30, 31, 38, 39, 40, 43, 44, 45]. It is also eliminated in the \(H(\mathrm{div})\) finite element method [24, 42, 47]. The stabilizer-free \(C^{0}\) or \(C^{-1}\) nonconforming finite elements are constructed for the biharmonic equation [41, 49, 50]. We have stabilizer-free discontinuous Galerkin finite element methods [17, 26, 36, 37, 41]. Without a stabilizer, two-order superconvergent weak Galerkin finite elements are found in [3, 32, 33, 48, 54, 57]. Also two-order superconvergent stabilizer-free discontinuous Galerkin finite elements are constructed in [51, 52, 55] for second order elliptic equations. One or two-order superconvergent weak Galerkin finite elements are found for the Stokes equations in [25, 46, 53]. Four-order superconvergent weak Galerkin finite elements [56] and four-order superconvergent discontinuous Galerkin finite elements [52, 57] are all stabilizer-free, for the biharmonic equation. For example, a \(P_{3}\) discontinuous finite element solution is as accurate as a \(C^{1}\)-\(P_{7}\) finite element solution in solving a 2D biharmonic equation. In this paper, we show that with the macro-triangle/tetrahedron interpolation, the stabilizer-free virtual element equation (1.5) has a unique and quasi-optimal solution. Numerical examples on the new stabilizer-free virtual elements are computed, verifying the theory. ## 2. The 2D interpolation We define in this section the 2D macro-triangle interpolation space and show that the stabilizer-free virtual element equation has a unique solution. Let \(K\) be a 2D polygon. The only requirement is that \(K\) is subdivided into more than one triangle. If \(K\) has only three sides, i.e., \(K\) is a triangle, we add a barycenter point to the triangle, as shown for the macro-triangle on \(K\) in Figure 1(b). If \(K\) is a polygon of four sides or more, we can usually connect some vertices of \(K\) to subdivide \(K\) into triangles, forming a macro-triangle polygon, cf. Figure 1(a). If needed, we can add one or two inner points to subdivide \(K\), cf. Figure 1(c), where we intentionally add a new point for the purpose of illustration. With the subdivision \(K=\cup_{T_{i}\subset K}T_{i}\), we define the interpolation space as, for \(k\geq 1\), \[\mathbb{V}_{k}(K)=\{v_{h}\in C^{0}(K):v_{h}|_{T_{i}}\in P_{k}(T_{i}),\ T_{i}\subset K\}. \tag{2.1}\] Figure 1. (a) A polygon is subdivided without any new point. (b) A triangle must be subdivided with one new point. (c) A polygon is subdivided with one new point. One can easily count the internal degrees of freedom of \(\mathbb{V}_{k}(K)\) to get \[\dim(\mathbb{V}_{k}\cap H^{1}_{0}(K))>\dim P_{k-2}(K).\] The interpolation operator is defined to be the local \(H^{1}\)-projection, i.e., \(v_{h}=\Pi^{\nabla}_{h}\tilde{v}\in\mathbb{V}_{k}\) such that \(v_{h}|_{\partial K}=\tilde{v}\) and \[(\nabla(v_{h}-\tilde{v}),\nabla w_{h})=0\quad\forall w_{h}\in\mathbb{V}_{k}(K). \tag{2.2}\] **Lemma 2.1**.: _The interpolation operator \(\Pi^{\nabla}_{h}\) is well defined in (2.2) and it preserves \(P_{k}\) polynomials,_ \[\Pi^{\nabla}_{h}\tilde{v}=\tilde{v}\quad\text{ if }\ \tilde{v}\in P_{k}(K). 
\tag{2.3}\] Proof.: Because \(\tilde{v}|_{\partial K}\in\mathbb{B}_{k}(\mathcal{E}_{h})\), \(v_{h}\) can assume the boundary condition \(v_{h}=\tilde{v}\) exactly on \(\partial K\). The linear system of equations in (2.2) is a finite dimensional square system. The existence is implied by the uniqueness. To show the uniqueness, we let \(\tilde{v}=0\) in (2.2). Letting \(w_{h}=v_{h}\) in (2.2), we get \[\nabla v_{h}=\mathbf{0}\quad\text{ on }\ K.\] Thus \(v_{h}=c\) is a constant on \(K\). As \(v_{h}\) is continuous on edges, \(v_{h}=c\) is a global constant on the whole domain. By the boundary condition, we get \(0=\tilde{v}|_{\partial\Omega}=v_{h}|_{\partial\Omega}=c\). Hence \(v_{h}=0\) and (2.2) has a unique solution. If \(\tilde{v}\in P_{k}(K)\subset\mathbb{V}_{k}(K)\), defined in (1.4), then the solution of (2.2) says, letting \(w_{h}=v_{h}-\tilde{v}\), \[\nabla(v_{h}-\tilde{v})=\mathbf{0}.\] Thus \(v_{h}-\tilde{v}\) is a global constant which must be zero as it vanishes at all \(\partial K\). (2.3) is proved. **Lemma 2.2**.: _The stabilizer-free virtual element equation (1.5) has a unique solution, where the interpolation \(\Pi^{\nabla}_{h}\) is defined in (2.2)._ Proof.: As both \(\tilde{u},\tilde{v}\in\tilde{V}_{h}\), (1.5) is a finite square system of linear equations. The uniqueness of solution implies the existence. To show the uniqueness, we let \(f=0\) and \(\tilde{v}=\tilde{u}\) in (1.5). It follows that \[|\Pi^{\nabla}_{h}\tilde{u}|_{1,h}=0.\] Thus \(\Pi^{\nabla}_{h}\tilde{u}=c\) is constant on each \(K\). But \(\Pi^{\nabla}_{h}\tilde{u}\) is continuous on the whole domain. By the boundary condition, we get \(0=\Pi^{\nabla}_{h}\tilde{u}|_{\partial\Omega}=c\). That is, \[\Pi^{\nabla}_{h}\tilde{u}=0\ \text{ and }\ \tilde{u}|_{\partial K}=\Pi^{\nabla}_{h} \tilde{u}=0. \tag{2.4}\] For \(k=1\), \(\tilde{u}\) has no internal degree of freedom, and the lemma is proved by (2.4), \[\tilde{u}=0,\ \ \text{if}\ \ k=1.\] For \(k\geq 2\), let \[b_{K}=\sum_{i\in\mathcal{N}_{2}}\phi_{i}\in H^{1}_{0}(K)\cap\mathbb{V}_{k}(K), \tag{2.5}\] where \(\mathcal{N}_{2}\) is the set of all internal mid-edge points of \(\{T_{i}\}\), \(K=\cup T_{i}\), and \(\phi_{i}\) is the \(P_{2}\) Lagrange nodal basis at node \(i\). Then \(b_{K}>0\) inside polygon \(K\) if it does not have any added internal point, cf. Figure 1. Otherwise, \(b_{K}>0\) inside \(K\) except at one or two internal points where \(b_{K}=0\). On one polygon \(K\), by (2.2), (2.4) and integration by parts, we have \[(-\Delta\tilde{u},w_{h})=(\nabla\tilde{u},\nabla w_{h})=0\ \ \ \forall w_{h}\in H^{1}_{0}(K)\cap\mathbb{V}_{k}(K). \tag{2.6}\] By the space \(\tilde{V}_{h}\) definition (1.3), we denote \[p_{k-2}=-\Delta\tilde{u}\in P_{k-2}(K). \tag{2.7}\] Let the \(w_{h}\) in (2.6) be \[w_{h}=p_{k-2}b_{K}\in H^{1}_{0}(K)\cap\mathbb{V}_{k}(K), \tag{2.8}\] where the positive \(P_{2}\) bubble \(b_{K}\) is defined in (2.5). With the \(w_{k}\) in (2.8), we get from (2.6) and (2.7) that \[\int_{K}p_{k-2}^{2}b_{K}d\mathbf{x}=0.\] As \(b_{K}>0\) inside \(K\) (other than 1 or 2 possibly internal points), it follows that \[p_{k-2}^{2}=0\ \ \text{and}\ \ p_{k-2}=0\ \ \text{on}\ \ K.\] By (2.4) and (2.7), \(\Delta\tilde{u}=0\) in \(K\) and \(\tilde{u}=0\) on \(\partial K\). Thus, by the unique solution of the Laplace equation, \(\tilde{u}=0\). The lemma is proved. ## 3. 
The 3D interpolation We define in this section the 3D macro-tetrahedron interpolation space and show that the stabilizer-free virtual element equation has a unique solution when using the interpolation. Let \(K\) be a 3D polyhedron. The first requirement is that each face-polygon \(F\) must be subdivided into more than one triangle. The subdivision of polygons is defined in the last section. For example, if a face polygon \(F\) has only three sides, i.e., \(F\) is a triangle, we must add a barycenter point to subdivide it into three triangles, cf. Figure 2(b). However if a face polygon has more than three edges, we usually can subdivide it into triangles easily, cf. Figure 3(b). After each face polygon is subdivided into more than one triangle, the next requirement in subdividing \(K\) is that every resulting tetrahedron has at least two face-triangles inside \(K\). For example, after cutting each face-triangle of a tetrahedron \(K\) into three triangles, we add one more internal point to cut \(K\) into twelve tetrahedrons, cf. Figure 2(c). For example, for the polyhedron \(K\) of a cube in Figure 3(a), we cut each face-polygon into two triangles without adding any point, and we cut \(K\) into six tetrahedrons without adding any internal point, cf. Figure 3(b). Figure 3. (a) A polyhedron \(K\). (b) A polyhedron is subdivided into six tetrahedrons without adding any point. (c) A polyhedron is subdivided into twelve tetrahedrons with one new point and without adding any face point. Figure 2. (a) A tetrahedron \(K\). (b) By adding a barycenter point to each face-triangle, the face triangles of \(K\) are subdivided into twelve triangles. (c) By adding one barycenter point of \(K\), the tetrahedron \(K\) is subdivided into twelve tetrahedrons. For example, for the polyhedron \(K\) of a cube in Figure 3(a), we can also subdivide it by cutting each face-polygon into two triangles without adding any point, and cutting \(K\) into twelve tetrahedrons with one internal point, cf. Figure 3(c). For example, for the polyhedron \(K\) of a cube in Figure 4(a), we can also subdivide it by cutting each face-polygon into four triangles with one added point on each face-polygon, and cutting \(K\) into twenty four tetrahedrons with one additional point inside \(K\), cf. Figure 4(b). In the analysis, we assume the same face-polygon subdivision on the two polyhedrons sharing the polygon. In the computation, the subdivisions of a shared polygon on the two sides can be different as the interpolation and the computation on the two polyhedrons are independent of each other. We can extend the theory easily to cover the case that different triangulations on a face-polygon of two polyhedrons, as both interpolations are the 2D \(H^{1}\) projection of same \(P_{k-2}\) moments. With a proper tetrahedral subdivision of \(K=\cup_{T_{i}\subset K}T_{i}\), cf. Figures 2-4, we define the interpolation space on \(K\) as, for \(k\geq 1\), in (2.1), again in 3D. The interpolation operator is defined by two steps. On each face polygon \(F\in\mathcal{F}_{h}\), we solve an \(H^{1}\) projection problem that \(v_{h}|_{F}=\Pi_{h}^{\nabla}\tilde{v}\in\mathbb{V}_{k}(F)\) (the restriction of \(\mathbb{V}_{k}(K)\) on \(F\)) satisfying \[\begin{split}(\nabla_{F}(v_{h}|_{F}-\tilde{v}),\nabla_{F}w_{h})& =0\quad\forall w_{h}\in\mathbb{V}_{k}(F)\cap H^{1}_{0}(F),\\ v_{h}|_{F}-\tilde{v}&=0\quad\text{ on }\partial F, \end{split} \tag{3.1}\] where \(\nabla_{F}\) is the 2D face gradient. 
This way, the boundary value of \(\Pi_{h}^{\nabla}\tilde{v}\) is determined on \(\partial K\). The interpolation in 3D is defined as the 3D local Figure 4. (a) A polyhedron \(K\). (b) A polyhedron is subdivided into twenty four tetrahedrons with one new point each face-polygon and one internal point. \(H^{1}\)-projection, i.e., \(v_{h}=\Pi_{h}^{\nabla}\tilde{v}\in\mathbb{V}_{k}\) such that \[\begin{split}(\nabla(v_{h}-\tilde{v}),\nabla w_{h})&=0 \quad\forall w_{h}\in\mathbb{V}_{k}(K)\cap H^{1}_{0}(K),\\ v_{h}-v_{h}|_{F}&=0\quad\text{ on all }F\in\partial K, \end{split} \tag{3.2}\] where \(v_{h}|_{F}\) is defined in (3.1). **Lemma 3.1**.: _The interpolation operator \(\Pi_{h}^{\nabla}\) is well defined in (3.2) and it preserves \(P_{k}\) polynomials,_ \[\Pi_{h}^{\nabla}\tilde{v}=\tilde{v}\quad\text{ if }\ \tilde{v}\in P_{k}(K). \tag{3.3}\] Proof.: Because \(\tilde{v}|_{\partial F}\in\mathbb{B}_{k}(\mathcal{E}_{h})\), \(v_{h}\) can assume the boundary condition \(v_{h}=\tilde{v}\) exactly on \(\partial F\), where \(F\) is a face-polygon in the polyhedral mesh. As we have proved in Lemma 2.1, \(v_{h}|_{F}\) is well-defined in (3.1). Further by Lemma 2.1, \[v_{h}|_{F}=p_{k}|_{F}\quad\text{ if }p_{k}=\tilde{v}\in P_{k}(K). \tag{3.4}\] The linear system of equations in (3.2) is a finite dimensional square system, after the boundary condition is enforced. The existence is implied by the uniqueness. To show the uniqueness, we let \(\tilde{v}=0\) in (3.2). By Lemma 2.1, \(v_{h}|_{F}=0\). Letting \(w_{h}=v_{h}\) in (3.2), we get \[\nabla v_{h}=\mathbf{0}\quad\text{ on }\ K.\] Thus \(v_{h}=c\) is one constant on all tetrahedrons of \(K\). As \(v_{h}\) is continuous, by the zero boundary condition, \(v_{h}=0\) and (3.2) has a unique solution. If \(\tilde{v}=p_{k}\in P_{k}(K)\subset\mathbb{V}_{k}(K)\), then (3.2) says, letting \(w_{h}=v_{h}-p_{k}\), \[\nabla(v_{h}-p_{k})=\mathbf{0}.\] Thus \(v_{h}-p_{k}\) is a constant on \(K\), which must be zero by (3.4). (3.3) is proved. **Lemma 3.2**.: _The stabilizer-free virtual element equation (1.5) has a unique solution, where the interpolation \(\Pi_{h}^{\nabla}\) is defined in (3.2)._ Proof.: As both \(\tilde{u},\tilde{v}\in\tilde{V}_{h}\), (1.5) is a finite square system of linear equations. We only need to show the uniqueness, by letting \(f=0\) and \(\tilde{v}=\tilde{u}\) in (1.5). As in the 2D proof of Lemma 2.2, we have \[|\Pi_{h}^{\nabla}\tilde{u}|_{1,h}=0\quad\text{ and }\ \Pi_{h}^{\nabla}\tilde{u}=0.\] For \(k=1\), \(\tilde{u}\) has no internal degree of freedom on each face polygon \(F\), and \(\tilde{v}|_{F}=\Pi_{h}^{\nabla}\tilde{u}|_{F}=0\). Further \(\tilde{u}\) has no internal degree of freedom on each polyhedron \(K\), and \(\tilde{v}|_{K}=\Pi_{h}^{\nabla}\tilde{u}|_{K}=0\). The lemma is proved. For \(k\geq 2\), by the proof of Lemma 2.2, as each face polygon is subdivided into more than one tetrahedron, we have \(\tilde{v}|_{F}=\Pi_{h}^{\nabla}\tilde{u}|_{F}=0\) on every face polygon \(F\). Next, as every tetrahedron has at least two internal face triangles, we define an internal \(P_{2}\) bubble by \[b_{K}=\sum_{i\in\mathcal{N}_{2}}\phi_{i}\in H_{0}^{1}(K)\cap\mathbb{V}_{k}(K), \tag{3.5}\] where \(\mathcal{N}_{2}\) is the set of all internal mid-edge points of \(\{T_{i}\}\), \(K=\cup T_{i}\), and \(\phi_{i}\) is the \(P_{2}\) Lagrange nodal basis at node \(i\). 
As every tetrahedron has such an internal \(P_{2}\) node (which is the shared-edge mid-point of two internal face triangles), \(b_{K}>0\) inside polyhedron \(K\) if it does not have any added internal point. Otherwise, \(b_{K}>0\) inside \(K\) except at one or two internal points of \(K\) where \(b_{K}=0\). On one polyhedron \(K\), let \[w_{h}=p_{k-2}b_{K}\in H_{0}^{1}(K)\cap\mathbb{V}_{k}(K), \tag{3.6}\] where the positive \(P_{2}\) bubble \(b_{K}\) is defined in (3.5), and \(p_{k-2}=-\Delta\tilde{u}\in P_{k-2}(K)\). With the integration by parts, we get from (2.6) and (3.6) that \[\int_{K}p_{k-2}^{2}b_{K}d\mathbf{x}=-\int_{K}\nabla\tilde{u}\nabla w_{h}d \mathbf{x}=0.\] As \(b_{K}>0\) inside \(K\) (other than 1 or 2 possibly internal points), it follows that \[p_{k-2}^{2}=0\ \ \text{and}\ \ p_{k-2}=0\ \ \text{on}\ \ K.\] As \(\Delta\tilde{u}=0\) in \(K\) and \(\tilde{u}=0\) on \(\partial K\), by the unique solution of the Laplace equation, \(\tilde{u}=0\). The lemma is proved. ## 4. Convergence We show that the stabilizer-free virtual element solution converges at the optimal order, in this section. **Theorem 4.1**.: _Let the solution of (1.2) be \(u\in H^{k+1}\cap H_{0}^{1}(\Omega)\). Let the stabilizer-free virtual element solution of (1.5) be \(u_{h}\). Then the discrete solution converges at the optimal order with the following error estimate,_ \[|u-u_{h}|_{1}\leq Ch^{k}|u|_{k+1}. \tag{4.1}\] Proof.: As \(w_{h}\in V_{h}\subset H^{1}_{0}(\Omega)\), subtracting (1.5) from (1.2), we obtain \[(\nabla(u-u_{h}),\nabla w_{h})=0\quad\forall w_{h}\in V_{h}.\] By the Schwarz inequality, we get that \[|u-u_{h}|_{1}^{2} =(\nabla(u-u_{h}),\nabla(u-I_{h}u))\] \[\leq|u-u_{h}|_{1}|u-I_{h}u|_{1}\leq Ch^{k}|u|_{k+1}|u-u_{h}|_{1},\] where \(I_{h}u\) is the Scott-Zhang interpolation on subdivided triangular mesh or tetrahedral mesh, cf. [27]. The theorem is proved. To get the optimal order \(L^{2}\) error estimate, we assume a full regularity for the dual problem that the solution of \[-\Delta w =u-u_{h}\quad\text{ in }\ \Omega,\] \[w =0\quad\text{ on }\ \partial\Omega, \tag{4.2}\] satisfies \[|w|_{2}\leq C\|u-u_{h}\|_{0}. \tag{4.3}\] **Theorem 4.2**.: _Let the solution of (1.2) be \(u\in H^{k+1}\cap H^{1}_{0}(\Omega)\). Let the stabilizer-free virtual element solution of (1.5) be \(u_{h}\). Then the discrete solution converges at the optimal order with the following \(L^{2}\) error estimate, assuming (4.3),_ \[\|u-u_{h}\|_{0}\leq Ch^{k+1}|u|_{k+1}.\] Proof.: Let \(w_{h}=\Pi_{h}^{\nabla}\tilde{w}\) be the virtual element solution of (4.2). By (4.2), (4.3) and (4.1), we get \[\|u-u_{h}\|_{0}^{2} =(\nabla w,\nabla(u-u_{h}))=(\nabla(w-w_{h}),\nabla(u-u_{h}))\] \[\leq Ch|w|_{2}h^{k}|u|_{k+1}\leq Ch^{k+1}|u|_{k+1}\|u-u_{h}\|_{0}.\] Canceling a \(\|u-u_{h}\|_{0}\) on both sides, we proved the optimal-order \(L^{2}\) error bound. ## 5. Numerical test We solve numerically the Poisson equation (1.1) on domain \(\Omega=(0,1)\times(0,1)\), where an exact solution is chosen as \[u(x,y)=\sin(\pi x)\sin(\pi y). \tag{5.1}\] We test the \(P_{k}\) (\(k=1,2,3,4,5\)) stabilizer-free virtual elements on pentagonal meshes shown in Figure 5. In Table 1, we compute the \(P_{1}\)-\(P_{5}\) stabilizer-free virtual elements solutions for (5.1) on the pentagonal meshes shown in Figure 5. All virtual element solutions converge at rates of the optimal order in both \(L^{2}\) and \(H^{1}\) norms. 
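As a reading aid for the tables that follow, the \(O(h^{r})\) columns are obtained from the errors on two consecutive grids via \(r=\log_{2}(e_{h}/e_{h/2})\). A short Python sketch, assuming each grid level halves \(h\) and using the \(P_{3}\) \(H^{1}\)-seminorm errors of Table 1 as sample input:

```python
import numpy as np

def observed_order(err_coarse, err_fine):
    # Observed convergence order between two consecutive, uniformly refined grids (h -> h/2).
    return np.log2(err_coarse / err_fine)

# Sample input: the P_3 H^1-seminorm errors on grids 6-8 of Table 1.
errors = [0.8973e-05, 0.1122e-05, 0.1402e-06]
print([round(observed_order(errors[i], errors[i + 1]), 2) for i in range(len(errors) - 1)])
# -> [3.0, 3.0], the optimal O(h^k) rate of Theorem 4.1 for k = 3
```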
In Table 2, we compute the \(P_{1}\)-\(P_{5}\) stabilizer-free virtual elements solutions for (5.1) on the hexagonal meshes shown in Figure 6. All virtual element solutions converge at rates of the optimal order in both \(L^{2}\) and \(H^{1}\) norms. Figure 5. The first two levels of pentagonal meshes for the computation in Table 1. Figure 6. The first two levels of hexagonal grids for the computation in Table 2. We solve the 3D Poisson equation (1.1) on domain \(\Omega=(0,1)^{3}\), where an exact solution is chosen as \[u(x,y,z)=2^{6}(x-x^{2})(y-y^{2})(z-z^{2}). \tag{5.2}\] In Table 3, we compute the 3D \(P_{1}\)-\(P_{5}\) stabilizer-free virtual elements solutions for (5.2) on the cubic meshes shown in Figure 7. All virtual element solutions converge at rates of the optimal order in both \(L^{2}\) and \(H^{1}\) norms. In particular, we have one order superconvergence in \(H^{1}\) semi-norm for the \(P_{1}\) stabilizer-free virtual element solutions. Also, we have one order superconvergence in both \(H^{1}\) semi-norm and \(L^{2}\) for the \(P_{2}\) stabilizer-free virtual element solutions. But we do not have a theory for these superconvergences. It is surprising that the \(P_{2}\) solutions are more accurate than the \(P_{3}\) solutions in Table 3. \begin{table} \begin{tabular}{c|c c|c c} \hline Grid & \(\|\Pi_{h}^{\nabla}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi_{h}^{\nabla}u-u_{h}|_{1}\) & \(O(h^{r})\) \\ \hline & \multicolumn{3}{c}{By the \(P_{1}\) stabilizer-free virtual element.} \\ \hline 7 & 0.4462E-04 & 2.00 & 0.5834E-02 & 1.00 \\ 8 & 0.1116E-04 & 2.00 & 0.2916E-02 & 1.00 \\ 9 & 0.2789E-05 & 2.00 & 0.1458E-02 & 1.00 \\ \hline & \multicolumn{3}{c}{By the \(P_{2}\) stabilizer-free virtual element.} \\ \hline 7 & 0.1930E-06 & 3.00 & 0.1131E-03 & 2.00 \\ 8 & 0.2413E-07 & 3.00 & 0.2826E-04 & 2.00 \\ 9 & 0.3016E-08 & 3.00 & 0.7066E-05 & 2.00 \\ \hline & \multicolumn{3}{c}{By the \(P_{3}\) stabilizer-free virtual element.} \\ \hline 6 & 0.2486E-07 & 4.00 & 0.8973E-05 & 3.00 \\ 7 & 0.1554E-08 & 4.00 & 0.1122E-05 & 3.00 \\ 8 & 0.9716E-10 & 4.00 & 0.1402E-06 & 3.00 \\ \hline & \multicolumn{3}{c}{By the \(P_{4}\) stabilizer-free virtual element.} \\ \hline 5 & 0.1051E-07 & 5.00 & 0.1977E-05 & 4.00 \\ 6 & 0.3286E-09 & 5.00 & 0.1236E-06 & 4.00 \\ 7 & 0.1027E-10 & 5.00 & 0.7724E-08 & 4.00 \\ \hline & \multicolumn{3}{c}{By the \(P_{5}\) stabilizer-free virtual element.} \\ \hline 3 & 0.7591E-06 & 6.02 & 0.4741E-04 & 4.98 \\ 4 & 0.1181E-07 & 6.01 & 0.1488E-05 & 4.99 \\ 5 & 0.1846E-09 & 6.00 & 0.4659E-07 & 5.00 \\ \hline \end{tabular} \end{table} Table 1. The error profile for (5.1) on meshes shown in Figure 5. ## 6. 
Ethical Statement ### Compliance with Ethical Standards \begin{table} \begin{tabular}{c|c c|c c} \hline Grid & \(\|\Pi_{h}^{\nabla}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi_{h}^{\nabla}u-u_{h}|_{1}\) & \(O(h^{r})\) \\ \hline & \multicolumn{3}{c}{By the \(P_{1}\) stabilizer-free virtual element.} \\ \hline 6 & 0.1142E-03 & 2.00 & 0.1255E-01 & 1.00 \\ 7 & 0.2855E-04 & 2.00 & 0.6273E-02 & 1.00 \\ 8 & 0.7137E-05 & 2.00 & 0.3136E-02 & 1.00 \\ \hline & \multicolumn{3}{c}{By the \(P_{2}\) stabilizer-free virtual element.} \\ \hline 6 & 0.1011E-05 & 3.00 & 0.4338E-03 & 2.00 \\ 7 & 0.1265E-06 & 3.00 & 0.1085E-03 & 2.00 \\ 8 & 0.1581E-07 & 3.00 & 0.2712E-04 & 2.00 \\ \hline & \multicolumn{3}{c}{By the \(P_{3}\) stabilizer-free virtual element.} \\ \hline 6 & 0.2132E-07 & 4.00 & 0.1081E-04 & 3.00 \\ 7 & 0.1332E-08 & 4.00 & 0.1351E-05 & 3.00 \\ 8 & 0.8329E-10 & 4.00 & 0.1689E-06 & 3.00 \\ \hline & \multicolumn{3}{c}{By the \(P_{4}\) stabilizer-free virtual element.} \\ \hline 5 & 0.1200E-07 & 4.99 & 0.3029E-05 & 4.00 \\ 6 & 0.3756E-09 & 5.00 & 0.1894E-06 & 4.00 \\ 7 & 0.1175E-10 & 5.00 & 0.1185E-07 & 4.00 \\ \hline & \multicolumn{3}{c}{By the \(P_{5}\) stabilizer-free virtual element.} \\ \hline 3 & 0.1177E-05 & 5.98 & 0.8909E-04 & 4.97 \\ 4 & 0.1842E-07 & 6.00 & 0.2800E-05 & 4.99 \\ 5 & 0.2895E-09 & 5.99 & 0.8776E-07 & 5.00 \\ \hline \end{tabular} \end{table} Table 2. The error profile for (5.1) on meshes shown in Figure 6. Figure 7. The first three grids for the computation in Table 3. The submitted work is original and is not published elsewhere in any form or language. ### Funding Yanping Lin is supported in part by HKSAR GRF 15302922 and polyCAS joint Lab. Mo Mu is supported in part by Hong Kong RGC CERG HKUST16301218. ### Conflict of Interest There is no potential conflict of interest. ### Ethical approval This article does not contain any studies involving animals. This article does not contain any studies involving human participants. \begin{table} \begin{tabular}{c|c c|c c} \hline Grid & \(\|\Pi_{h}^{\nabla}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi_{h}^{\nabla}u-u_{h}|_{1}\) & \(O(h^{r})\) \\ \hline & By the 3D \(P_{1}\) stabilizer-free virtual element. \\ \hline 5 & 0.4944E-02 & 1.92 & 0.2780E-01 & 1.95 \\ 6 & 0.1254E-02 & 1.98 & 0.7011E-02 & 1.99 \\ 7 & 0.3145E-03 & 1.99 & 0.1757E-02 & 2.00 \\ \hline & By the 3D \(P_{2}\) stabilizer-free virtual element. \\ \hline 5 & 0.1132E-04 & 3.89 & 0.1001E-02 & 2.85 \\ 6 & 0.7294E-06 & 3.96 & 0.1313E-03 & 2.93 \\ 7 & 0.4615E-07 & 3.98 & 0.1680E-04 & 2.97 \\ \hline & By the 3D \(P_{3}\) stabilizer-free virtual element. \\ \hline 4 & 0.4149E-04 & 3.84 & 0.1569E-02 & 2.83 \\ 5 & 0.2688E-05 & 3.95 & 0.2053E-03 & 2.93 \\ 6 & 0.1703E-06 & 3.98 & 0.2618E-04 & 2.97 \\ \hline & By the 3D \(P_{4}\) stabilizer-free virtual element. \\ \hline 4 & 0.7316E-06 & 4.94 & 0.4987E-04 & 3.94 \\ 5 & 0.2331E-07 & 4.97 & 0.3169E-05 & 3.98 \\ 6 & 0.7356E-09 & 4.99 & 0.1997E-06 & 3.99 \\ \hline & By the 3D \(P_{5}\) stabilizer-free virtual element. \\ \hline 3 & 0.2986E-05 & 5.97 & 0.7301E-04 & 4.96 \\ 4 & 0.4707E-07 & 5.99 & 0.2318E-05 & 4.98 \\ 5 & 0.7386E-09 & 5.99 & 0.7304E-07 & 4.99 \\ \hline \end{tabular} \end{table} Table 3. The error profile for (5.2) on cubic meshes shown in Figure 7. ### Informed consent This research does not have any human participant. ### Availability of supporting data This research does not use any external or author-collected data. ### Authors' contributions All authors made equal contribution. ### Acknowledgments None.
2309.12447
A flexible modular all-fiber based photon pair source for quantum key distribution in a network
Entanglement-based QKD protocols require robust and stable photon pair sources in terms of high heralding efficiencies or photon pair generation rates even under harsh environmental conditions, e.g. when operated in the field. In this paper, we report on a flexible, tunable, alignment-free, all-fiber coupled photon source based on spontaneous parametric down-conversion in periodically poled crystals. It can be operated in continuous-wave and pulsed modes, respectively. Its rack-compatible and modular setup allows a straight forward plug-and-play integration of coding-modules e.g. interferometers to enable various QKD protocols such as phase or phase-time coding. We demonstrate operation as a type-II and a type-0 SPDC stage proving the high flexibility of our source. Furthermore, we demonstrate simultaneous operation of SHG and SPDC in a double-pass configuration within the same nonlinear crystal further simplifying the hardware requirements of our source. To evaluate the conversion efficiencies of our modules, we employ data post-processing to remove artefacts from detector afterpulsing and deadtimes of the detectors. We investigate the source performance for various repetition rates.
Maximilian Tippmann, Erik Fitzke, Oleg Nikiforov, Philipp Kleinpaß, Till Dolejsky, Maximilian Mengler, Thomas Walther
2023-09-21T19:44:28Z
http://arxiv.org/abs/2309.12447v1
# A flexible modular all-fiber based photon pair source for quantum key distribution in a network ###### Abstract Entanglement-based QKD protocols require robust and stable photon pair sources in terms of high heralding efficiencies or photon pair generation rates even under harsh environmental conditions, e.g. when operated in the field. In this paper, we report on a flexible, tunable, alignment-free, all-fiber coupled photon source based on spontaneous parametric down-conversion in periodically poled crystals. It can be operated in continuous-wave and pulsed modes, respectively. Its rack-compatible and modular setup allows a straight forward plug-and-play integration of coding-modules e.g. interferometers to enable various QKD protocols such as phase or phase-time coding. We demonstrate operation as a type-II and a type-0 SPDC stage proving the high flexibility of our source. Furthermore, we demonstrate simultaneous operation of SHG and SPDC in a double-pass configuration within the same nonlinear crystal further simplifying the hardware requirements of our source. To evaluate the conversion efficiencies of our modules, we employ data post-processing to remove artefacts from detector afterpulsing and deadtimes of the detectors. We investigate the source performance for various repetition rates. * _Keywords_: photon pair source, quantum key distribution, network, heralded single photon source, double-pass setup ## 1 Introduction Photon pair sources offer a large variety of applications such as quantum key distribution (QKD) [1, 2, 3] and quantum computing [4]. Such sources can be used as heralded single photon sources [5, 6, 7] or to provide entangled pairs of photons [8, 9, 10, 11, 12, 13, 14]. Each application comes with a unique set of source requirements in terms of photon production rates, noise suppression and filtering as well as photon separation and coupling for the experiments. There are various methods to produce photon pairs. The most common is spontaneous parametric down-conversion (SPDC) typically in crystal waveguides employing periodic-poling for quasi-phase matching [13]. However, for typical application scenarios in QKD setups the produced photon pairs are required to be coupled to an optical fiber to bridge larger distances. Thus, some photon pair sources use spontaneous four-wave-mixing in fibers to avoid coupling from a waveguide to a fiber reducing losses. These sources have the drawback to require cryogenic cooling of the setup to suppress Raman-noise photons which otherwise would degrade the signal-to-noise ratio of the source [15, 16]. With our source, we demonstrate that a good performance can be achieved with commercially available fiber-packaged crystal waveguides allowing a simple setup while accessing fibers to provide a stable source for entanglement-based QKD setups [17]. Many other sources are optimized towards a single purpose. For instance, these use asymmetric filtering in signal/idler paths, fixed pump powers, are not completely fiber-coupled or employ a seed laser at different wavelengths [5, 18, 19, 20]. However, our source can be used for various setups due to its plug-and-play design and high flexibility in terms of continuous wave (cw) to high repetition rate pulsed pumping for SPDC photon pairs. This allows an easy integration of coding modules such as interferometers for QKD protocols like phase-time-coding [17, 21] or adaption for phase-coding [18]. 
Producing photon pairs in the telecom C-band centered at 1550 nm, our source is ideally suited for use in real-world fiber QKD applications as this window offers lowest transmission losses in standard optical fibers. The source is completely fiber-coupled, hence dropping the need for any optical alignment. ## 2 Source Setup The photon source features a modular design with six main components (cf. figure 1(a)). The seed laser is a wavelength-stabilized DFB-laser (Wavelength References Clarity NLL-1550-HP) operating at 1550.51 nm on the edges of two adjacent 100 GHz ITU-DWDM grid channels. The laser is followed by an electro-optic modulator shaping pulses of flexible duration and repetition rate from the cw radiation of the seed laser. Then, the ensuing laser pulses are amplified employing an in-house made erbium-doped fiber amplifier (EDFA). This is followed by type-0 second harmonic generation (SHG) in a periodically poled LiNbO\({}_{3}\) crystal. In the final step, we use type-II or type-0 spontaneous parametric down-conversion (SPDC) in a periodically poled LiNbO\({}_{3}\) crystal waveguide as well. Throughout the setup, several beamsplitters/tap couplers are placed used either for diagnostic purposes or as feedback for power stabilization. Depending on the intended use of the photon pairs we introduce an imbalanced interferometer (IF) between the EDFA and the SHG step which acts as a coding module for QKD applications e.g. employing a phase-time protocol [17]. Notably, by starting with 1550 nm radiation, undergoing SHG followed by SPDC we obtain several advantages: First, commercially available components for telecom wavelengths can be used, hence dropping the need of expensive TiSa lasers at 775 nm. Second, by placing the interferometer in front of the SHG it operates at 1550 nm. Thus, it can be manufactured by the same method and with the same components as receiver interferometers for a phase-time coding setup like [17], drastically simplifying the precise fabrication. This proves especially useful in the future. Figure 1: Setup of the photon pair source. Figure (a) shows the complete source with its modules. The interferometer (IF) in the dashed box depicts the position where coding modules can be placed into the source setup. Figures (b)-(e) show the setup of each module in more detail. Note that there are two different setups for the SPDC-module, a type-II and a type-0 module. Abbreviations: AM - Amplitude Modulator; BS - Beamsplitter; FI - Faraday Isolator; FBG - Fiber Bragg Grating; Pulsgen. - Pulse Generator; Bias - Bias Controller; WDM - Wavelength Division Multiplexer; LP-Filter - Long Pass Filter; PBS - Polarization Beam Splitter; AWG - Arrayed Waveguide Grating; C-B.-Filter - C-Band Filter. these kinds of setups as all interferometers from source and receivers have to be matched in their arm lengths with a precision of a few hundred micrometers [17]. The electro-optic modulator shapes pulses followed by a beamsplitter where a small portion (10 %) of the light is fed to the bias controller. This device delivers a DC offset to the electro-optic modulator to either lock the electro-optic modulator to minimum or maximum transmission. A pulse generator (either HP8131A or HP8133A) generates the electrical pulses to switch the electro-optical modulator to produce optical pulses. If cw operation is required, the pulse generator can be switched off and the bias controller is set to maximum transmission locking. 
Hence, no physical change of the setup is required to switch from pulsed to cw operation. The in-house built double-pass EDFA incorporates fiber Bragg gratings serving as pump and amplified spontaneous emission (ASE) filters achieving a spectrally pure seed output with residual light suppressed by at least 80 dB. The pump laser can deliver a power of up to 900 mW at 976 nm. This leads to a large variety of accessible output powers for various repetition rates and pulse lengths. Thus, average output powers of up to 100 mW at 300 MHz pulse repetition rate with pulse durations of 501 ps have been observed (cf. figure 2). For the SHG stage, the setup uses 5 meters of tightly coiled PM780 fiber following the SHG crystal to filter out residual fundamental light at 1550 nm. When tested with 2 meters of fiber, again, very high suppression of at least 79 dB is reached without causing significant insertion loss (\(<1\) dB) for the SHG light at 775 nm. The SHG crystal Figure 2: Averaged EDFA output power for pulsed operation at different pump power and pulse repetition rates. The pulse shapes are nearly rectangular with estimated FWHM pulse widths of 433 ps for a repetition rate of 10 MHz, 396 ps for 100 MHz and 501 ps for 300 MHz, respectively. is a 34 mm long PPLN Type-0 crystal waveguide from NTT Electronics. The crystal features conversion efficiencies of up to 22 % at only 54 mW averaged fundamental power (equivalent to approximately 360 mW pulse peak power) without using a resonator setup (cf. figure 3). The pulse duty cycle is approximately 15 % which allows SHG pulse peak powers of around 100 mW, when increasing the pump even further. Of course, for photon pair generation aiming for QKD only fundamental powers of less than 10 mW are necessary. The curve of the generated SHG power can be explained considering a simple pump depletion model following [22] and extending it by a constant to factor in the coupling loss at the crystal's facets, \[P_{\mathrm{SHG}}=c^{2}P_{\mathrm{Fundamental}}\cdot\tanh^{2}\left(\sqrt{\eta _{\mathrm{BK}}\cdot c\cdot P_{\mathrm{Fundamental}}}\right) \tag{1}\] with the coupling ratio \(c\), the Boyd-Kleinman conversion efficiency \(\eta_{\mathrm{BK}}\), and the pulse peak power \(P_{\mathrm{Fundamental}}\) calculated from the average fundamental power \(\bar{P}\), the pulse repetition rate \(r\) and pulse length \(\tau\) according to: \[P_{\mathrm{Fundamental}}=\frac{\bar{P}}{r\cdot\tau}. \tag{2}\] For simplicity of the fit model the coupling ratio is set to be equal for both facets. As demonstrated, our setup enables a wide range of available SHG powers to ensure a large accessible region of different mean photon pair numbers per pulse \(\mu\) at the SPDC process. Each SPDC module consists of a PPLN-crystal for photon pair generation followed by two long pass filters to remove the remaining 775 nm light. The subsequent 7.1 nm Figure 3: Averaged output powers of the SHG crystal when pumped with fundamental pulses of 501 ps FWHM at 300 MHz repetition rate. The SHG-power is measured at the 99 % output of the tap coupler (cf. figure 1(d)). band pass (full-width half-maximum) or C-band filter (cf. figure 1(e)) makes sure the photon pairs are within the operation ranges of the photon separation filters to avoid undesired crosstalk between the channels. 
To separate the photons, we employed a polarization beam splitter (PBS) for the type-II process or a standard telecom C-band arrayed-waveguide-grating (AWG) with 100 GHz channel separation for the type-0 process, respectively. A comparison of both crystals or modules can be found in table 1. The crystals are usually operated at 43.56 \({}^{\circ}\)C in case of the type-0 and 41.64 \({}^{\circ}\)C for the type-II crystal, respectively, via package integrated thermo-electric coolers. The temperature is stabilized within a 10 mK range of these temperatures. The complete source is made of fiber-coupled components with the crystals being fiber coupled and packaged as well. The modules are connected via standard optical fibers. These connections allow flexible source configuration e.g. bypassing of the amplitude modulator or integrating interferometers but introduce additional losses because of the fiber connectors. However, this does not degrade the performance of the source as all losses before the SPDC process can be simply mitigated by increasing the pump power of the EDFA. The setup is polarization maintaining until photon separation to ensure a stable pump power at the SPDC crystals in the relevant polarization axis, thus allowing stable photon pair generation rates. Further stabilization of pump power can be added by adapting the driver current of the EDFA via a software-based PID loop using the power measured at the beamsplitter after SHG as feedback. If additional pump power is needed, the plug-and-play design allows an easy integration of further EDFAs. \begin{table} \begin{tabular}{l l l} \hline Property & Type-0 & Type-II \\ \hline manufacturer & NTT Electronics & AdVR \\ material & ZnO:PPLN & MgO:PPLN \\ crystal length & 34 mm & 24 mm \\ fiber coupled & yes & yes \\ est. conversion efficiency & \((4.8\pm 0.2)\times 10^{-7}\) & \((7.6\pm 0.4)\times 10^{-10}\) \\ & (within 7.1 nm band) & \\ & 5.1 THz usable width with & \\ & C-band filter & \\ \hline Filters & longpass & longpass \\ & + C-Band filter & + 7.1 nm bandpass \\ Photon pair separation & wavelength & polarization \\ & (symmetric to center WL) & (slow/fast) \\ \hline \end{tabular} \end{table} Table 1: Comparison of type-0 and type-II SPDC crystals (above horizontal line) and modules (below horizontal line). ### Double-pass type-0 configuration Both, the SHG crystal and the type-0 SPDC crystal are 34-mm long PPLN-Crystals. If the same crystal is used for both SHG and SPDC, the complexity and cost of the photon pair source is considerably reduced. Cascaded SHG and SPDC within a single crystal have been demonstrated [23] utilizing a single-pass configuration. However, this can make filtering of noise photons tedious. While there are setups utilizing a double-pass configuration with a single crystal, these are usually designed such that polarization entangled photons can be produced [24, 25, 13]. As our source does not produce polarization entanglement, we have investigated an operation mode of the source where the type-0 SPDC crystal was used to produce SHG light in forward direction and photon pairs in backward direction. The setup is shown in figure 4. To operate the photon pair source with a single crystal a polarization maintaining circulator is placed in front of the type-0 module (without the C-band filter), sending 1550-nm pump light through the long pass filters into the crystal. After passing a wound fiber the produced 775-nm light is retro-reflected back into the crystal. 
As in the SHG-module a 90:10 beam splitter is introduced to monitor the back-propagating SHG power that now pumps the SPDC process. Finally, the generated photon pairs are guided through a pump light filter followed by the C-band filter before they are distributed via wavelength-division demultiplexing. The pump light filter is used to remove any remaining laser light and consists of two DWDM filters and two fiber Bragg gratings. This filter as well as all other added optical components are connected through standard mating-sleeves, thus rebuilding the source from single-pass to double-pass configuration only requires reconnecting optical fibers. Figure 4: Setup for photon pair production in double-pass configuration. The type-0 SPDC module is used to produce SHG light in the forward direction and photon pairs in the reverse direction. Abbreviations: PMR - Polarization Maintaining Retroreflector; P.Filter - Pump light Filter; C-B.-Filter - C-Band Filter. ## 3 Experimental Details and Results ### Heralding Efficiency An important feature of photon sources is the heralding efficiency, i.e. the probability that an idler photon is extracted when the signal has been detected [26]. We estimated the heralding efficiency of our source for both modules and for different channel combinations using the setup depicted in figure 1(a). Both outputs (for the type-0 module two frequency conjugate outputs of the AWG) are connected to a detection unit each. The photons are detected with two single-photon avalanche detectors (ID Quantique ID220) with set detection efficiencies of 10 % and dead times of 5 \(\mu\)s. The detection events are time-tagged by an ID Quantique ID900 time controller. A pulse repetition rate of 33 MHz was chosen. The heralding efficiency for channel 1 and 2 is given by [27]\(\eta_{1/2}=C_{12}/(\eta_{\rm Det}R_{2/1})\) with the count rates in each channel \(R_{i}\), the coincidence rate between both channels \(C_{12}\) and the detector efficiency \(\eta_{\rm Det}\) to account for the losses due to imperfect photon detection. The actual efficiency for our detectors is 10.6 % and 9.3 % [28]. The type-II module allows heralding efficiencies of up to 36 % at 1.8 pJ per pump pulse. The heralding efficiencies of the type-0 module are lower, reaching up to 11.3 % at best for the channel at 191.7 THz while ranging between 8.7 % and 12.3 % for the other tested DWDM channels at approximately 0.05 pJ per pump pulse. The pump powers were chosen to achieve similar count rates at the detectors for the type-II and type-0 process. The results for all channels are displayed in figure 11a. The large difference between the type-II and type-0 modules can be attributed to the different multiplexing technique accompanied by a higher insertion loss of the AWG. Thus, the heralding efficiency can be significantly improved by using low-loss filters and gratings for photon separation. For the type-II module, we chose a symmetric filter configuration in contrast to [5] allowing both fast and slow axis to achieve similar heralding efficiencies e.g. enabling both axes for use as signal/herald arm. This is important as the source is featured as a photon pair source and not solely a heralded single photon source. We also note that the heralding efficiencies for both modules can be improved by directly splicing the SPDC-crystals' fibers to the subsequent filters instead of using standard fiber connectors. The same holds for the C-band filter in the type-0 module. 
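The heralding-efficiency relation quoted above translates directly into a small helper. The following Python sketch is ours, not part of the published analysis: we read \(\eta_{\mathrm{Det}}\) as the efficiency of the detector in the heralded arm, and the rates in the example call are arbitrary placeholders rather than measured values.

```python
def heralding_efficiencies(r1, r2, c12, eta_det1, eta_det2):
    """Heralding efficiencies following eta_1/2 = C_12 / (eta_Det * R_2/1).

    r1, r2             : singles count rates of channels 1 and 2 (counts/s)
    c12                : coincidence rate between the two channels (counts/s)
    eta_det1, eta_det2 : detection efficiencies of the detectors in channels 1 and 2
    """
    eta1 = c12 / (eta_det1 * r2)  # channel 1 heralded by a click in channel 2
    eta2 = c12 / (eta_det2 * r1)  # channel 2 heralded by a click in channel 1
    return eta1, eta2

# Placeholder example (arbitrary rates; the detector efficiencies are the values quoted above):
print(heralding_efficiencies(r1=40e3, r2=42e3, c12=1.5e3, eta_det1=0.106, eta_det2=0.093))
```

Note that, as discussed below for the brightness estimate, dead-time and afterpulsing effects distort the raw rates, so corrected rates should be used where available.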
The count rates yielded from the heralding efficiency measurements can be utilized to estimate the source brightness following the method from [5] with \[B=\frac{R_{1}R_{2}}{C_{12}P_{\rm pump}t\Delta\lambda}\;. \tag{3}\] This yields brightnesses of \(2.7\times 10^{6}\) pairs/(s\(\cdot\)mW\(\cdot\)nm) for the type-II module (at 1.8 pJ per pulse) and \(3.9\times 10^{8}\) to \(4.4\times 10^{8}\) pairs/(s\(\cdot\)mW\(\cdot\)nm) (at \(0.048-0.056\) pJ per pulse) for the type-0 module, respectively. The values obtained via (3) do not take into account count rate / coincidental rate distortions induced by the detector dead times. This might be negligible for detectors with short dead times compared to the count rates per second such as superconducting-nanowire detectors [29]. For detectors where this is not the case, one detector being in dead time will cause an underestimation of the coincidental rates leading to an overestimation of the brightness. ### Conversion Efficiencies To overcome this issue of imperfect detectors the brightness can be determined by evaluating the crystals' conversion efficiencies with a slightly more sophisticated model. To estimate the conversion efficiencies or the photon pair production rates of our SPDC crystals we employed a setup shown in figure 5. The SHG module was followed by a 90:10 beam splitter to monitor the SHG power impinging on the SPDC crystals. Either one of the type-0 or type-II crystals had been connected to the 90 % output of the beam splitter and was followed by the long pass filters to remove the SHG light. Contrary to the setup of the SPDC modules in figure 1(e) a 7.1 nm band pass filter has been placed into the setup for both types of crystals to allow for a better comparison of the respective conversion efficiencies. For both crystals several measurements at various input (SHG) powers have been conducted. From the timestamps of both detectors a photon pair production rate can be deduced applying equations for the coincidence and count rate statistics following [30] to \[\mu_{\mathrm{gen}}=\gamma\frac{\eta_{\mathrm{pair}}\left(\frac{N_{1}}{T}-r_{1 }\right)\left(\frac{N_{2}}{T}-r_{2}\right)T}{N_{\mu}}\;. \tag{4}\] \(N_{i}\) is the number of photon counts detected at a single detector \(i\) within measurement time \(T\). The quantity \(r_{i}\) is the measured dark count rate of a detector. \(N_{\mu}\) is the number of coincidence counts without accidental counts. The factor \(\gamma\) in (4) is \(\gamma=1\) for deterministic photon pair splitting, e.g. using a PBS for the type-II process, or \(\gamma=1/2\) for probabilistic splitting, e.g. using a 50/50 beam splitter in case of the type-0 process. The factor 1/2 comes from the 50/50 beam splitter guiding both photons into the same channel with 50% probability, so that the measured coincidence rate is halved. The factors \(\eta\) arise from the fact that the losses are not frequency-independent for photon Figure 5: Setup to measure the photon pair production rates and estimate the respective conversion efficiencies of both (type-0/type-II) waveguides. Only one SPDC waveguide is inserted into the measurement setup at a time. Following the filters, the setup is connected to two single-photon avalanche detectors (ID Quantique ID220). Note that for the type-0 waveguide a 7.1 nm-band pass filter was used instead of the C-band filter to allow a better comparison to the type-II process in terms of conversion efficiency. The detection events are time-tagged by an ID Quantique ID900 (not depicted). 
For the type-0 process a 50/50 beamsplitter is used instead of the polarization beam splitter. pair production i.e. \(\eta_{\rm pair}\neq\eta_{i}\eta_{s}\). Due to the spectral correlation of the signal and idler photons, the detection probabilities for signal and idler photons are not independent. Hence, these factors are defined via the given filters' transmission functions \(p(f)\) as \[\eta_{S}=\frac{1}{\Delta F}\int_{F}p_{S}(\nu_{0}+f)df\ \text{and}\ \eta_{I}=\frac{1}{ \Delta F}\int_{F}p_{I}(\nu_{0}-f)df \tag{5}\] \[\eta_{\rm pair}=\frac{1}{\Delta F}\int_{F}p_{S}(\nu_{0}+f)p_{I}(\nu_{0}+f)df \tag{6}\] where \(F\) is a frequency interval with width \(\Delta F\). However, as shown in [28] the detectors are prone to afterpulsing and dead time effects. One detector being in dead time while the other is not, will lead to an underestimation of the ratio between coincidences to single counts. Afterpulsing manifests itself as a detection event causing additional clicks after the dead time ended although no photon was incident on the detector. For the type-II measurement the detectors showed a maximum afterpulsing probability of 5.11 % and 3.43 % respectively. In addition, this considerably distorts the coincidental to single count rate ratio, hence corrupting the estimate of the conversion efficiencies. In order to avoid this detection drawback, post-processing the recorded timestamps to remove such effects from the data has been introduced [31, 32]. As found in [28] the afterpulsing is limited to several tens of \(\mu s\) after the dead time ended. By choosing an extended dead time of \(\tau_{\rm sel}=40\ \mu s\) we post-select the data as depicted in figure 6. In general, a detection event is only considered when outside the detectors deadtime, i.e. when the event was detected after a sufficiently long period of time after the previous event. To evaluate the single count rate of a detector all timestamps within \(\tau_{\rm sel}\) after a previous detection are removed. For coincidences both detectors have to be considered: Here, \(\tau_{\rm sel}\) is applied to both detectors, although an event only has been detected at one Figure 6: Post-selection of the crystal efficiency measurement data to remove afterpulsing and dead time effects. Upper diagram: Correction for individual count rates. Lower diagram: Correction for coincidence rates. If an event has been registered the detector is considered to be not ready for time \(\tau_{\rm sel}\) thus ignoring subsequent events within this timespan. For the coincidence correction a subsequent event is kept if it occurs within the coincidental time window. detector. However, if the second event is within the coincidence time of the previous event, it is still kept as a valid detection. To yield count rates, the number of counts and coincidences of the post-selected data are divided by the total time a detector has been ready. Using our previous results for the photon pair production rates and the post-selected data we have calculated conversion efficiencies for every pump power. The results are depicted in figure 7. An average weighted with the uncertainty of the data points is estimated to state the respective conversion efficiencies. The weighted average is indicated by a horizontal line in figure 7. This corresponds to conversion efficiencies of \((4.8\pm 0.2)\times 10^{-7}\) for the type-0 and \((7.6\pm 0.4)\times 10^{-10}\) for the type-II crystal in terms of photon pairs produced per impinging photon at 775 nm. 
For the type-0 crystal the conversion efficiency is estimated using the 7.1-nm band pass filter only cutting off large parts of the spectrum, as can be seen from the spectrum in the following section. However, this method is convenient as it allows a better comparison to the type-II process and avoids distortion of the measurement introduced by components (e.g. 50/50 coupler) for wavelengths far from the center wavelength. Nevertheless, the conversion efficiency of the type-0 crystal is significantly higher than the estimated value when considering a larger wavelength range. The brightness of our source can be directly calculated from the conversion efficiencies when considering the pump photon flux and the respective spectral widths. For the type-0 spectrum a brightness of \(B=3.32\times 10^{8}\) photon pairs/(s nm mW) is achieved. The type-II module shows a brightness of \(B=2.48\times 10^{6}\) photon pairs/(s nm mW) when taking into account that the full-width-half-maximum of the spectra is estimated to 1.2 nm (cf. figure 10), i.e. much smaller than the 7.1 nm transmission window of the installed filter. Both values are in close vicinity of the previously estimated values from (3), confirming the applicability of our model. For QKD applications the photon pairs will be separated and sent via different channels to several parties, hence a good estimate of the mean photon number produced per pump pulse \(\mu\) per channel is required to control parameters like time-base QBER e.g. in phase-time protocols avoiding multiple photon pairs per detection cycle [17]. For the type-II process, we can separate the photons via polarization into two channels, so the mean photon number per pump pulse per channel can be calculated by \[\mu=\eta_{\rm conv}\cdot N_{\rm Pump} \tag{7}\] with the respective process' conversion efficiency \(\eta_{\rm conv}\) and the number of pump photons per pulse \(N_{\rm Pump}\). For the type-0 process, the formula has to be adapted, as the photon separation occurs via wavelength multiplexing instead of polarization. Hence, the mean photon number per wavelength channel has to be considered which strongly depends on transmission characteristics of the installed photon separation device e.g. an AWG. Thus, it is more convenient to state a spectral conversion efficiency which is \(\eta_{\nu}=6.81\times 10^{-7}\)/THz and calculating the mean photon number from it employing the respective channel width. For further evaluation of the photon transmissions it is important to take into account the spectral dependence of the respective wavelength-multiplexer employing (5) and (6). ### Photon Pair spectra As a next step, we investigated the spectra for both SPDC crystals for various waveguide temperatures. Using the setup depicted in figure 8 each temperature setting has been measured separately. The results have been merged into figures 9 and 10 for better comparison of the tested temperatures. The operating temperatures were chosen in such a way that the type-II process generates degenerate photons while the type-0 process offers a wide, almost flat spectrum in a region of high conversion efficiencies, allowing efficient, controllable QKD operation as well as many accessible ITU-DWDM channels. The latter case is especially important if the source shall be used in a setting with several simultaneous pairwise key distribution as in [17]. 
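To make the mean-photon-number estimates of equation (7) and its spectral variant concrete, a small sketch could read as follows; the pulse energies and the 100 GHz channel width are illustrative assumptions, while the physical constants and the conversion efficiencies are taken from the text:

```python
H = 6.62607015e-34        # Planck constant, J s
C = 2.99792458e8          # speed of light, m/s
LAMBDA_PUMP = 775e-9      # pump wavelength, m

def pump_photons_per_pulse(pulse_energy_j):
    return pulse_energy_j / (H * C / LAMBDA_PUMP)

# Type-II: deterministic splitting by polarization, eq. (7)
eta_conv_type2 = 7.6e-10                                      # pairs per impinging pump photon
mu_type2 = eta_conv_type2 * pump_photons_per_pulse(1.0e-12)   # for an assumed 1 pJ pulse

# Type-0: splitting by wavelength, so use the spectral conversion efficiency
eta_nu = 6.81e-7                         # pairs per pump photon per THz
channel_width_thz = 0.1                  # e.g. one 100 GHz DWDM channel (assumed)
mu_type0 = eta_nu * channel_width_thz * pump_photons_per_pulse(5.0e-14)  # 0.05 pJ pulse

print(f"mu_type2 ~ {mu_type2:.1e}, mu_type0 per channel ~ {mu_type0:.1e}")
```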
To characterize the quantum character of our source, we have estimated the heralded second-order Glauber correlation function \(g_{h}^{(2)}(0)\) for both SPDC processes. This value is a measure for the bunching character of our source, which is \(g_{h}^{(2)}(0)=0\) for an ideal single photon source/heralded photon pair source. Values larger than zero indicate photon pair production with multi-photon events or noise e.g. from non-correlated background photons. SPDC is a statistical process sometimes producing multiple photon pairs per pulse. The larger the pump power of the SPDC process the higher the share of pulses containing multiple photon pairs after the SPDC. As a consequence the choice of the pump power directly affects the minimal \(g^{(2)}(0)\) value a source can achieve. To setup the measurement of \(g_{h}^{(2)}(0)\) one output of the photon Figure 7: SPDC conversion efficiency for various SHG powers impinging on the type-0 waveguide (a) and type-II waveguide (b). The horizontal lines indicate the weighted average conversion efficiencies of the crystals. The large error bar for the first data point in (b) is due to low count rates at the respective pump power. pair source is directly connected to a detector. The other output is connected to a 50/50 beam splitter with both of these outputs being routed to a detector each. In case of the type-II process the outputs are the orthogonal polarization axes, while for the type-0 process pairs of channels with entangled wavelengths have been connected. Measuring the dark count corrected count and coincidental rates, \(R_{i}\) and \(C_{ijk}\), \(g_{h}^{(2)}(0)\) can be obtained by \[g_{h}^{(2)}(0)=\frac{C_{123}R_{3}}{C_{13}C_{23}} \tag{8}\] where \(i=3\) indicates the heralding arm (detector being directly connected to output) [33]. This gives \(g_{h}^{(2)}(0)\approx(6\pm 0.8)\times 10^{-3}\) for the fast axis of the type-II process and \((7.5\pm 0.9)\times 10^{-3}\) for the slow axis (pulse energy 0.91 pJ and 1.06 pJ respectively, equivalent to \(\mu=2.7\times 10^{-3}\) to \(3.1\times 10^{-3}\)). Both values are close to \(g_{h}^{(2)}(0)=0\) displaying the good single photon character of our source. For the type-0 process several channel combinations with pulse energies ranging from \(4.3\times 10^{-2}\) pJ to \(5.5\times 10^{-2}\) pJ (\(\mu=1.5\times 10^{-2}\) to \(1.9\times 10^{-2}\)) have been measured (cf. figure 11b). The observed \(g_{h}^{(2)}(0)\) ranges from \(2.0\times 10^{-2}\) to \(3.6\times 10^{-2}\). We expect a large proportion of this value to be given by multi-photon emissions which are dependent on the pulse energy. As an estimate we calculate the \(g_{h}^{(2)}(0)\) for a Poissonian distribution of multi-pair events. Considering one-photon-pair-events as the main contribution for dual coincidences and two-photon-pair-events for triple coincidences while neglecting higher order terms, \[g^{(2)}(0)=\frac{P(I_{1},I_{2},S)P(S)}{P(I_{1},S)P(I_{2},S)}=\mu\left(2\frac{ \eta_{S}\eta_{I}}{\eta_{\rm pair}}-\eta_{S}T_{S}\right)\;. \tag{9}\] While \(T_{S}\) is the transmission probability of a signal photon, the factors \(\eta\) are defined by transmission functions of the AWG \(p(f)\) following (5) and (6). Consequently, Figure 8: Setup for measurement of the photon pair spectra. Only one SPDC waveguide is inserted into the measurement setup at a time. For the type-0 waveguide, no band pass filter was used allowing measurement of the full spectral width of the generated photon pairs. 
Furthermore, a polarization beam splitter is not required for the type-0 spectrum measurements. The setup uses an Andor Shamrock 500i Czerny-Turner spectrograph with a 600 lines/mm grating, 1900 nm blaze and an Andor iDus InGaAs 1.7 \(\mu\)m CCD detector. The spectrograph setup is placed inside a box cooled down to close to 0 \({}^{\circ}\)C in order to further lower the temperature of the camera chip and allow for single-photon detection. gives a factor of approximately 0.67 for a frequency interval of 130 GHz. Taking into account the various losses on the path for a signal photon (crystal decoupling, filter transmissions, detector efficiency, it follows that \(2\eta_{S}\eta_{I}/\eta_{\rm pair}\gg\eta_{S}T_{S}\), hence \[g_{h}^{(2)}(0)\approx 2\frac{\eta_{S}\eta_{I}}{\eta_{\rm pair}}\mu. \tag{10}\] Employing the spectral \(\mu\) density, found from the estimated conversion efficiency, for 130 GHz one finds the estimates for \(g_{h}^{(2)}(0)\) at our pump energies. The values are depicted in figure 11b. We observe a good agreement of our measured \(g_{h}^{(2)}(0)\) values with the value predicted by the number of multi-pair events. Hence, the emission characteristics of our source can be mostly explained by considering multi-pair emissions. The measurement was repeated with the photon source in double-pass configuration (cf. figure 4). Although the \(g_{h}^{(2)}(0)\) value is independent of losses, the additional losses introduced by the circulator and the pump-filter required a higher pump power in order Figure 9: Photon pair spectra of the type-0 waveguide for various waveguide temperatures. The spectra were linearly interpolated and displayed with Gouraud shading for better visibility. Towards higher temperatures one can see a broadening of the spectrum, while for lower temperatures the spectrum narrows and the photon pair production rates are reduced. The tendency to higher intensity towards the left half of the figure is attributed to a slightly reduced detection efficiency of our spectrograph for higher wavelengths. At temperatures of around 44 \({}^{\circ}\)C the spectrum is basically flat and has a width of around 100 nm. Employing an arrayed waveguide grating, this wide spectrum enables simultaneous QKD with dozens of receivers. to obtain suitable count statistics. This resulted in a higher \(\mu\)-value of around \(4\times 10^{-2}\) (with the exception for the 195.0 THz measurement at \(\mu\approx 2.1\times 10^{-2}\)) and therefore yielded higher \(g_{h}^{(2)}(0)\) values between \(4.3\times 10^{-2}\) and \(6.3\times 10^{-2}\). For both configurations, all values are close to zero and way below the standard threshold for considering a source as a heralded single photon source of \(g_{h}^{(2)}(0)=0.5\). Again, there is a good agreement with the theoretical predictions for multi-pair emissions, when taking into account the measurement uncertainties. To approximate the photon emission statistics of a photon pair source, we consider the spectra of the photon pairs generated by the SPDC process. These are determined by the joint spectral amplitude (JSA) \[\psi(\omega_{\rm s},\omega_{\rm i})=\alpha(\omega_{\rm s}+\omega_{\rm i})\Phi( \omega_{\rm s},\omega_{\rm i}), \tag{11}\] Figure 10: Photon pair spectra of the type-II waveguide for various waveguide temperatures. Depicted is the sum of the fast and slow axis outputs. The spectra were linearly interpolated and displayed with Gouraud shading for better visibility. 
When tuning the temperature of the waveguide, one can clearly observe the expected X-shaped structure, given by a wavelength degenerate state at approximately 42 \({}^{\circ}\)C combined with the crossover for both polarization states at this point. The doubling of the intensity at the intersection compared to the exterior regions arises from adding the rates of the two polarization states generated in the type-II process. The relatively low intensities on the red side of the spectrum at 35 \({}^{\circ}\)C are due to the transmission edge of the 7.1 nm band pass filter. where \(\alpha\) describes the pump pulse distribution and \(\Phi\) is the phase-matching function of the nonlinear crystal and \(\omega_{\mathrm{s}},\omega_{\mathrm{i}}\) are the angular frequencies of the signal and idler photons with wavelengths \(\lambda_{\mathrm{s}},\lambda_{\mathrm{i}}\). Even for pump pulse durations as short as \(400\,\mathrm{ps}\), the bandwidth of the photon pair spectra is approximately two orders of magnitude wider than the pump pulse spectrum. Thus, we may assume that the phase-matching function is constant over the narrow range of \(\omega_{+}=\omega_{\mathrm{s}}+\omega_{\mathrm{i}}\) governed by the pump pulse and solely depends on the difference frequency \(\omega_{-}=\omega_{\mathrm{s}}-\omega_{\mathrm{i}}\). This allows us to reconstruct the JSA from measurements of the pump pulse shape and the SPDC spectra acquired by cw-pumping. The resulting joint spectral density \(|\psi(\omega_{\mathrm{s}},\omega_{\mathrm{i}})|^{2}\) is presented in figure 12 a) for the type-II crystal at the degeneracy temperature and a pump duration of approximately \(400\,\mathrm{ps}\), for which the corresponding spectrum is given in figure 12 b). The shape of the JSA in anti-diagonal \(\lambda_{-}=2\pi c/\omega_{-}\) direction is determined by the phase matching function. For an ideal crystal with length \(L\) and phase mismatch \(\Delta k=[n(\omega_{+})\omega_{+}-n(\omega_{s})\omega_{s}-n(\omega_{\mathrm{i }})\omega_{\mathrm{i}}]/c\), the phase matching function is given by \(\Phi(\omega_{\mathrm{s}},\omega_{\mathrm{i}})=\mathrm{sinc}^{2}[\Delta kL/2]\)[34]. Figure 12 b) shows different sizes of the side lobes at both sides of the central wavelength. Although side lobes are expected from the \(\mathrm{sinc}^{2}\)-function, its symmetry is not met, most likely due to imperfections of the crystal. An important measure to quantify the entanglement between the signal and idler photons is given by the Schmidt number \(K=1/\sum_{j}\lambda_{j}^{2}\), where the Schmidt coefficients \(\lambda\) are given by the expansion coefficients of the Schmidt decomposition \[\psi(\omega_{\mathrm{s}},\omega_{\mathrm{i}})=\sum_{j}\sqrt{\lambda_{j}}u_{j} (\omega_{\mathrm{s}})v_{j}(\omega_{\mathrm{i}}) \tag{12}\] Figure 11: (a) displays the heralding efficiencies for various tested channels for both kinds of SPDC processes. (b) shows the obtained values for the second-order Glauber-correlation. Error-bars for the type-II process are not displayed for visibility, as they are much smaller than the type-0 errors. Triangles depict the theoretical \(g^{2}(0)\) values based on multi-pair events. with sets of orthogonal function \(\{u(\omega_{s})\}\) and \(\{v(\omega_{i})\}\)[35, 36, 37, 38]. Calculating the Schmidt-decomposition of the discretized joint spectral amplitude of the type-II process presented in figure 12 via singular value decomposition [39, 40, 41, 37] yields a Schmidt number of \(K\approx 120\). 
For the type-0 process even larger Schmidt numbers \(K\gg 100\) are obtained, classifying the two-photon states produced by our photon source as highly entangled states compared to the regimes considered for example in [38, 40]. The unheralded second-order Glauber correlation function can be approximated from the number of Schmidt modes via \(g_{u}^{(2)}(0)=1+\frac{1}{K}\)[40, 42]. A thermal photon-number distribution for a single two-mode squeezer, resembled by \(K=1\), gives \(g_{u}^{(2)}(0)=2\), while for an infinite number of two mode squeezers it converges to a Poissonian photon number distribution. Hence, small Schmidt numbers indicate a thermal photon distribution, while large Schmidt numbers display a Poissonian photon emission [38, 40]. Following \(g_{u}^{(2)}(0)=1+\frac{1}{K}\), we obtain \(g_{u}^{(2)}(0)=1+\frac{1}{120}\approx 1.008\) which underlines the Poissonian nature (\(g_{u}^{(2)}(0)=1\)) of our SPDC source. Figure 12: (a) Joint spectral intensity for type-II SPDC at the degeneracy temperature, pumped by a transform-limited pulse of approximately \(400\,\mathrm{ps}\) duration. The pump spectrum was obtained by fast Fourier transform of a temporal pulse shape measurement and is presented in (b), where \(\lambda_{+}=2\pi c/\omega_{+}\). The corresponding Schmidt number amounts to \(K\approx 120\), classifying the generated state as highly entangled with a large number of independent squeezers. ## 4 Discussion and Outlook We demonstrated a versatile alignment-free modular photon-pair source being able to host plug-and-play type-0 and type-II SPDC modules. Intrinsically, the photon separation of the type-II process via polarization allows to connect two receivers for the photons. If one aims for larger QKD networks to demonstrate simultaneous pairwise key exchange with multiple pairs of receivers [17, 43], this can be achieved by adding wavelength-division multiplexing filters at each polarization channel. Another variant would be to employ beamsplitters (e.g. 1x2 50:50 splitters) and implement time-division multiplexing [44]. Of course, the latter comes with a trade-off in terms of key rates when considering pairwise links as the routing of the photons is not deterministic anymore. The type-0 process with its wide spectrum is ideally suited for such types of QKD networks, allowing many pairs of receivers being connected to a single photon pair source. The AWG used in our setup in combination with the C-band filter allows the use of 17 ITU-channel pairs (34 channels of 100 GHz). As the C-band filter offers a usable range of \(\pm 2.55\) THz around the center frequency of our photon pair spectrum one could drastically increase the number up to 102 users by employing a 50 GHz channel spacing AWG while accessing the complete usable C-band filter range. The type-0 spectra is approximately 9.3 THz wide. Hence, replacing the C-band filter can increase the number of WDM-connected users even further. The source has been tested for various operational modes from cw to pulsed operation with several hundred of MHz repetition rates and pulses as short as 400 ps. In principle, even higher repetition rates up to a few GHz and pulse durations down to 150 ps should be applicable as the amplitude modulator is specified for frequencies up to 10 GHz. Hence, largely variable photon pair production rates can be achieved. This is an asset, as there are many types of single photon detectors with different detection efficiencies and dead times. 
If the photon pair production rate does not match the detector properties and experimental details, such as transmission losses e.g. in fiber QKD experiments, the detector can be driven into saturation, thus being in dead time for a large proportion of the measurement time. Considering entanglement-based experiments this will cause a drop of the coincidence rates compared to the count rates of the detectors, considerably reducing the signal-to-noise ratio. This effect can be mitigated when adapting the photon pair production rate with respect to the capabilities of the detectors. Furthermore, the large tunability of our source is especially useful for QKD protocols with interferometers such as phase-time coding. Here, the repetition rate and pulse duration must be chosen with respect to the delay introduced by the interferometers' arm length differences to avoid pulse overlapping. Consequently, our source can be employed in settings with various interferometers without the need to change any hardware. The EDFA and SHG modules generate pump pulses without background from amplified spontaneous emission at a large range of powers enabling a wide range of accessible mean photon numbers per pulse \(\mu\) in the SPDC module. Tuning \(\mu\) can be important in various ways. First, it can affect the detector saturation as described for the photon pair production rate above. Second, in QKD protocols like phase-time coding it does yield a minimum bound for the quantum bit error rate (QBER) of the setup. For our source, as the Schmidt numbers prove, the SPDC is a Poissonian process. Hence, the number of events where not only a single photon pair but multiple photon pairs are emitted from a single pump pulse increases with higher \(\mu\). These extra photon pairs will cause additional clicks in the detectors not being correlated to the clicks caused by the original photon pair, thus increase the QBER. Here, the Poissonian nature of our source is advantageous, as the probability for multi-pair events at low \(\mu\) is lower than for a thermal distribution, as can be shown by evaluating the multi-pair emission probabilities for both types of photon number distributions. Consequently, by tuning \(\mu\) our source does allow to choose between higher raw key rates vs lower QBER, depending on the applicable scenario in such QKD systems. When considering the performance of our source it is important to stress that the crystals are of-the-shelf components with no particular coupling optimization. In terms of performance (e.g. heralding efficiencies, in-source losses of photon pairs) all losses occurring after photon pair production are particularly costly. Thus, an optimized coupling of the SPDC crystals could yield a better source performance. However, this would require to step back from standard products. As an easier point of improvement, we can take into account any fiber-to-fiber connectors. These typically account for 0.3 dB of loss each. We employed such connectors to attach the SPDC crystal's output to the subsequent filters. In case of the type-0 module the C-band filter introduces another fiber connector. This accounts to a total of 1.2 dB \(=2\times 0.6\) dB of avoidable loss for the type-0 module when considering that both photons of a pair experience these losses. To lift the attenuation one could replace the named connection by fiber splices which hardly cause losses for telecom wavelengths standard fibers. ## 5 Conclusion We report on two flexible single photon-pair sources. 
The system consists of spectrally pure generation of 775 nm light, allowing subsequent SPDC modules to generate photon pairs around 1550 nm. The first is based on a type-II SPDC process and the second on a type-0 SPDC process. Their robust design makes them ideal for many applications either as a heralded single-photon source or as a photon pair source as basis e.g. for entanglement based QKD protocols. Both can be operated either in cw or pulsed operation. Furthermore, we have demonstrated that the source can produce photon pairs of high quality with a single crystal operated in a double-pass configuration for both SHG and SPDC. In case of cw operation, the source is ideally suited for phase-coding QKD applications [18]. In pulsed operation, we have demonstrated a small QKD network based on time-bin entanglement [17]. We report on a brightness of \(2.48\times 10^{6}\) photon pairs/(s nm mW) for the type-II process and \(3.32\times 10^{8}\) photon pairs/s nm mW for the type-0 process, with heralding efficiencies up to 36 % and from 9 % to 12 %, respectively. The heralded Glauber correlation function \(g^{(2)}(0)\) takes values of \(<7.5\times 10^{-3}\) and \(<3.6\times 10^{-2}\) for \(\mu\) of \(\approx 3.1\times 10^{-3}\) and \(\approx 1.9\times 10^{-2}\), respectively, showcasing a good photon pair emission behaviour. ## Acknowledgements This research has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), under Grant No. SFB 1119-236615297. We thank Paul Wagner from Deutsche Telekom Technik GmbH for lending us the AWG.
2310.00120
Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs
Memory complexity and data scarcity have so far prohibited learning solution operators of partial differential equations (PDEs) at high resolutions. We address these limitations by introducing a new data efficient and highly parallelizable operator learning approach with reduced memory requirement and better generalization, called multi-grid tensorized neural operator (MG-TFNO). MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena, through a decomposition of both the input domain and the operator's parameter space. Our contributions are threefold: i) we enable parallelization over input samples with a novel multi-grid-based domain decomposition, ii) we represent the parameters of the model in a high-order latent subspace of the Fourier domain, through a global tensor factorization, resulting in an extreme reduction in the number of parameters and improved generalization, and iii) we propose architectural improvements to the backbone FNO. Our approach can be used in any operator learning setting. We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression. The tensorization combined with the domain decomposition, yields over 150x reduction in the number of parameters and 7x reduction in the domain size without losses in accuracy, while slightly enabling parallelism.
Jean Kossaifi, Nikola Kovachki, Kamyar Azizzadenesheli, Anima Anandkumar
2023-09-29T20:18:52Z
http://arxiv.org/abs/2310.00120v1
# Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs ###### Abstract Memory complexity and data scarcity have so far prohibited learning solution operators of partial differential equations (PDE) at high resolutions. We address these limitations by introducing a new data efficient and highly parallelizable operator learning approach with reduced memory requirement and better generalization, called multi-grid tensorized neural operator (MG-TFNO). MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena, through a decomposition of both the input domain and the operator's parameter space. Our contributions are threefold: i) we enable parallelization over input samples with a novel multi-grid-based domain decomposition, ii) we represent the parameters of the model in a high-order latent subspace of the Fourier domain, through a global tensor factorization, resulting in an extreme reduction in the number of parameters and improved generalization, and iii) we propose architectural improvements to the backbone FNO. Our approach can be used in any operator learning setting. We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over \(150\times\) compression. The tensorization combined with the domain decomposition, yields over \(150\times\) reduction in the number of parameters and \(7\times\) reduction in the domain size without losses in accuracy, while slightly enabling parallelism. ## 1 Introduction Real-world scientific computing problems often time require repeatedly solving large-scale and high-resolution partial differential equations (PDEs). For instance, in weather forecasts, large systems of differential equations are solved to forecast the future state of the weather. Due to internal inherent and aleatoric uncertainties, multiple repeated runs are carried out by meteorologists every day to quantify prediction uncertainties. Conventional PDE solvers constitute the mainstream approach used to tackle such computational problems. However, these methods are known to be slow and memory-intensive. They require an immense amount of computing power, are unable to learn and adapt based on observed data, and oftentimes require sophisticated tuning (Slingo and Palmer, 2011; Leutbecher and Palmer, 2008; Blanusa et al., 2022). Neural operators are a new class of models that aim at tackling these challenging problems (Li et al., 2020). They are maps between function spaces whose trained models emulate the solution operators of PDEs (Kovachki et al., 2021). In the context of PDEs, these deep learning models are orders of magnitude faster than conventional solvers, can easily learn from data, can incorporate physically relevant information, and recently enabled solving problems deemed to be unsolvable with the current state of available PDE methodologies (Liu et al., 2022; Li et al., 2021c). Among neural operator models, Fourier neural operators (FNOs), in particular, have seen successful application in scientific computing for the task of learning the solution operator to PDEs as well as in computer vision for classification, in-painting, and segmentation (Li et al., 2021b; Kovachki et al., 2021a; Guibas et al., 2021). By leveraging spectral theory, FNOs have successfully advanced frontiers in weather forecasts, carbon storage, and seismology (Pathak et al., 2022; Wen et al., 2022; Yang et al., 2021). 
While FNOs have shown tremendous speed-up over classical numerical methods, their efficacy can be limited due to the rapid growth in memory needed to represent complex operators. In the worst case, large memory complexity is required and, in fact, is unavoidable due to the need for resolving fine-scale features globally. However, many real-world problems, possess a local structure not currently exploited by neural operator methods. For instance, consider a weather forecast where predictions for the next hour are heavily dependent on the weather conditions in local regions and minimally on global weather conditions. Incorporating and learning this local structure of the underlying PDEs is the key to overcoming the curse of memory complexity. In this work, we propose a new, scalable neural operator that addresses these issues by leveraging the structure in both the domain space and the parameter space, Figure 2. Specifically, we introduce the multi-grid tensor operator (MG-TFNO), a model that exploits locality in physical space by a novel multi-grid domain decomposition approach to compress the input domain size by up to \(7\times\) while leveraging the global interactions of the model parameters to compress them by over \(100\times\) without any loss of accuracy. **In the input space**, to predict the solution in any region of the domain, MG-TFNO decomposes the input domain into small local regions to which hierarchical levels of global information are added in a multi-grid fashion. Since a local prediction depends most strongly on its immediate spatial surroundings, the farther field information is downsampled to lower resolutions, progressively, based on its distance from the region of interest. Thus, MG-TFNO allows parallelization over the input domain as it relies on high-resolution data Figure 1: **Comparison of the performance on the relative \(L^{2}\) and \(H^{1}\) test errors (lower is better) on a log-scale** of our approach, compared with both our improved backbone (_FNO_) and the original FNO, on Navier-Stokes. Our approach enables large compression for both input and parameter, while outperforming regular FNO. only locally and coarse-resolution data globally. Due to its state-of-the-art performance on PDE problems and efficient FFT-based implementation, we use the FNO as the backbone architecture for our method. It is worth noting that the multi-grid approach is readily amendable to neural network settings and, moreover, any other neural operator architecture can be used in place of FNO as a backbone. **In the parameter space**, we exploit the spatiotemporal structure of the underlying PDE solution operator by parameterizing the convolutional weights within the Fourier domain with a low-rank tensor factorization. Specifically, we impose a coupling between all the weights in the Fourier space by jointly parameterizing them with a single tensor, learned in a factorized form such as Tucker or Canonical-Polyadic (Kolda & Bader, 2009). This coupling allows us to limit the number of parameters in the model without limiting its expressivity. On the contrary, this low-rank regularization on the model mitigates over-fitting and improves generalization. Intuitively, our method can be thought of as a fully-learned implicit scheme capable of converging in a small, fixed number of iterations. 
Due to the global nature of the integral kernel transform, the FNO avoids the Courant-Friedrichs-Lewy (CFL) condition plaguing explicit schemes, allowing convergence in only a few steps (Courant et al., 1928). Our weight coupling ensures maximum communication between the steps, mitigating possible redundancies in the learned kernels and reducing the complexity of the optimization landscape. **In summary, we make the following contributions:** * **We propose architectural improvements to the backbone** which we validated through thorough ablations. * **We propose** MG-_TFNO_, a novel neural operator parameterized in the spectral domain by a single low-rank factorized tensor, allowing its size to grow linearly with the size of the problem. * **Our tensor operator achieves better performance with a fraction of the parameters**: we outperform FNO on solving the turbulent Navier Stokes equations with more than \(400\times\) weight compression ratio, Figure 6. * **Our method overfits less and does better in the low-data regime**. In particular, it outperforms FNO with less than half the training samples, Figure 8. Figure 2: **Overview of our approach**. First (left), a multi-grid approach is used to create coarse to fine inputs that capture high-resolution details in a local region while still encoding global context. The resulting regions are fed to a tensorized Fourier operator (middle), the parameters of which are jointly represented in a single latent space via a low-rank tensor factorization (here, a Tucker form). Here \(\mathcal{F}\) denotes Fourier transform. Finally, the outputs (right) are stitched back together to form the full result. Smoothness in the output is ensured via the choice of the loss function. * **We introduce a novel multi-grid domain decomposition approach**, a technique which allows the operator to predict the output only on local portions of the domain, thus reducing the memory usage by an order of magnitude with no performance degradation. * **Combining tensorization with multi-grid domain decomposition leads to MG-TFNO**, which is more efficient in terms of task performance, computation, and memory. MG-TFNO achieves 2.5\(\times\) lower error with 10\(\times\) model weight compression, and 1.8\(\times\) domain compression. * **A unified codebase** to run all configurations and variations of FNO and MG-TFNO will be released, along with the Navier-Stokes data used in this paper. ## 2 Background Here, we review related works and introduce the background necessary to explain our approach. Many physical phenomena are governed by PDEs and a wide range of scientific and engineering computation problems are based on solving these equations. In recent years, a new perspective to PDEs dictates to formulate these problems as machine learning problems where solutions to PDEs are learned. Prior works mainly focused on using neural networks to train for the solution map of PDEs(Guo et al., 2016; Zhu and Zabaras, 2018; Adler and Oktem, 2017; Bhatnagar et al., 2019; Gupta et al., 2021). The use of neural networks in the prior works limits them to a fixed grid and narrows their applicability to PDEs where maps between function spaces are desirable. Multiple attempts have been made to address this limitation. For example mesh free methods are proposed that locally output mesh-free solution (Lu et al., 2019; Esmaeilzadeh et al., 2020), but they are still limited to fixed input gird. 
A new deep learning paradigm, neural operators, are proposed as maps between function spaces (Li et al., 2020; Kovachki et al., 2021). They are discretization invariants maps. The input functions to neural operators can be presented in any discretization, mesh, resolution, or basis. The output functions can be evaluated at any point in the domain. Variants of neural operators deploy a variety of Nystrom approximation to develop new neural operator architecture. Among these, multi-pole neural operators (Li et al., 2020) utilize the multi-pole approach to develop computationally efficient neural operator architecture. Inspired by the spectral method, Fourier-based neural operators show significant applicability in practical applications (Li et al., 2021; Yang et al., 2021; Wen et al., 2022; Rahman et al., 2022), and the architectures have been used in neural networks for vision and text tasks (Guibas et al., 2021; Dao et al., 2022). Principle component analysis and u-shaped methods are also considered (Bhattacharya et al., 2020; Liu et al., 2022; Rahman et al., 2022; Yang et al., 2022). It is also shown that neural operators can solely be trained using PDEs, resulting in physics-informed neural operators, opening new venues for hybrid data and equation methods (Li et al., 2021) to tackle problems in scientific computing. Decomposing the domain in smaller subdomains is at the core of many methods in computational sciences(Chan and Mathew, 1994) and extensively developed in deep learning (Dosovitskiy et al., 2020). Prior deep learning methods on neural networks propose to decompose the input finite dimension vector to multiple patches, accomplish local operations, and aggregate the result of such process in the global sense (Dosovitskiy et al., 2020; Guibas et al., 2021). Such methods do not decompose the output domain and directly predict the entire output vector. In contrast, MG-TFNO works on function spaces, and not only decomposes the input domain, but also decomposes the domain of the output functions, and separately predicts the output at each subdomain. As we move beyond learning from simple structures to solving increasingly complex problems, the data we manipulate becomes more structured. To efficiently manipulate these structures, we need to go beyond matrix algebra and leverage the spatiotemporal structure. For all purposes of this paper, tensors are multi-dimensional arrays and generalize the concept of matrices to more than 2 modes (dimensions). For instance, RGB images are encoded as third-order (three-dimensional) tensors, videos are 4\({}^{\text{th}}\) order tensors and so on and so forth. Tensor methods generalize linear algebraic methods to these higher-order structures. They have been very successful in various applications in computer vision, signal processing, data mining and machine learning (Panagakis et al., 2021; Janzamin et al., 2019; Sidiropoulos et al., 2017; Papalexakis et al., 2016). Using tensor decomposition Kolda & Bader (2009), previous works have been able to compress and improve deep networks for vision tasks. Either a weight matrix is tensorized and factorized Novikov et al. (2015), or tensor decomposition is directly to the convolutional kernels before fine-tuning to recover-for lost accuracy, which also allows for an efficient reparametrization of the network (Lebedev et al., 2015; Kim et al., 2016; Gusak et al., 2019). 
There is a tight link between efficient convolutional blocks and tensor factorization and factorized higher-order structures (Kossaifi et al., 2020). Similar strategies have been applied to multi-task learning (Bulat et al., 2020) and NLP (Papadopoulos et al., 2022; Cordonnier et al., 2020). Of all these prior works, none has been applied to neural operator. In this work, we propose the first application of tensor compression to learning operators and propose a Tensor OPerator (_T_FNO). ## 3 Methodology Here, we briefly review operator learning as well as the Fourier Neural Operator, on which we build to introduce our proposed Tensor OPerator (_T_FNO) as well as the Multi-Grid Domain Decomposition, which together form our proposed MG-_T_FNO. ### Operator Learning Let \(\mathcal{A}:=\{a:D_{\mathcal{A}}\rightarrow\mathbb{R}^{d_{\mathcal{A}}}\}\) and \(\mathcal{U}:=\{u:D_{\mathcal{U}}\rightarrow\mathbb{R}^{d_{\mathcal{U}}}\}\) denote two input and output function spaces respectively. Each function \(a\), in the input function space \(\mathcal{A}\), is a map from a bounded, open set \(D_{\mathcal{A}}\subset\mathbb{R}^{d}\) to the \(d_{\mathcal{A}}\)-dimensional Euclidean space. Any function in the output function space \(\mathcal{U}\) is a map from a bounded open set \(D_{\mathcal{U}}\subset\mathbb{R}^{d}\) to the \(d_{\mathcal{U}^{\prime}}\)-dimensional Euclidean space. In this work we consider the case \(D=D_{\mathcal{A}}=D_{\mathcal{U}}\subset\mathbb{R}^{d}\). We aim to learn an operator \(\mathcal{G}:\mathcal{A}\rightarrow\mathcal{U}\) which is a mapping between the two function spaces. In particular, given a dataset of \(N\) points \(\{(a_{j},u_{j})\}_{j=1}^{N}\), where the pair \((a_{j},u_{j})\) are functions satisfying \(\mathcal{G}(a_{j})=u_{j}\), we build an approximation of the operator \(\mathcal{G}\). As a backbone operator learning model, we use neural operators as they are consistent and universal learners in function spaces. For an overview of theory and implementation, we refer the reader to Kovachki et al. (2021). We specifically use the _F_NO and give details in the forthcoming section (Li et al., 2021). ### Notation We summarize the notation used throughout the paper in Table 1. ### Fourier Neural Operators For simplicity, we will work on the \(d\)-dimensional unit torus \(\mathbb{T}^{d}\) and first describe a single, pre-activation _F_NO layer mapping \(\mathbb{R}^{m}\)-valued functions to \(\mathbb{R}^{n}\)-valued functions. Such a layer constitutes the mapping \(\mathcal{G}:L^{2}(\mathbb{T}^{d};\mathbb{R}^{m})\to L^{2}(\mathbb{T}^{d}; \mathbb{R}^{n})\) defined as \[\mathcal{G}(v)=\mathcal{F}^{-1}\big{(}\mathcal{F}(\kappa)\cdot\mathcal{F}(v) \big{)},\qquad\forall\;v\in L^{2}(\mathbb{T}^{d};\mathbb{R}^{m}) \tag{1}\] where \(\kappa\in L^{2}(\mathbb{T}^{d};\mathbb{R}^{n\times m})\) is a function constituting the layer parameters and \(\mathcal{F},\mathcal{F}^{-1}\) are the Fourier transform and its inverse respectively. The Fourier transform of the function \(\kappa\) is parameterized directly by some fixed number of Fourier nodes denoted \(\alpha\in\mathbb{N}\). To implement equation 1, \(\mathcal{F},\mathcal{F}^{-1}\) are replaced by the discrete fast Fourier transforms \(\hat{\mathcal{F}},\hat{\mathcal{F}}^{-1}\). Let \(\hat{v}\in\mathbb{R}^{s_{1}\times\cdots\times s_{d}\times m}\) denote the evaluation of the function \(v\) on a uniform grid discretizing \(\mathbb{T}^{d}\) with \(s_{j}\in\mathbb{N}\) points in each direction. 
We replace \(\mathcal{F}(\kappa)\) with a weight tensor \(\mathbf{T}\in\mathbb{C}^{s_{1}\times\cdots\times s_{d}\times n\times m}\) consisting of the Fourier modes of \(\kappa\) which are parameters to be learned. To ensure that \(\kappa\) is parameterized as an \(\mathbb{R}^{n\times m}\)-valued function with a fixed, maximum number of wavenumbers \(\alpha<\frac{1}{2}\min\{s_{1},\cdots,s_{d}\}\) that is independent of the discretization of \(\mathbb{T}^{d}\), we leave as learnable parameters only the first \(\alpha\) entries of \(\mathbf{T}\) in each direction and enforce that \(\mathbf{T}\) have conjugate symmetry. In particular, we parameterize half the corners of the \(d\)-dimensional hyperrectangle with \(2^{d-1}\) hypercubes of side length \(\alpha\). That is, \(\mathbf{T}\) is made up of the free-parameter tensors \(\mathbf{\tilde{T}}_{1},\cdots,\mathbf{\tilde{T}}_{2^{d-1}}\in\mathbb{C}^{\alpha\times\cdots\times\alpha\times n\times m}\) situated in half of the corners of \(\mathbf{T}\). Each corner diagonally opposite of a tensor \(\mathbf{\tilde{T}}_{j}\) is assigned the conjugate transpose values of \(\mathbf{\tilde{T}}_{j}\). All other values of \(\mathbf{T}\) are set to zero. This is illustrated in the middle-top part of Figure 2 for the case \(d=2\) with \(\mathbf{\tilde{T}}_{1}\) and \(\mathbf{\tilde{T}}_{2}\). We will use the notation \(\mathbf{T}(k,\cdots)=\mathbf{\tilde{T}}_{k}\) for any \(k\in[2^{d-1}]\). The discrete version of equation 1 then becomes the mapping \(\hat{\mathcal{G}}:\mathbb{R}^{s_{1}\times\cdots\times s_{d}\times m}\to\mathbb{R}^{s_{1}\times\cdots\times s_{d}\times n}\) defined as \[\hat{\mathcal{G}}(\hat{v})=\hat{\mathcal{F}}^{-1}\big{(}\mathbf{T}\cdot\hat{\mathcal{F}}(\hat{v})\big{)},\qquad\forall\;\hat{v}\in\mathbb{R}^{s_{1}\times\cdots\times s_{d}\times m} \tag{2}\] where the \(\cdot\) operation is the matrix-multiplication contraction along the last dimension. Specifically, we have \[\big{(}\mathbf{T}\cdot\hat{\mathcal{F}}(\hat{v})\big{)}(l_{1},\ldots,l_{d},j)=\sum_{i=1}^{m}\mathbf{T}(l_{1},\ldots,l_{d},j,i)\big{(}\hat{\mathcal{F}}(\hat{v})\big{)}(l_{1},\ldots,l_{d},i). \tag{3}\] From equation 2, a full FNO layer is built by adding a point-wise linear action on \(\hat{v}\), a bias term, and applying a non-linear activation. In particular, from an input \(\hat{v}\in\mathbb{R}^{s_{1}\times\cdots\times s_{d}\times m}\), the output \(\hat{q}\in\mathbb{R}^{s_{1}\times\cdots\times s_{d}\times n}\) is given as \[\hat{q}(l_{1},\cdots,l_{d},:)=\sigma\big{(}\mathbf{Q}\hat{v}(l_{1},\cdots,l_{d},:)+\hat{\mathcal{G}}(\hat{v})(l_{1},\cdots,l_{d},:)+b\big{)}\] with \(\sigma:\mathbb{R}\to\mathbb{R}\) a fixed, non-linear activation, and \(b\in\mathbb{R}^{n}\), \(\mathbf{Q}\in\mathbb{R}^{n\times m}\), \(\mathbf{\tilde{T}}_{1},\cdots,\mathbf{\tilde{T}}_{2^{d-1}}\in\mathbb{C}^{\alpha\times\cdots\times\alpha\times n\times m}\) the learnable parameters of the layer. The full FNO model consists of \(L\in\mathbb{N}\) such layers, each with weight tensors \(\mathbf{T}_{1},\cdots,\mathbf{T}_{L}\) that have learnable parameters \(\mathbf{\tilde{T}}_{k}^{(l)}=\mathbf{T}_{l}(k,\cdots)\) for any \(l\in[L]\) and \(k\in[2^{d-1}]\).
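Before turning to the joint parameterization across layers, a minimal sketch of one such layer may help make the construction concrete. The code below is our own illustration, not the released implementation: it fixes \(d=2\), uses PyTorch's real FFT so that conjugate symmetry is handled implicitly rather than by storing symmetric corner blocks, and picks GELU as the activation \(\sigma\) arbitrarily.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Minimal 2D Fourier layer sketch: keep the first `alpha` modes, contract them with a
    complex weight tensor (cf. eq. (2)-(3)), and add a pointwise linear path Q plus bias."""
    def __init__(self, in_channels, out_channels, alpha):
        super().__init__()
        self.alpha = alpha
        scale = 1.0 / (in_channels * out_channels)
        # two corner blocks of Fourier weights (positive/negative frequencies along dim 1)
        self.weight = nn.Parameter(
            scale * torch.randn(2, alpha, alpha, out_channels, in_channels, dtype=torch.cfloat))
        self.Q = nn.Linear(in_channels, out_channels)   # pointwise linear action and bias

    def forward(self, v):                                # v: (batch, s1, s2, in_channels)
        b, s1, s2, m = v.shape
        v_hat = torch.fft.rfft2(v, dim=(1, 2))           # (batch, s1, s2//2 + 1, m), complex
        out_hat = torch.zeros(b, s1, s2 // 2 + 1, self.weight.shape[3],
                              dtype=torch.cfloat, device=v.device)
        a = self.alpha
        out_hat[:, :a, :a] = torch.einsum("bxyi,xyoi->bxyo", v_hat[:, :a, :a], self.weight[0])
        out_hat[:, -a:, :a] = torch.einsum("bxyi,xyoi->bxyo", v_hat[:, -a:, :a], self.weight[1])
        q = torch.fft.irfft2(out_hat, s=(s1, s2), dim=(1, 2))
        return torch.nn.functional.gelu(q + self.Q(v))   # sigma(Q v + G(v) + b)
```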
In the case \(n=m\) for all layers, we introduce the joint parameter tensor \(\mathbf{W}\in\mathbb{C}^{\alpha\times\cdots\times\alpha\times n\times n\times 2^{d-1}L}\) so that \[\mathbf{W}\left(\ldots,2^{d-1}(l-1)+k+1\right)=\mathbf{\tilde{T}}_{k}^{(l)}.\] A perusal of the above discussion reveals that there are \((2^{d}\alpha^{d}+1)mn+n\) total parameters in each FNO layer. Note that, since \(m\) and \(n\) constitute the respective input and output channels of the layer, the number of parameters can quickly explode due to the exponential scaling factor \(2^{d}\alpha^{d}\) if many wavenumbers are kept. Preserving a large number of modes could be crucial for applications where the spectral decay of the input or output functions is slow, such as in image processing or the modeling of multi-scale physics. In the following section, we describe a tensorization method that is able to mitigate this growth without sacrificing approximation power. \begin{table} \begin{tabular}{c c c} \hline \hline **Variable** & **Meaning** & **Dimensionality** \\ \hline **T** & Tensor of weights in the Fourier domain & \(\mathbb{C}^{\alpha\times\cdots\times\alpha\times m\times n}\) \\ **W** & Weight tensor parameterizing the entire operator & \(\mathbb{C}^{\alpha\times\cdots\times\alpha\times n\times n\times 2^{d-1}L}\) \\ \(\mathcal{A}\) & Input function space & Infinite \\ \(\mathcal{U}\) & Output function space & Infinite \\ \(a\) & Input function & Infinite \\ \(u\) & Output function & Infinite \\ \(D_{\mathcal{A}}\) & Domain of function \(a\) & \(d\) \\ \(D_{\mathcal{U}}\) & Domain of function \(u\) & \(d\) \\ \(d_{\mathcal{A}}\) & Dimension of the co-domain of the input functions & 1 \\ \(d_{\mathcal{U}}\) & Dimension of the co-domain of the output functions & 1 \\ \(\mathcal{F}\) & Fourier transform & Infinite \\ \(\mathcal{F}^{-1}\) & Inverse Fourier transform & Infinite \\ \(L\) & Number of integral operation layers & In \(\mathbb{N}\) \\ \(l\) & Layer index & Between 1 and \(L\) \\ \(\sigma\) & Point-wise activation operation & Infinite \\ \(b\) & Bias vector & \(n\) \\ \(v\) & Function at each layer & Infinite \\ \(\alpha\) & Number of kept frequencies in Fourier space & Between 1 and \(\frac{1}{2}\min\{s_{1},\cdots,s_{d}\}\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Table of notation** ### Architectural improvements Our proposed approach uses FNO as a backbone. To improve its performance, we first study various aspects of the Fourier neural architecture and perform thorough ablations to validate each aspect. In particular, we propose improvements to the base architecture that improve performance. Normalization in neural operators. While normalization techniques, such as Batch-Normalization Ioffe and Szegedy (2015), have proven very successful in training neural networks, additional consideration must be given when applying them to neural operators in order to preserve the operator's properties, notably discretization invariance. Specifically, the normalization cannot depend on the spatial variables and therefore has to be either a global or a function-wise normalization. We investigate several configurations using instance normalization Ulyanov et al. (2016) and layer-normalization Ba et al. (2016), in conjunction with the use of preactivation He et al. (2016). Channel mixing. FNO relies on a global convolution realized in the spectral domain. Inspired by previous works, e.g. Guibas et al. (2021), we propose adding an MLP in the _original_ space, after each spectral convolution.
In practice, we found that a two-layer bottleneck MLP works well, e.g. we decrease the co-dimension by half in the first linear layer before restoring it in the second one. Boundary conditions. Fourier neural operators circumvent the limitation of traditional Fourier methods to inputs with periodic boundaries only. This is achieved through a local linear transformation added to the spectral convolution. This can be seen as a linear skip connection. We investigate replacing these with an identity skip-connection and a soft-gated skip-connection Bulat et al. (2020). We also investigate the impact of domain-padding, found by Li et al. (2021) to improve results, especially for non-periodic inputs, and padding for the multi-grid decomposition. We represent in Figure 3 the original FNO architecture (Li et al., 2021), subfigure 3(a), the improved version with double (sequential) skip connections (subfigure 3(b)), and our best architecture, both with and without preactivation (subfigures 3(c) and 3(d), respectively). Figure 3: **Original FNO and Improved Backbone Architecture. The original FNO architecture (Li et al., 2021b) is composed of simply a Spectral Convolution, with a (linear) skip connection to recover high-frequency information and handle non-periodic inputs (3(a)). We improve the architecture as detailed in section 3.4. In particular, we have a version with a double (sequential) skip connection (3(b)), while our best architecture uses nested skip connections, and can be made both with and without preactivation (subfigures 3(c) and 3(d), respectively). The latter, subfigure 3(d), is our best architecture.** ### Tensor Fourier Neural Operators In the previous section, we introduced a unified formulation of FNO where the whole operator is parametrized by a single parameter tensor \(\mathbf{W}\). This enables us to introduce the tensor operator, which efficiently parameterizes \(\mathbf{W}\) with a low-rank tensor factorization. We introduce the method for the case of a Tucker decomposition, for its flexibility. Other decompositions, such as Canonical-Polyadic, can be readily integrated. This joint parametrization has several advantages: it applies a low-rank constraint on the entire tensor \(\mathbf{W}\), thus regularizing the model. These advantages translate into i) a huge reduction in the number of parameters, ii) better generalization and an operator less prone to overfitting (we show superior performance for low-compression ratios, up to \(200\times\), and very little performance degradation when largely compressing, \(>450\times\), the model), and iii) better performance in a low-data regime. In practice, we express \(\mathbf{W}\) in a low-rank factorized form, e.g. Tucker or CP. In the case of a Tucker factorization with rank \((R_{1},\cdots,R_{d},R_{L},R_{I},R_{O})\), where \(R_{L}\) controls the rank across layers, \(R_{I}\) and \(R_{O}\) control the rank across the input and output co-dimension, respectively, and \(R_{1},\cdots,R_{d}\) control the rank across the dimensions of the operator: \[\mathbf{W}=\sum_{r_{1}=1}^{R_{1}}\cdots\sum_{r_{d}=1}^{R_{d}}\sum_{r_{i}=1}^{R_{I}}\sum_{r_{o}=1}^{R_{O}}\sum_{r_{l}=1}^{R_{L}}\mathbf{G}(r_{1},\cdots,r_{d},r_{i},r_{o},r_{l})\cdot\mathbf{U^{(1)}}(:,r_{1})\cdot\cdots\cdot\mathbf{U^{(d)}}(:,r_{d})\cdot\mathbf{U^{(I)}}(:,r_{i})\cdot\mathbf{U^{(O)}}(:,r_{o})\cdot\mathbf{U^{(L)}}(:,r_{l}). \tag{4}\] Here, \(\mathbf{G}\) is the core of size \(R_{L}\times R_{I}\times R_{O}\times R_{1}\times\cdots\times R_{d}\) and \(\mathbf{U^{(L)}},\mathbf{U^{(I)}},\mathbf{U^{(O)}},\mathbf{U^{(1)}},\cdots,\mathbf{U^{(d)}}\) are factor matrices of size \((R_{L}\times L),(R_{I}\times I),(R_{O}\times O),(R_{1}\times\alpha),\cdots,(R_{d}\times\alpha)\), respectively.
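As a rough illustration of equation 4, the following sketch (plain einsum rather than a dedicated tensor library; all sizes and ranks are invented for the example, \(d=2\), and factors are stored with the mode dimension first) holds \(\mathbf{W}\) in Tucker form, materializes it on demand, and reports the resulting parameter count:

```python
import torch

# Illustrative sizes only: d = 2 spatial modes, L layers, co-dimension n, alpha kept modes
L_layers, n, alpha = 4, 32, 12
R1, R2, RI, RO, RL = 6, 6, 16, 16, 4                    # Tucker ranks

core = torch.randn(R1, R2, RI, RO, RL, dtype=torch.cfloat)
U1 = torch.randn(alpha, R1, dtype=torch.cfloat)         # factor along the first Fourier mode
U2 = torch.randn(alpha, R2, dtype=torch.cfloat)         # factor along the second Fourier mode
UI = torch.randn(n, RI, dtype=torch.cfloat)             # input co-dimension
UO = torch.randn(n, RO, dtype=torch.cfloat)             # output co-dimension
UL = torch.randn(2 * L_layers, RL, dtype=torch.cfloat)  # 2^(d-1) * L weight blocks for d = 2

def reconstruct_W():
    # W[x, y, i, o, l] = sum over all ranks of core * factor entries, cf. eq. (4)
    return torch.einsum("abcde,xa,yb,ic,od,le->xyiol", core, U1, U2, UI, UO, UL)

W = reconstruct_W()                                     # shape (alpha, alpha, n, n, 2*L)
T_block = W[..., 0]                                     # weights of the first layer's first corner
dense = W.numel()
factorized = sum(t.numel() for t in (core, U1, U2, UI, UO, UL))
print(f"dense: {dense}, factorized: {factorized}, compression ~{dense / factorized:.0f}x")
```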
Note that the mode (dimension) corresponding to the layers can be left uncompressed by setting \(R_{L}=L\) and \(\mathbf{U^{(L)}}=\mathrm{Id}\). This leads to layerwise compression. Also note that having a rank of 1 along any of the modes would mean that the slices along that mode differ only by a (multiplicative) scaling parameter. During the forward pass, we can pass \(\mathbf{T}\) directly in factorized form to each layer by selecting the corresponding rows in \(\mathbf{U^{(L)}}\). While the contraction in equation 3 can be done using the reconstructed tensor, it can also be done directly by contracting \(\hat{\mathcal{F}}(\hat{v})\) with the factors of the decomposition. For small, adequately chosen ranks, this can result in computational speedups. A visualization of the Tucker decomposition of a third-order tensor can be seen in Figure 4. Note that we can rewrite the entire weight parameter for this Tucker case, equivalently, using the more compact \(n\)-mode product as: \[\mathbf{W}=\mathbf{G}\times_{1}\mathbf{U^{(1)}}\cdots\times_{d}\mathbf{U^{(d)}}\times_{d+1}\mathbf{U^{(I)}}\times_{d+2}\mathbf{U^{(O)}}\times_{d+3}\mathbf{U^{(L)}}.\] We can efficiently perform an iFFT after contraction with the tensorized kernel. For any layer \(l\), the \((j_{1},j_{2})\) coordinate of the matrix-valued convolution function \(\kappa(x)\) is as follows, \[[\kappa_{l}(x)]_{j_{1},j_{2}}=\sum_{i_{1}=1}^{m_{1}}\cdots\sum_{i_{d}=1}^{m_{d}}\sum_{r_{l}=1}^{R_{L}}\sum_{r_{i}=1}^{R_{I}}\sum_{r_{o}=1}^{R_{O}}\sum_{r_{1}=1}^{R_{1}}\cdots\sum_{r_{d}=1}^{R_{d}}\mathbf{G}(r_{1},\cdots,r_{d},r_{i},r_{o},r_{l})\cdot\mathbf{U}^{(\mathbf{1})}(i_{1},r_{1})\cdots\mathbf{U}^{(\mathbf{d})}(i_{d},r_{d})\cdot\mathbf{U}^{(\mathbf{I})}(j_{1},r_{i})\cdot\mathbf{U}^{(\mathbf{O})}(j_{2},r_{o})\cdot\mathbf{U}^{(\mathbf{L})}(l,r_{l})\cdot\exp\big{(}2\pi i\sum_{k=1}^{d}x_{k}i_{k}\big{)}\] Figure 4: **Illustration of a Tucker decomposition.** For clarity, we show \(\mathbf{W}\) as a \(3^{\mathrm{rd}}\)-order tensor weight. This joint factorization along the entire operator allows us to leverage redundancies both locally and across the entire operator. This leads to a large reduction in the memory footprint, with only a fraction of the parameters. It also acts as a low-rank regularizer on the operator, facilitating training. Finally, through this global parametrization, we introduce skip connections that allow gradients to flow through the latent parametrization to all the layers jointly, leading to better optimization. Importantly, this formulation is general and works with any tensor factorization. For instance, we also explore a Canonical-Polyadic decomposition (CP), which can be seen as a special case of Tucker with a super-diagonal core. In that case, we set a single rank \(R\) and express the weights as a weighted sum of \(R\) rank-1 tensors.
Concretely: \[\mathbf{W}=\sum_{r=1}^{R}\lambda_{r}\,\mathbf{U}^{(\mathbf{1})}(:,r)\cdot\cdots\cdot\mathbf{U}^{(\mathbf{d})}(:,r)\cdot\mathbf{U}^{(\mathbf{I})}(:,r)\cdot\mathbf{U}^{(\mathbf{O})}(:,r)\cdot\mathbf{U}^{(\mathbf{L})}(:,r). \tag{5}\] where \(\mathbf{U}^{(\mathbf{L})},\mathbf{U}^{(\mathbf{I})},\mathbf{U}^{(\mathbf{O})},\mathbf{U}^{(\mathbf{1})},\cdots,\mathbf{U}^{(\mathbf{d})}\) are factor matrices of size \((R\times L),(R\times I),(R\times O),(R\times\alpha),\cdots,(R\times\alpha)\), respectively, and \(\lambda\in\mathbb{R}^{R}\). Note that the CP, contrary to the Tucker, has a single rank parameter, shared between all the dimensions. This means that to keep the number of parameters the same, \(R\) needs to be very high, which leads to memory issues. This makes CP more suitable for large compression ratios, and indeed, we found it leads to better performance at high compression / very low rank. In this paper, we also explore the tensor-train decomposition Oseledets (2011). A rank-\((1,R_{1},\cdots,R_{N},R_{I},R_{O},R_{L},1)\) TT factorization expresses \(\mathbf{W}\) as: \[\mathbf{W}(i_{1},\cdots,i_{d},i_{c},i_{o},i_{l})=\mathbf{G}_{1}(i_{1})\times\cdots\times\mathbf{G}_{N}(i_{d})\times\mathbf{G}_{I}(i_{c})\times\mathbf{G}_{O}(i_{o})\times\mathbf{G}_{L}(i_{l}),\] where each factor \(\mathbf{G}_{k}\) of the decomposition is a third-order tensor of size \(R_{k}\times I_{k}\times R_{k+1}\). In the experimental section 4.3, we show results of TFNO trained with a Tucker, TT and CP factorization. Separable Fourier convolution. The proposed tensorization approach introduces a factorization of the weights in the spectral domain. When a CP Kolda & Bader (2009) is used, this induces separability over the learned kernel. We propose to make this separability explicit by not performing any channel mixing in the spectral domain and relying on the MLP introduced above to do so. The separable spectral convolution can be thought of as a depthwise convolution performed in the Fourier domain, e.g. without any channel mixing. The mixing between channels is instead done in the spatial domain. This results in a significant reduction in the number of parameters while having minimal impact on performance (we found it necessary to increase the depth of the network, however, to ensure the network retained enough capacity). Figure 5: **Domain decomposition in space (5(a)) and our Multi-Grid based approach (5(b)). White squares represent the region of interest while yellow squares the larger embeddings.** ### Multi-Grid Domain Decomposition Having introduced our decomposition in the operator's parameter space, we now introduce our novel multi-grid approach to decompose the problem domain. **Domain decomposition** is a method commonly used to parallelize classical solvers for time-dependent PDEs that is based on the principle that the solution for a fixed local region in space depends mostly on the input at the same local region (Chan & Mathew, 1994). In particular, since the time-step \(h>0\) of the numerical integrator is small, the solution \(u(x,t+h)\), for any point \(x\in D\) and \(t\in\mathbb{R}_{+}\), depends most strongly on the points \(u(y,t)\) for all \(y\in B\big{(}x,r(h)\big{)}\) where \(B\big{(}x,r(h)\big{)}\) denotes the ball centered at \(x\) with radius \(r(h)\).
This phenomenon is easily seen for the case of the heat equation where, in one dimension, the solution satisfies \[u(x,t+h) \propto\int_{-\infty}^{\infty}\exp\left(\frac{-(x-y)^{2}}{4h} \right)u(y,t)\,\mathrm{d}y\] \[\approx\int_{x-4h}^{x+4h}\exp\left(\frac{-(x-y)^{2}}{4h}\right)u (y,t)\,\mathrm{d}y\] with the approximation holding since 99.9937% of the kernel's mass is contained within \(B(x,4h)\). While some results exist, there is no general convergence theory for this approach, however, its empirical success has made it popular for various numerical methods (Albin & Bruno, 2011). To exploit this localization, the domain \(D\) is split in \(q\in\mathbb{N}\) pairwise-disjoint regions \(D_{1},\cdots,D_{q}\) so that \(D=\cup_{j=1}^{q}D_{j}\). Each region \(D_{j}\) is then embedded into a larger one \(Z_{j}\supset D_{j}\) so that points away from the center of \(D_{j}\) have enough information to be well approximated. A model can then be trained so that the approximation \(\mathcal{G}(a|_{Z_{j}})|_{D_{j}}\approx u|_{D_{j}}\) holds for all \(j\in[q]\). This idea is illustrated in Figure 5(a) where \(D=[0,1]^{2}\) and all \(D_{j}\), \(Z_{j}\) are differently sized squares. This allows the model to be ran fully in parallel hence its time and memory complexities are reduced linearly in \(q\). **Multi-Grid.** Domain decomposition works well in classical solvers when the time step \(h>0\) is small because the mapping \(u(\cdot,t)\mapsto u(\cdot,t+h)\) is close to the identity. However, the major advancement made by machine learning-based operator methods for PDEs is that a model can approximate the solution, in one shot, for very large times i.e. \(h>1\). But, for larger \(h\), the size of \(Z_{j}\) relative to \(D_{j}\) must increase to obtain the same approximation accuracy, independently of model capacity. This causes any computational savings made by the decomposition approach to be lost. To mitigate this, we propose a multi-grid based domain decomposition approach where global information is added hierarchically at different resolutions. While our approach is inspired by the classical multi-grid method, it is not based on the V-cycle algorithm (McCormick, 1985). For ease of presentation, we describe this concept when a domain \(D=\mathbb{T}^{2}\) is uniformly discretized by \(2^{s}\times 2^{s}\) points, for some \(s\in\mathbb{N}\), but note that generalizations can readily be made. Given a final level \(L\in\mathbb{N}\), we first sub-divide the domain into \(2^{2L}\) total regions each of size \(2^{s-L}\times 2^{s-L}\) and denote them \(D_{1}^{(0)},\cdots,D_{2^{2L}}^{(0)}\). We call this the zeroth level. Then, around each \(D_{j}^{(0)}\), for any \(j\in[2^{2L}]\), we consider the square \(D_{j}^{(1)}\) of size \(2^{s-L+1}\times 2^{s-L+1}\) that is equidistant, in every direction, from each boundary of \(D_{j}^{(0)}\). We then subsample the points in \(D_{j}^{(1)}\) uniformly by a factor of \(\frac{1}{2}\) in each direction, making \(D_{j}^{(1)}\) have \(2^{s-L}\times 2^{s-L}\) points. We call this the first level. We continue this process by considering the squares \(D_{j}^{(2)}\) of size \(2^{s-L+2}\times 2^{s-L+2}\) around each \(D_{j}^{(1)}\) and subsample them uniformly by a factor of \(\frac{1}{4}\) in each direction to again yield squares with \(2^{s-L}\times 2^{s-L}\) points. The process is repeated until the \(L\)th level is reached wherein \(D_{j}^{(L)}\) is the entire domain subsampled by a factor of \(2^{-L}\) in each direction. 
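The level construction just described can be written down in a few lines. The sketch below is illustrative only: the function name, the use of NumPy, and the example sizes are assumptions, and the padding and channel concatenation discussed next are omitted. Each level is a concentric, periodically wrapped window around the level-0 region, subsampled back to the region's resolution.

```python
# Hypothetical sketch (not the paper's code) of the multi-grid level construction
# for a periodic 2D field `a` of shape (2**s, 2**s).
import numpy as np

def multigrid_levels(a, row, col, L):
    """Return the L+1 levels for region (row, col); each level has shape (m, m)."""
    n = a.shape[0]
    m = n // 2**L                      # side length of a level-0 region
    levels = []
    for l in range(L + 1):
        w = m * 2**l                   # window side at level l
        # top-left corner of the window, centred on the level-0 region (periodic wrap)
        r0 = row * m - (w - m) // 2
        c0 = col * m - (w - m) // 2
        window = np.roll(a, shift=(-r0, -c0), axis=(0, 1))[:w, :w]
        levels.append(window[::2**l, ::2**l])   # subsample back to (m, m)
    return levels

# Example: s = 7 (128x128 grid), L = 2 -> 16 regions of 32x32, three levels each.
a = np.random.randn(128, 128)
lvls = multigrid_levels(a, row=1, col=2, L=2)
print([x.shape for x in lvls])          # [(32, 32), (32, 32), (32, 32)]
```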
The process is illustrated for the case \(L=2\) in Figure 5(b). Since we work with the torus, the region of the previous level is always at the center of the current level. The intuition behind this method is that since the dependence of points inside a local region diminishes the further we are from that region, it is enough to have coarser information as we go farther. We combine this multi-grid method with the standard domain decomposition approach by building appropriately padded squares \(Z^{(l)}_{j}\) of size \((2^{s-L}+2p)\times(2^{s-L}+2p)\) around each \(D^{(l)}_{j}\), where \(p\in\mathbb{N}\) is the amount of padding to be added in each direction. Figure 6: **Tensorization: error in logscale as a function of the compression ratio. We compare the tensor neural operator with an FNO with the same number of parameters (_trimmed_). We achieve over 100x compression ratio with better performance than the original FNO.** Figure 7: **MG-Domain Decomposition: error as a function of the domain compression ratio. We compare MG-TFNO with different numbers of multigrid regions both with and without weight tensor compression to a full field FNO model. We achieve over 7x input space compression, 10x parameter space compression ratios and better performance than the original FNO.** We then take the evaluations of the input function \(a\) at each level and concatenate them as channels. In particular, we train a model so that \(\hat{\mathcal{G}}\big{(}a|_{Z^{(0)}_{j}},\cdots,a|_{Z^{(L)}_{j}}\big{)}|_{D^{(0)}_{j}}\approx u|_{D^{(0)}_{j}}.\) Since the model only operates on each padded region separately, we reduce the total number of grid points used from \(2^{2s}\) to \((2^{s-L}+2p)^{2}\) and define the domain compression ratio as the quotient of these numbers. Furthermore, note that, assuming \(a\) is \(\mathbb{R}^{d_{\mathcal{A}}}\)-valued, a model that does not employ our multi-grid domain decomposition uses inputs with \(d_{\mathcal{A}}\) channels while our approach builds inputs with \((L+1)d_{\mathcal{A}}\) channels. In particular, the number of input channels scales only logarithmically in the number of regions, hence global information is added at very little additional cost. Indeed, FNO models are usually trained with internal widths much larger than \(d_{\mathcal{A}}\), hence the extra input channels cause almost no additional memory overhead. ## 4 Experiments In this section, we first introduce the data, experimental setting and implementation details before empirically validating our approach through thorough experiments and ablations. ### Data. We experiment on a dataset of 10K training samples and 2K test samples of the two-dimensional Navier-Stokes equation with Reynolds number 500. We also experiment with the one-dimensional viscous Burgers' equation. **Navier-Stokes.** We consider the vorticity form of the two-dimensional Navier-Stokes equation, \[\begin{split}\partial_{t}\omega+\nabla^{\perp}\phi\cdot\nabla\omega=\frac{1}{\text{Re}}\Delta\omega+f,&\quad x\in\mathbb{T}^{2},\,t\in(0,T]\\ -\Delta\phi=\omega,&\quad\int_{\mathbb{T}^{2}}\phi=0,&\quad x\in\mathbb{T}^{2},\,t\in(0,T]\end{split} \tag{6}\] with initial condition \(\omega(0,\cdot)=0\), where \(\mathbb{T}^{2}\cong[0,2\pi)^{2}\) is the torus, \(f\in\dot{L}^{2}(\mathbb{T}^{2};\mathbb{R})\) is a forcing function, and \(\text{Re}>0\) is the Reynolds number. Then \(\omega(t,\cdot)\in\dot{H}^{s}(\mathbb{T}^{2};\mathbb{R})\), for any \(t\in(0,T]\) and \(s>0\), is the unique weak solution to equation 6 (Temam, 1988).
We consider the non-linear operator mapping \(f\mapsto\omega(T,\cdot)\) with \(T=5\) and fix the Reynolds number \(\text{Re}=500\). We define the Gaussian measure \(\mu=\mathcal{N}(0,C)\) on the forcing functions where we take the covariance \(C=27(-\Delta+9I)^{-4}\), following the setting in (De Hoop et al., 2022). Input data is obtained by generating i.i.d. samples from \(\mu\) by a KL-expansion onto the eigenfunctions of \(C\) (Powell et al., 2014). Solutions to equation 6 are then obtained by a pseudo-spectral scheme (Chandler and Kerswell, 2013). **Burgers' Equation.** We consider the one-dimensional Burgers' equation on the torus, \[\begin{split}\partial_{t}u+uu_{x}&=\nu u_{xx},\qquad x\in\mathbb{T},\,t\in(0,T]\\ u|_{t=0}&=u_{0},\qquad\quad x\in\mathbb{T}\end{split} \tag{7}\] for initial condition \(u_{0}\in L^{2}(\mathbb{T};\mathbb{R})\) and viscosity \(\nu>0\). Then \(u(t,\cdot)\in H^{s}(\mathbb{T};\mathbb{R})\), for any \(t\in\mathbb{R}_{+}\) and \(s>0\), is the unique weak solution to equation 7 (Evans, 2010). We consider the non-linear operator \(u_{0}\mapsto u(T,\cdot)\) with \(T=0.5\) or \(1\) and fix \(\nu=0.01\). We define the Gaussian measure \(\mu=\mathcal{N}(0,C)\) where we take the covariance \(C=3^{5/2}(-\frac{d^{2}}{dx^{2}}+9I)^{-3}\). Input data is obtained by generating i.i.d. samples from \(\mu\) by a KL-expansion onto the eigenfunctions of \(C\). Solutions to equation 7 are then obtained by a pseudo-spectral solver using Heun's method. We use 8K samples for training and 2K for testing. ### Implementation details **Implementation.** We use PyTorch Paszke et al. (2017) for implementing all the models. The tensor operations are implemented using TensorLy Kossaifi et al. (2019) and TensorLy-Torch Kossaifi (2021). Our code was released under the permissive MIT license, as a Python package that is well-tested and comes with extensive documentation, to encourage and facilitate downstream scientific applications. It is available at [https://github.com/neuraloperator/neuraloperator](https://github.com/neuraloperator/neuraloperator). **Hyper-parameters.** We train all models via gradient backpropagation using a mini-batch size of 16, the Adam optimizer, with a learning rate of \(10^{-3}\), weight decay of \(10^{-4}\), for 500 epochs, decreasing the learning rate every 100 epochs by a factor of \(\frac{1}{2}\). The model width is set in all cases to 64 except when specified otherwise (for the Trimmed FNO), meaning that the input was first lifted (with a linear layer) from the number of input channels to that width. The projection layer projects from the width to 256 and a prediction linear layer outputs the predictions. 10000 samples were used for training, as well as a separate set of 2000 samples for testing. All experiments are done on an NVIDIA Tesla V100 GPU. To disentangle the effect of each of our components, the comparisons between the original FNO, the MG-FNO, TFNO, and the MG-TFNO were conducted in the same setting, with a mini-batch size of 32, modes of 42 and 21 for the height and width, respectively, and an operator width of 64. For the comparison between our best models, we use all the modes (64 and 32) and a mini-batch size of 16, which leads to improved performance for all models but longer training times. For each comparison, the same setting and hyper-parameters were used for all models.
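For concreteness, the optimization schedule above corresponds to a standard Adam plus step-decay setup. The sketch below is self-contained but illustrative: the model is a stand-in linear layer and the data are random, so only the optimizer and scheduler settings are taken from the text (the actual training uses the \(H^{1}\) loss described next).

```python
# Sketch of the optimization schedule described above (optimizer/scheduler settings
# from the text; the model and data here are placeholders).
import torch

model = torch.nn.Linear(64, 64)        # stand-in for the neural operator
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)

for epoch in range(500):
    # one illustrative mini-batch of size 16 per "epoch"; real training iterates a loader
    a, u = torch.randn(16, 64), torch.randn(16, 64)
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(a), u)   # the paper trains with an H^1 loss
    loss.backward()
    optimizer.step()
    scheduler.step()
```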
**Training the operator.** Since MG-TFNO predicts local regions which are then stitched together to form a global function without any communication, aliasing effects can occur where one output prediction does not flow smoothly into the next. To prevent this, we train our model using the \(H^{1}\) Sobolev norm (Czarnecki et al., 2017; Li et al., 2021). By matching derivatives, training with this loss prevents any discontinuities from occurring and the output prediction is smooth. ### Experimental results In this section, we compare our approach with both the regular FNO Li et al. (2021) and the Factorized-FNO Tran et al. (2023), which separately applies the FFT along each mode before combining the results. In all cases, our approach achieves superior performance with a fraction of the parameters, as can be seen in Table 2. **Tensorizing: better compression.** In Figure 6, we show the performance of our approach (TNO) compared to the original FNO, for varying compression ratios. In the Trimmed-FNO, we adjust the width in order to match the number of parameters in our TNO. We focus on the width of the network as it was shown to be the most important parameter (De Hoop et al., 2022). Figure 8: **Error as a function of the number of training samples (left) and training vs. testing loss. We compare TFNO with a regular FNO. Note that on the left we show the testing \(L^{2}\) error while, for training, the \(H^{1}\) loss is used and that is compared with the \(H^{1}\) test error on the right. Our approach generalizes better while requiring fewer training samples.** Our method massively outperforms the Trimmed-_FNO_ at every single fixed parameter amount. Furthermore, even for very large compression ratios, our TNO outperforms the full-parameter FNO model. This is likely due to the regularizing effect of the tensor factorization on the weights, showing that many of the parameters in the original model are redundant. **Tensorizing: better generalization.** Figure 8 (left) shows that our TNO generalizes better with fewer training samples. Indeed, at every fixed amount of training samples, the TNO massively outperforms the full-parameter FNO model. Even when only using half the samples, our TNO outperforms the FNO trained on the full dataset. Furthermore, Figure 8 (right) shows that our TNO overfits significantly less than FNO, demonstrating the regularizing effect of the tensor decomposition. This result is invaluable in the PDE setting where very few training samples are typically available due to the high computational cost of traditional PDE solvers. **Multi-Grid Domain Decomposition.** In Table 1, we compare our MG-_TFNO_ with the baseline FNO and the TFNO, respectively. MG-_TFNO_ enables compressing not only the weight tensor but also the input domain. On the other hand, preserving resolution invariance requires padding the patches, which decreases performance, resulting in a tradeoff between input domain compression and prediction accuracy. We also show the impact of multi-grid domain decomposition on performance in Figure 7. We find that lower compression ratios (corresponding to a larger amount of padding in the decomposed regions) perform better, which is unsurprising since more information is incorporated into the model. More surprisingly, we find that using a larger number of regions (16) performs consistently better than using a smaller number (4) and both can outperform the full-field FNO.
This can be due to the fact that: i) the domain decomposition acts as a form of data augmentation, exploiting the translational invariance of the PDE and more regions yield larger amounts of data, and ii) the output space of the model is simplified since a function can have high frequencies globally but may only have low frequencies locally. Consistently, we find that the tensor compression in the weights acts as a regularizer and improves performance across the board. Architectural improvements to the backboneIn addition to the ablation performed on our MG-_TFNO_, we also investigate architectural improvements to the FNO backbone, see Sec 3.4 for details. In particular, we find that, while instance normalization decreases performance, layer normalization helps, especially when used in conjunction with a pre-activation. Adding an MLP similarly improves performance, we found that a bottleneck (expansion factor of 0.5) works well in practice, resulting in an absolute improvement of 0.87% in relative \(L^{2}\) error. We found the ordering of normalization, activation, and weights (including preactivation), did not have a significant impact on performance. Finally, when not using multi-grid domain decomposition, the inputs are periodic and padding is not necessary. In that case, not padding the input improves performance. We use all these improvements for the backbone of the best version of our MG-_TFNO_, Fig 1 where we show that our improved backbone significantly outperforms the original FNO, while our approach significantly outperforms both, with a small fraction of the parameters, opening the door to the application of MG-_TFNO_ to high-resolution problems. \begin{table} \begin{tabular}{l l l l l} \hline \hline **Method** & \(L^{2}\) **test error (\%)** & **\# Params** & **Model CR** & **Input CR** \\ \hline FNO Li et al. (2021b) & 1.34\% & 67M & - & - \\ _FFNO_ Tran et al. (2023) & 1.15 \% & 1M & 67\(\times\) & - \\ \hline TFNO (CP) & 0.29\% & 890K & 75\(\times\) & - \\ TFNO (CP) & 0.47\% & 447K & 150\(\times\) & - \\ \hline MG-_TFNO_ (CP) & 0.49 \% & 447K & 40\(\times\) & 1.9\(\times\) \\ MG-_TFNO_ (Tucker) & 0.42 \% & 447K & 19\(\times\) & 1.9\(\times\) \\ \hline \hline \end{tabular} \end{table} Table 2: **Comparing the performance of MG-_TFNO_ with previous works on Navier-Stokes_. Our method achieves superior performance with a fraction of the parameters while largely compressing the weights (_TFNO_) and the input-domain (_MG-_TFNO_).** ### Ablation studies In this section, we further study the properties of our model through ablation studies. We first look at how TFNO suffers less from overfitting thanks to the low-rank constraints before comparing its performance with various tensor decompositions. Finally, we perform ablation studies for our multi-grid domain decomposition on Burger's equation. #### 4.4.1 Resolution invariance TFNO is resolution invariant, meaning that it can be trained on one resolution and tested on a different one. To illustrate this, we show zero-shot super-resolution results: we trained our best model (Table 1) on images of resolution \(128\times 128\) and tested it on unseen samples at higher resolutions (\(256\times 256\) and \(512\times 512\)), Table 4. As can be seen, our method does as well on unseen, higher-resolution unseen testing samples as it does on the training resolution, confirming the resolution invariance property of our neural operator. 
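The mechanism behind this zero-shot super-resolution can be made explicit with a small sketch of a spectral convolution (illustrative code, not the released implementation; channel counts and mode cutoffs are arbitrary): the learned weights act only on a fixed block of low Fourier modes, so the same parameters apply to inputs discretized at any resolution.

```python
# Minimal resolution-invariant spectral convolution: the weight touches only a fixed
# set of Fourier modes, independent of the input discretization.
import torch

class SpectralConv2d(torch.nn.Module):
    def __init__(self, channels, modes1, modes2):
        super().__init__()
        self.modes1, self.modes2 = modes1, modes2
        scale = 1.0 / channels
        self.weight = torch.nn.Parameter(
            scale * torch.randn(channels, channels, modes1, modes2, dtype=torch.cfloat))

    def forward(self, x):                               # x: (batch, c, h, w)
        x_ft = torch.fft.rfft2(x)                       # (batch, c, h, w//2 + 1)
        out_ft = torch.zeros_like(x_ft)
        m1, m2 = self.modes1, self.modes2
        out_ft[:, :, :m1, :m2] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :m1, :m2], self.weight)
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])

layer = SpectralConv2d(channels=8, modes1=12, modes2=12)
print(layer(torch.randn(2, 8, 64, 64)).shape)    # works at 64x64 ...
print(layer(torch.randn(2, 8, 256, 256)).shape)  # ... and at 256x256 with the same weights
```

Because the parameter count depends only on the number of kept modes and channels, the same trained layer can be evaluated on finer grids without any retraining, which is the property tested in Table 4.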
#### 4.4.2 Training on higher-resolution with Multi-grid One important advantage of our multi-grid domain decomposition is that it enables training much larger models on large inputs by distributing over patches. We demonstrate this, by training on larger resolution (512x512 discretization) and using the largest FNO and TFNO that fits in memory, on a V100 GPU. For the original FNO, this corresponds to a width of 12, first row in table 5. We then compare its performance with the multigrid approach with a neural operator as large as fits into the same V100 GPUs i.e. each width in the table has been optimized to be as large as memory allows. As we can see, our approach allows to fit a larger model and reaches a much lower relative \(L^{2}\) error. #### 4.4.3 Overfitting and Low-Rank Constraint Here, we show that lower ranks (higher compressions) lead to reduced overfitting. In Figure 1, we show the training and testing \(H^{1}\) errors for our TOP with Tucker decomposition at varying compression ratios (2x, 49x and 172x). We can see how, while the test error does not vary much, the gap between training and test errors reduces as we decrease the rank. As we can see, while being the most flexible, Tucker does not \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{\(128\times 128\)} & \multicolumn{2}{c}{\(256\times 256\)} & \multicolumn{2}{c}{\(512\times 512\)} & \multicolumn{2}{c}{\(1024\times 1024\)} \\ \cline{2-10} & \(L^{2}\) **error** & \(H^{1}\) **error** & \(L^{2}\) **error** & \(H^{1}\) **error** & \(L^{2}\) **error** & \(H^{1}\) **error** & \(L^{2}\) **error** & \(H^{1}\) **error** \\ \hline **CP** & \(\text{TFNO}\) & 0.3\% & 0.87\% & 0.3\% & 0.93\% & 0.3\% & 0.93\% & 0.3\% & 0.93\% \\ **CP** & \(\text{MG-TFNO}\) & 0.49\% & 1.2\% & 0.49\% & 1.3\% & 0.49\% & 1.5\% & 0.49\% & 1.6\% \\ \hline \hline \end{tabular} \end{table} Table 4: **Resolution invariance of** TFNO. Since the model is an operator, it is resolution invariant. In particular, here, we trained our model in resolution \(128\times 128\) and test it on unseen samples in various resolutions and show it generalizes, with virtually no loss of performance to higher resolutions unseen during training. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Method** & **Layers** & \(L^{2}\) **test error** & \(H^{1}\) **test error** & **\# Params** & **Model CR** \\ \hline FNO Li et al. (2021b) & 4 & 1.34\% & 3.78\% & 67,142,657 & - \\ FNO Li et al. (2021b) & 6 & 0.90\% & 2.59\% & 100,705,409 & \(0.7\times\) \\ FNO Li et al. (2021b) & 8 & 0.73\% & 2.09\% & 134,268,161 & \(0.5\times\) \\ \hline TFNO (CP) & 4 & 0.47\% & 1.20\% & 447,105 & 150\(\times\) \\ TFNO (CP) & 6 & 0.27\% & 0.74\% & 662,081 & 101\(\times\) \\ TFNO (CP) & 8 & 0.22\% & 0.59\% & 877,057 & 77\(\times\) \\ \hline \hline \end{tabular} \end{table} Table 3: **Impact of our architectural improvements.** perform as well at higher compression ratios. In those extreme cases, CP and Tensor-Train lead to lower errors. #### 4.4.4 Tensor-Train and TOP Our approach is independent of the choice of tensor decomposition. We already showed how Tucker is most flexible and works well across all ranks. We also showed that while memory demanding for high rank, a CP decomposition leads to better performance and low rank. Our method can also be used in conjunction with other decompositions, such as tensor-train. 
To illustrate this, we show the convergence behavior of TNO with a Tensor-Train decomposition for a compression ratio of 178, figure 9(b). We also compare in Table 4.4.4 our TFNO with different tensor decompositions. \begin{table} \begin{tabular}{l l l l} \hline \hline **Method** & \(L^{2}\) **test error** & **\# Params** & **Model CR** \\ \hline FNO Li et al. (2021b) & 1.12\% & 67 M & 0\(\times\) \\ \hline TFNO [Tucker] & 0.37\% & 28 M & 2.3\(\times\) \\ TFNO [CP] & 0.46\% & 808 K & 83\(\times\) \\ TFNO [TT] & 1.18\% & 117 K & 574\(\times\) \\ \hline \hline \end{tabular} \end{table} Table 6: **Relative \(L^{2}\) test error of our M\(G\)-T\(F\)NO approach for different tensor decompositions. We empirically found that Tucker works best for small compression ratio, CP excels at large compression ratio (\(\approx 100\times\)) but becomes computationally heavy for smaller ones. TT tends to be unstable at low-compression ratios but preserves a good performance for extreme compression ratio (\(>500\times\)).** \begin{table} \begin{tabular}{l l l l l} \hline \hline Model & Width & Patches & Padding & \(L^{2}\) error \\ \hline FNO & 12 & 0 & 0 & 6.1 \\ \hline MG-FNO & 42 & 4 & 70 & 2.9 \\ \hline MG-FNO & 66 & 4 & 53 & 2.4 \\ \hline MG-FNO & 88 & 16 & 40 & 1.8 \\ Tucker MG-TFNO & 80 & 16 & 46 & 1.3 \\ \hline \hline \end{tabular} \end{table} Table 5: **Training on 512x512**. Multi-grid domain decomposition allows us to fit larger models into memory by distributing patches in the domain space, thus reaching a lower relative error. Figure 9: **Train/test curve for a TOP-CP (9(a)) and TOP-TT (9(b))** #### 4.4.5 Decomposing domain and weights: Mg-Tfno. Tensorization and multi-grid domain decomposition not only improve performance individually, but their advantages compound and lead to a strictly better algorithm that scales well to higher-resolution data by decreasing the number of parameters in the model as well as the size of the inputs thereby improving performance as well as memory and computational footprint. Table 7 compares FNO with Tensorization alone, multi-grid domain decomposition alone, and our joint approach combining the two, MG-TFNO. In all cases, for \(\alpha\), we keep 40 Fourier coefficients for height and 24 for the width and use an operator width of 64. Our results imply that, under full parallelization, the memory footprint of the model's inference can be reduced by \(7\times\) and the size of its weights by \(10\times\) while also improving performance. Consistently with our other experiments, we find that the tensor compression in the weights acts as a regularizer and improves performance across the board. Our results imply that, under full parallelization, the memory footprint of the model's inference can be reduced by \(7\times\) and its weight size by \(10\times\) while also improving performance. #### 4.4.6 Burgers' Equation We test the efficacy of the standard domain decomposition approach by training on two separate Burgers problems: one with a final time \(T=0.5\) and one with \(T=1\). 
As described in Section 3.6, we expect that \begin{table} \begin{tabular}{l l l l l} \hline \hline **Method** & \(L^{2}\) **test error** & **\# Params** & **Model CR** & **Domain CR** \\ \hline **FNO**(Li et al., 2021b) & 2.54\% & 58 M & 0\(\times\) & 0\(\times\) \\ \hline **TFNO** [Tucker] & 1.39\% & 41 M & 1.5\(\times\) & 0\(\times\) \\ **TFNO** [CP] & 2.24\% & 130 K & **482\(\times\)** & 0\(\times\) \\ \hline **MG-FNO** & 1.43\% & 58 M & 0\(\times\) & 1.4\(\times\) \\ \hline **MG-TFNO** [Tucker] & **0.85\%** & 5.5 M & 10\(\times\) & 1.78\(\times\) \\ **MG-TFNO** [Tucker] & 1.89\% & 5.5 M & 10\(\times\) & **7\(\times\)** \\ \hline \hline \end{tabular} \end{table} Table 7: **Ablation comparing the performance on the relative \(L^{2}\) test error of our MG-TFNO approach, compared with its parts TFNO and MG-FNO and the regular FNO, on Navier-Stokes.** CR stands for compression ratio. Tensorization and multi-grid domain decomposition both individually improve performance while enabling space savings. The two techniques combined lead to further improvements, enabling large compression for both input and parameter, while outperforming regular FNO. Figure 10: **Error on Burgers’ equation with \(T=0.5\) (left) and \(T=1\) (right) as a function of domain compression ratio using standard domain decomposition without our multi-grid approach.** We evaluate the performance of the standard domain decomposition approach. The radius indicates the size, in physical space, of the padding added to each region. for \(T=1\), each region requires more global information thus significantly more padding need to be used in order to reach the same error. The results of Figure 10 indeed confirm this. The domain compression ratios needed for the approach to reach the performance of the full-field model are higher, indicating the need for incorporating global information. These results motivate our multi-grid domain decomposition approach. ## 5 Conclusion In this work, we introduced i) a novel tensor operator (TFNO) as well as a multi-grid domain decomposition approach which together form MG-TFNO, ii) an operator model that outperforms the FNO with a fraction of the parameters and memory complexity requirements, and iii) architectural improvements to the FNO. Our method scales better, generalizes better, and requires fewer training samples to reach the same performance; while the multi-grid domain decomposition enables parallelism over huge inputs. This paves the way to applications on very high-resolution data and in our future work, we plan to deploy MG-TFNO to large-scale weather forecasts for which existing deep learning models are prohibitive.
2303.00018
Entanglement Negativity Transitions in Chaotic Eigenstates
It was recently noted that the entanglement entropy for a subsystem of a chaotic eigenstate exhibits an enhanced correction when the subsystem approaches a phase transition at half the total system size. This enhanced correction was derived for general subsystems by Dong and Wang by summing over noncrossing permutations, which can be thought of as ``saddles'' either in a sum emerging from averaging over Wick contractions or in an analogous gravitational calculation. We extend these results to the case of entanglement negativity, an entanglement measure defined on a bipartite density matrix. We focus on a particular transition previously studied in a toy model of JT gravity, one for which the sum over permutations was found to give similar (or even stronger) enhanced corrections. We derive and resum the relevant permutations to give a form for the averaged negativity spectrum, reproducing the gravitational answer for some quantities and finding tension with other quantities, namely the partially transposed entropy. Along the way, we extend the results of Dong and Wang to the case of $n < 1$ R\'enyi entropy, showing that it always receives volume law corrections.
Sean McBride, Fernando Iniguez
2023-02-28T19:00:04Z
http://arxiv.org/abs/2303.00018v1
# Entanglement Negativity Transitions in Chaotic Eigenstates ###### Abstract It was recently noted that the entanglement entropy for a subsystem of a chaotic eigenstate exhibits an enhanced correction when the subsystem approaches a phase transition at half the total system size. This enhanced correction was derived for general subsystems by Dong and Wang by summing over noncrossing permutations, which can be thought of as "saddles" either in a sum emerging from averaging over Wick contractions or in an analogous gravitational calculation. We extend these results to the case of entanglement negativity, an entanglement measure defined on a bipartite density matrix. We focus on a particular transition previously studied in a toy model of JT gravity, one for which the sum over permutations was found to give similar (or even stronger) enhanced corrections. We derive and resum the relevant permutations to give a form for the averaged negativity spectrum, reproducing the gravitational answer for some quantities and finding tension with other quantities, namely the partially transposed entropy. Along the way, we extend the results of Dong and Wang to the case of \(n<1\) Renyi entropy, showing that it always receives volume law corrections. ## 1 Introduction The application of ideas from quantum chaos to gravitational settings has been particularly fruitful. Gravitational observables have been shown to be well approximated by observables obeying the eigenstate thermalization hypothesis (ETH) [1; 2; 3; 4]. This is due to the fact that a holographic quantum field theory with a semiclassical Einstein gravity dual is expected to be maximally chaotic, i.e. it saturates the bound of [5], up to higher derivative/stringy corrections which take one away from this regime. The power of ETH is that it allows us to approximate observables in the microcanonical ensemble by an observable's long-time quantum expectation value. The resulting microcanonical expectation value should resemble that of the canonical ensemble, up to corrections expected to be suppressed in the system size \(V\) by the thermodynamic ensembles' equivalence at large \(N\). This gives a quantitative idea of the process of thermalization in isolated quantum many-body systems. From the perspective of subsystem ETH [6; 7; 8; 9; 10], for a subsystem with volume fraction \(f<1/2\), the corrections to ETH are suppressed in system size. Formally, this means the trace-norm distance between the canonical and microcanonical density matrices vanishes in the large volume/thermodynamic limit, implying that off-diagonal matrix elements of operators vanishes and expectation values are roughly thermal. This line of thinking is expected to apply to Renyi entropies. When \(f=1/2\) exactly, the usual wisdom would say there's an \(\mathcal{O}(1)\) correction to the Renyi entropies. One way of seeing this is that there exists a phase transition in the Renyi entropy at \(f=1/2\), and the correction at this phase transition should be given by the uncertainty in choosing between an \(\mathcal{O}(1)\) number of equivalent dominant phases. However, in a model-specific result, a correction to the entanglement entropy of \(\mathcal{O}(\sqrt{V})\) was observed in [11], a correction derived in [12] and explicated in [13]. 
In particular, the von Neumann entropy of a subregion \(A\) with volume fraction \(f=1/2\) takes the form \[S_{A}=\frac{S(E)}{2}-\sqrt{\frac{C_{V}}{2\pi}}+\mathcal{O}(a) \tag{1}\] where \(S(E)\) is the thermodynamic entropy at energy \(E\) and \(C_{V}\) is the heat capacity at constant volume. As \(C_{V}\) is extensive in the system size, the correction is "enhanced" to \(\mathcal{O}(\sqrt{V})\). This formula is valid in the large volume limit, where \(\sqrt{V}\gg a\), \(a\) being the area of the splitting surface. In a parallel story, the attempt to match results from tensor networks with the gravitational path integral led to the understanding of "fixed area states" [14; 15]. These states, which are eigenstates of the area operator in semiclassical gravity, have a flat entanglement spectrum, up to fluctuations about a fixed saddle point which can naively be at most \(\mathcal{O}(1)\) in units of \(G_{N}\), where \(G_{N}\ll 1\). One can think of these fluctuations as the difference in the "canonical" ensemble where one fixes the canonical conjugate to the area operator, namely the relative boost between the entanglement wedges of the two sides [16], and a "microcanonical" ensemble where the eigenvalue of the area operator is fixed at its most probable value. It was noted in [13; 17] that the universal enhanced correction to the entanglement entropy also appears in fixed area states near transition, where the "transition" in this context occurs due to a competition between two extremal surfaces. One way of understanding this correction is that near transition we no longer care about fluctuations about a fixed saddle, but instead we care about resumming an infinite number of saddles which appear in the sum over topologies in the replicated geometries. Both of these results match with a more detailed calculation of the same quantity in [18], where in a model of Jackiw-Teitelboim (JT) gravity + end-of-the-world (EOW) branes a particular subsystem entropy \(S(\rho_{R})\) had the form \[S(\rho_{R})=\log k-\sqrt{\frac{2\pi}{\beta}}+\mathcal{O}(\log\beta). \tag{2}\] This \(\sqrt{1/\beta}\) correction is analogous to the \(\sqrt{C_{V}}\) correction in chaotic eigenstates. Here we've set Newton's constant (which is analagous to \(N\)) to one, but it can be restored via \(\beta\to G_{N}\beta\). Recently, it was shown by [19] that similar enhanced corrections exist near transitions in entanglement negativity, a tripartite entanglement measure defined on a bipartite density matrix \(\rho_{A_{1}A_{2}}\). In particular, the logarithmic negativity was shown to have the following form at transition: \[\mathcal{E}(\rho_{R_{1}R_{2}})=\log k_{2}-\frac{\pi^{2}}{8\beta}+\mathcal{O}( \log\beta), \tag{3}\] for two subsystems \(R_{1}\) and \(R_{2}\) with \(R_{1}\cup R_{2}=R\). Further corrections were derived for measures descending from a Renyi version of negativity. There exists a rich phase diagram for entanglement negativity in holographic states, and we show that a similar phase diagram exists for a generic chaotic eigenstate. Our aim is to systematically derive the corrections at transitions in this phase space. There are two possible transitions, but as was explored in [19] we only expect interesting behavior near one of the transitions, for reasons we'll recapitulate in the main text. The outline of the paper is as follows. In section 2 we review the derivation of (1), in particular the resolvent formalism of [13]. 
In section 3 we review the various negativity measures discussed in [19] and the sum over relevant permutations for the phase transition of interest. In section 4 we compute corrections to the entanglement measures of interest. We conclude with some discussion and future directions. ## 2 Diagrammatics for Chaotic Subsystems We first review the formalism of [12; 13], which was used to compute the universal form of corrections to the entanglement entropy of a subsystem at transition. We focus on [13], as their formalism more easily generalizes to our future calculations. Readers familiar with their formalism may skip this section, whose only purpose is to make this work self-contained. A generic eigenstate \(\ket{E}\) of a Hamiltonian \(H\) defined on a bipartite system such that \(\mathcal{H}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}\) can be Schmidt decomposed via \[\ket{E}=\sum_{iJ}M_{iJ}\ket{E_{i}}_{A}\otimes\ket{E_{J}}_{B}, \tag{1}\] where \(\ket{E_{i}}\) and \(\ket{E_{J}}\) denote eigenstates of the subsystem Hamiltonians \(H_{A}\) and \(H_{B}\), respectively. As is convention, we use lowercase indices for states of \(A\) and uppercase indices for states of \(B\). ETH instructs us to think of \(M_{iJ}\) as a Gaussian random variable with zero mean and energy banded with width \(\Delta\)[20; 21]. In particular, for a system with spatial dimension \(d\geq 2\), we have the ansatz \[M_{iJ}=e^{-S(E_{Ai}+E_{BJ})/2}\left(\frac{e^{-\epsilon^{2}/2\Delta^{2}}}{\sqrt{ 2\pi}\Delta}\right)^{1/2}C_{iJ}, \tag{2}\] where \(\epsilon=E_{i}+E_{J}-E\) is the deviation from the total microcanonical energy. When averaged over a small energy band in \(E_{A}\) and \(E_{B}\), the random coefficients \(C_{iJ}\) satisfy \[\overline{C_{iJ}}=0,\quad\overline{C_{iJ}C_{i^{\prime}J^{\prime}}}=\delta_{ii^ {\prime}}\delta_{JJ^{\prime}}. \tag{3}\] The effects of finite \(\Delta\) will not affect the current and future computation, so we work in the limit \(\Delta\to 0\), where we approximate \[M_{iJ}\approx e^{-S(E)/2}C_{iJ}. \tag{4}\] This approximation assumes the true density of states in a narrow energy band is well approximated by the thermodynamic entropy in the canonical ensemble. To leading order in the system volume, we can further approximate the density of states of the total system as the product of the density of states of the subsystems \(A\) and \(B\), evaluated at the subsystem energy \(E_{A}\). In other words, \[S(E)\approx S_{A}(E_{A})+S_{B}(E-E_{A}). \tag{5}\] This leads to the following form for a subsystem density matrix \(\rho_{A}\) \[\rho_{A}=\frac{1}{\mathcal{N}}\sum_{E_{i}-2\Delta<E_{j}<E_{i}+2\Delta}\sum_{E- E_{i}-\Delta<E_{J}<E-E_{i}+\Delta}C_{iJ}C_{jJ}\ket{E_{i}}_{A}\bra{E_{j}}_{A}. \tag{6}\] The double sum takes into account energies in a region of width \(2\Delta\). Averaging over the \(C_{ij}\)'s gives the averaged subsystem density matrix \[\overline{\rho_{A}}=\frac{1}{\mathcal{N}}\sum_{i}d_{B}(E-E_{i})\ket{E_{i}}_{A }\bra{E_{i}}_{A}, \tag{7}\] where the normalization is given by \[\mathcal{N}=\sum_{i}d_{A}(E_{i})d_{B}(E-E_{i}). \tag{8}\] Here \(d_{A}\) and \(d_{B}\) are the degeneracies at a given energy. As a shorthand and as a motivation for our future computation, we can instead write \[d_{A}(E_{i})=e^{S_{A}(E_{i})}\equiv e^{S_{A}};\quad d_{B}(E-E_{i})=e^{S_{B}(E-E_ {i})}\equiv e^{S_{B}}. \tag{9}\] As our goal is to compute subsystem von Neumann entropy, we should proceed by generalizing this procedure to compute \(\operatorname{Tr}\overline{\rho_{A}^{n}}\). 
Before averaging, from (6) we have \[\rho_{A}^{n}=\frac{1}{\mathcal{N}^{n}}\sum_{E_{i_{1}}}\sum_{i_{2},\cdots,i_{n+ 1};J_{1},\cdots,J_{n}}\prod_{m=1}^{n}C_{i_{m}J_{m}}C_{i_{m+1}J_{m}}\left|E_{i_{ 1}}\right\rangle_{A}\left\langle E_{i_{n+1}}\right|_{A}, \tag{10}\] where the second sum is understood to be over a strip of width \(2n\Delta\), but we assume \(\Delta\) vanishes quickly enough at finite \(n\) that this isn't a significant effect. The difference between \(\overline{\log\operatorname{Tr}\left(\rho_{A}\right)^{n}}\) and \(\log\operatorname{Tr}\overline{\left(\rho_{A}\right)^{n}}\) is exponentially suppressed in the system volume [10], so our goal will be to compute \(\operatorname{Tr}\overline{\left(\rho_{A}\right)^{n}}\), as it is a more tractable calculation. This involves a sum over Wick contractions, as we assume higher point connected correlations of the \(C_{ij}\)'s vanish. The result is \[\operatorname{Tr}\overline{\left(\rho_{A}\right)^{n}}=\begin{cases}\frac{1}{ \mathcal{N}^{n}}e^{S_{A}+nS_{B}}{}_{2}F_{1}\left(1-n,-n;2;e^{S_{A}-S_{B}} \right),S_{A}<S_{B}\\ \frac{1}{\mathcal{N}^{n}}e^{nS_{A}+S_{B}}{}_{2}F_{1}\left(1-n,-n;2;e^{S_{B}-S_ {A}}\right),S_{A}>S_{B}.\end{cases} \tag{11}\] As a sanity check, we recover \(S_{n}(\rho_{A})=S/2+\mathcal{O}(1)\) where \(S_{A}=S_{B}=S/2\) at \(f=1/2\). The derivation of this expression from the resolvent sum over noncrossing permutations is given in Appendix B. To study the corrections to this quantity at transition, we upgrade the putative constant density of states to an integral over an energy dependent density of states. In other words, we send \[S_{A}\to S_{A}(E_{A}),\quad S_{B}\to S_{B}(E-E_{A}) \tag{12}\] and integrate over \(E_{A}\). The new averaged trace is given by \[\operatorname{Tr}\overline{\left(\rho_{A}\right)^{n}}=\frac{1}{\mathcal{N}^ {n}}\int dE_{A}e^{S_{A}(E_{A})+S_{B}(E-E_{A})}G_{n}(E_{A}), \tag{13}\] where \(G_{n}(f,E_{A})\) encompasses the \(n\)-dependent piece of the trace: \[G_{n}(E_{A})=\begin{cases}e^{(n-1)S_{B}(E-E_{A})}{}_{2}F_{1}\left(1-n,-n;2;e^ {S_{A}(E_{A})-S_{B}(E-E_{A})}\right),S_{A}(E_{A})<S_{B}(E-E_{A})\\ e^{(n-1)S_{A}(E_{A})}{}_{2}F_{1}\left(1-n,-n;2;e^{S_{B}(E-E_{A})-S_{A}(E_{A})} \right),S_{A}(E_{A})>S_{B}(E-E_{A})\end{cases} \tag{14}\] and the normalization is now \[{\cal N}=\int dE_{A}e^{S_{A}(E_{A})+S_{B}(E-E_{A})}. \tag{15}\] From this, we can directly calculate the ensemble averaged Renyi entropies \(S_{n}(\rho_{A})\): \[\overline{S_{n}}=\frac{1}{1-n}\log\left(\frac{1}{{\cal N}^{n}}\int dE_{A}e^{S_{ A}(E_{A})+S_{B}(E-E_{A})}G_{n}(E_{A})\right). \tag{16}\] ### Saddle Point Analysis We make the ansatz that the entropy is extensive in the subsystem size, that is \[S_{A}(E_{A})=fVs\left(\frac{E_{A}}{fV}\right),\quad S_{B}(E-E_{A})=(1-f)Vs \left(\frac{E-E_{A}}{(1-f)V}\right), \tag{17}\] where \(f\equiv V_{A}/V\) is the volume fraction, \(s(e)\) is the entropy density as a function of the energy density \(e\), and the other factors come from dimensional analysis. We're mainly interested in what happens at the transition \(f=1/2\). The "featureless" or infinite temperature case is when \(s(e)=1\) such that all subsystem entropies are proportional to subsystem volume. We're only interested in the corrections from finite temperature, which can be thought of as the difference between the answer in the canonical ensemble and the microcanonical ensemble. 
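As a quick numerical sanity check of (11): in the featureless limit of constant densities of states, the \(n=2\) case reduces to \(\operatorname{Tr}\overline{\rho_{A}^{2}}=e^{-S_{A}}+e^{-S_{B}}\), and a direct Monte-Carlo average over the Gaussian \(C_{iJ}\) reproduces this. The sketch below is illustrative only; the dimensions are arbitrary choices, and states are normalized per draw, which agrees with the ensemble normalization up to \(e^{-S_{A}-S_{B}}\) corrections.

```python
# Monte-Carlo check of the n = 2 case of (11) in the featureless limit (illustrative).
import numpy as np

rng = np.random.default_rng(0)
dA, dB, trials = 40, 200, 200             # d_A = e^{S_A}, d_B = e^{S_B}

purity = []
for _ in range(trials):
    C = rng.standard_normal((dA, dB))     # the Gaussian coefficients C_{iJ}
    M = C / np.linalg.norm(C)             # normalized state |E>
    rhoA = M @ M.T                        # reduced density matrix on subsystem A
    purity.append(np.trace(rhoA @ rhoA))

print(np.mean(purity))        # ~ 0.030
print(1/dA + 1/dB)            # e^{-S_A} + e^{-S_B} = 0.030
```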
The microcanonical Renyi entropy is the contribution of the global "unaveraged" microcanonical state \(\rho=\sum_{E-\Delta<E_{i}<E+\Delta}\left|E_{i}\right\rangle\left\langle E_{i} \right|:\) \[S_{n}^{MC}=\frac{1}{1-n}\log\left(\frac{1}{{\cal N}^{n}}\int dE_{A}e^{S_{A}(E_ {A})+nS_{B}(E-E_{A})}\right). \tag{18}\] We are interested in the correction away from the dominant microcanonical saddle, so we are interested in computing the following quantity: \[\overline{S_{n}}-S_{n}^{MC}=\frac{1}{1-n}\ln\left(\frac{\int dE_{A}\exp(F_{1}( E_{A}))}{\int dE_{A}\exp(F_{2}(E_{A}))}\right), \tag{19}\] where \(F_{1}(E_{A})\) and \(F_{2}(E_{A})\) are functions defined by \[F_{1}(E_{A}) =fVs\left(\frac{E_{A}}{fV}\right)+(1-f)Vs\left(\frac{E-E_{A}}{(1- f)V}\right)+\ln G_{n}(E_{A})\] \[F_{2}(E_{A}) =fVs\left(\frac{E_{A}}{fV}\right)+n(1-f)Vs\left(\frac{E-E_{A}}{( 1-f)V}\right). \tag{20}\] As both functions scale with volume, we can perform a saddle point analysis. The saddle point equations for these functions are \[s^{\prime}\left(\frac{E_{1}}{fV}\right) =s^{\prime}\left(\frac{E-E_{1}}{(1-f)V}\right)-\frac{G^{\prime}_{ n}(f,E_{1})}{G_{n}(f,E_{1})}\] \[s^{\prime}\left(\frac{E_{2}}{fV}\right) =ns^{\prime}\left(\frac{E-E_{2}}{(1-f)V}\right), \tag{21}\] where \(E_{1}\) and \(E_{2}\) are the saddle point energies of \(F_{1}(E_{A})\) and \(F_{2}(E_{A})\), respectively. The analysis of these saddle point equations was done in totality for \(n>1\) in [13].1 Here we fill in a small gap and study the case of \(n<1\). This will be useful later when we are computing analytic continuations of Renyi negativities below \(n=1\). Footnote 1: See also [22] for a similar study of relative entropy with the same ansatz. ### Corrections at Transition for \(n<1\) \(s(x)\) is a monotonically increasing function of \(x\) with a monotonically decreasing first derivative (take \(s(x)=\sqrt{x}\) as a concrete example). For the case \(n<1\), we can therefore write the iequality \[\frac{E_{2}}{f}>\frac{E-E_{2}}{1-f}, \tag{22}\] which immediately implies \[S_{A}(E_{2})>\frac{f}{1-f}S_{B}(E-E_{2}), \tag{23}\] and therefore \(S_{A}(E_{1})>S_{B}(E-E_{1})\) for \(f>1/2\). The first thing to notice is that there is only one saddle point for both \(F_{1}(E_{A})\) and \(F_{2}(E_{A})\), as \(G_{n}(f,E_{A})\) is now a strictly concave function. The single saddle for \(F_{1}(E_{A})\) depends sensitively on the saddle point of \(G_{n}(f,E_{A})\), which itself only depends on the crossover point between the two hypergeometrics. As the crossover point is completely determined by the \(n\)-independent quantity \[S_{A}(E_{A})-S_{B}(E-E_{A}) \tag{24}\] and the rest of the \(E_{1}\) saddle point equation is independent of \(n\), the full saddle similarly becomes completely independent of \(n\). This should be contrasted with the obviously \(n\)-dependent saddle point of \(F_{2}(E_{A})\). This difference will generically cause the two saddles to differ by an \({\cal O}(1)\) factor, so for all volume fractions we expect the different in Renyi entropies to be volume law: \[\overline{S_{n}}-S_{n}^{MC}={\cal O}(V),\quad n<1. \tag{25}\] Note that this applies for all volume fractions, implying that the \(n<1\) Renyi entropies do not obey the principle of canonical typicality. This clarifies a conceptual point. For \(n\to 1^{+}\), the \(\sqrt{V}\) correction lies in between an exponentially suppressed \({\cal O}(e^{-cV})\) region (\(f<1/2\)) and a strongly enhanced \({\cal O}(V)\) region (\(f>1/2\)). 
Why, then, do we not get a similar enhancement for \(n\to 1^{-}\)? The answer is that the dominant behavior in \(F_{1}(E_{A})\), which previously supplied the emergent "soft mode" for the flat interval between two saddles, becomes independent of the Renyi index. This nonanalyticity might be worrying if one is used to a Renyi entropy analytic in \(n\), but the thermodynamic limit breaks this assumption. The form of these corrections agrees with the analysis of a gravitational model in Appendix C of [19]. ## 3 Entanglement Negativity In this section we compute similar quantities as [13] for entanglement negativity measures. We begin by reviewing some salient properties of entanglement negativity and its utility as a tripartite measure of entanglement before diving into the calculation. ### Review of Negativity Entanglement negativity refers to an entanglement measure based on properties of the partial transpose operation applied to a bipartite density matrix \(\rho_{A_{1}A_{2}}\), defined via \[\left\langle a_{1},a_{2}|\rho_{A_{1}A_{2}}^{T_{A_{2}}}|a_{1}^{\prime},a_{2}^{ \prime}\right\rangle=\langle a_{1},a_{2}^{\prime}|\rho_{A_{1}A_{2}}|a_{1}^{ \prime},a_{2}\rangle \tag{11}\] for basis states \(\{|a_{1}\rangle\}\) in \(A_{1}\) and \(\{|a_{2}\rangle\}\) in \(A_{2}\)[23; 24; 25]. The partial transpose is a positive but not completely positive map, which means some of the eigenvalues of \(\rho_{A_{1}A_{2}}^{T_{A_{2}}}\) (hereafter \(\rho_{A_{1}A_{2}}^{T_{2}}\)) can be negative. Entanglement negativity quantifies the different between the eigenvalues of the partially transposed density matrix and the original density matrix via \[\mathcal{N}(\rho_{A_{1}A_{2}})=\sum_{i}\frac{|\lambda_{i}|-\lambda_{i}}{2}= \sum_{i:\lambda_{i}<0}|\lambda_{i}|. \tag{12}\] As with the von Neumann entropy, there exist Renyi generalizations of entanglement negativity: \[\mathcal{N}_{n}=\mathrm{Tr}\left(\rho_{A_{1}A_{2}}^{T_{2}}\right)^{n}. \tag{13}\] Due to the absolute value, one needs to define two different analytic continuations for even and odd Renyi index \(n\), so there are in fact two Renyi negativities given by \[\mathcal{N}_{2k}^{\mathrm{(even)}} =\sum_{i}|\lambda_{i}|^{2k}\] \[\mathcal{N}_{2k-1}^{\mathrm{(odd)}} =\sum_{i}\mathrm{sgn}\lambda_{i}|\lambda_{i}|^{2k-1} \tag{14}\] for integer \(k\). We define relevant entanglement measures via analytic continuation from these quantities. The most common quantity to talk about is the logarithmic negativity, given via a \(k\to 1/2\) analytic continuation of the even Renyi negativity \[\mathcal{E}(\rho_{A_{1}A_{2}})=\lim_{k\to 1/2}\log\mathcal{N}_{2k}^{(\text{even})}( \rho_{A_{1}A_{2}})=\log\sum_{i}|\lambda_{i}|. \tag{10}\] One other quantity of interest is the partially transposed entropy, also known as the odd entropy, which is related to the \(k\to 1\) analytic continuation of the odd Renyi negativity and is explicitly given by \[S^{T_{2}}\equiv\lim_{k\to 1}\frac{1}{2k-2}\log\mathcal{N}_{2k-1}=-\sum_{i} \lambda_{i}\log|\lambda_{i}|. \tag{11}\] We need to include the Renyi entropy-like singular term out front as \(\mathcal{N}_{1}^{(\text{odd})}=\text{Tr}\,\rho_{A_{1}A_{2}}^{T_{2}}=1\). ### Disorder Averaged Negativity Now we can discuss the disorder average2 in the Gaussian approximation described in the previous section. The Schmidt decomposition of the energy eigenstate \(|E\rangle\) is now Footnote 2: In the condensed matter literature, disorder averaging has a different meaning and what we’re doing should more properly be called “ensemble averaging”. 
Ensemble averaging, however, already has a meaning in the high energy literature, so we stick with the terminology of [13]. \[|E\rangle=\sum_{ijJ}M_{ijJ}\left|E_{i}\right\rangle_{A_{1}}\otimes\left|E_{j}\right\rangle_{A_{2}}\otimes\left|E_{J}\right\rangle_{B}. \tag{12}\] Once again we'll consider \(M_{ijJ}\) as a Gaussian random variable, in particular with \[M_{ijJ}\approx e^{-S(E)/2}C_{ijJ} \tag{13}\] \[\overline{C_{ijJ}}=0,\quad\overline{C_{ijJ}C_{i^{\prime}j^{\prime}J^{\prime}}}=\delta_{ii^{\prime}}\delta_{jj^{\prime}}\delta_{JJ^{\prime}}. \tag{14}\] The partially transposed density matrix is \[\rho_{A_{1}A_{2}}^{T_{2}}=\frac{1}{\mathcal{N}}\sum_{E_{i}E_{j}E_{J}}C_{i_{1}j_{1}J}C_{i_{2}j_{2}J}\left|E_{i_{1}},E_{j_{2}}\right\rangle\left\langle E_{i_{2}},E_{j_{1}}\right|, \tag{15}\] or, by replacing dummy variables, \[\rho_{A_{1}A_{2}}^{T_{2}}=\frac{1}{\mathcal{N}}\sum_{E_{i}E_{j}E_{J}}C_{i_{1}j_{2}J}C_{i_{2}j_{1}J}\left|E_{i_{1}},E_{j_{1}}\right\rangle\left\langle E_{i_{2}},E_{j_{2}}\right|, \tag{16}\] where the sum over energies is understood to be in a window of width \(3\Delta\), though again we take this width to vanish. We also have \[\left(\rho_{A_{1}A_{2}}^{T_{2}}\right)^{n}=\frac{1}{\mathcal{N}^{n}}\sum_{E_{i_{1}},E_{j_{1}}}\sum_{i_{1},\cdots,i_{n}}\sum_{j_{1},\cdots,j_{n},J_{1},\cdots,J_{n}}\prod_{m=1}^{n}C_{i_{m}j_{m+1}J_{m}}C_{i_{m+1}j_{m}J_{m}}\left|E_{i_{1}},E_{j_{1}}\right\rangle\left\langle E_{i_{n+1}},E_{j_{n+1}}\right|. \tag{3.12}\] Note that the partial transpose has made it so the \(i\) (\(A_{1}\)) indices are contracted cyclically, while the \(j\) (\(A_{2}\)) indices are contracted anti-cyclically. The resolvent equation for these Wick contractions is the same as derived in [19], and a more detailed explanation is given in Appendix B. We quote the result here: \[\lambda R(\lambda)=e^{S_{A_{1}}+S_{A_{2}}}+\frac{e^{S_{B}}}{e^{S_{A_{2}}}}\frac{R(\lambda)(1+R(\lambda))}{1-e^{2S_{A_{2}}}R(\lambda)^{2}}. \tag{3.13}\] This resolvent equation furnishes a negativity spectrum described by the phase diagram in Figure 1. Figure 1: Phase diagram of Rényi negativity for various subsystem densities of states. The \(g\)'s label the dominant permutation which appears in the sum over Wick contractions; their exact forms are given in Appendix A. The resolvent equation (3.13) is valid in the regime \(S_{A_{2}}\ll S_{A_{1}}+S_{B}\); we've indicated the forbidden region \(g=X^{-1}\) in a lighter shade. Reproduced with minor alterations from [19]. There are two transitions to consider. The first is when the \(A\) and \(B\) subsystems are the same size, i.e. \(S_{A_{1}}+S_{A_{2}}=S_{B}\), corresponding to the transition from \(g=\mathbb{1}\) to \(g=X\) in the phase diagram. From the calculation in [19], we do not expect any enhanced corrections at this transition, so we don't study it in any detail, though the calculation would presumably follow the same steps. The second transition of interest is when the \(A_{1}\) subsystem is the same size as the combined \(A_{2}B\) subsystem, \(S_{A_{1}}=S_{A_{2}}+S_{B}\), corresponding to the transition from \(g=\tau\) to \(g=X\) in the phase diagram.
In this regime the sum over diagrams is known explicitly, and the disorder averaged partially transposed density matrices are \[\operatorname{Tr}\overline{(\rho_{A_{1}A_{2}}^{T_{2}})^{2k}}=\begin{cases} \frac{1}{\mathcal{N}^{2k}}e^{2k(S_{A_{2}}+S_{B})+S_{A_{1}}}e^{S_{A_{2}}}{}_{2 }F_{1}\left(1-k,-2k;2;e^{S_{A_{1}}-S_{A_{2}}-S_{B}}\right),S_{A_{1}}<S_{A_{2}} +S_{B}\\ \frac{1}{\mathcal{N}^{2k}}e^{2kS_{A_{1}}+S_{A_{2}}+S_{B}}e^{S_{A_{2}}}{}_{2}F_ {1}\left(1-k,-2k;2;e^{S_{A_{2}}+S_{B}-S_{A_{1}}}\right),S_{A_{1}}>S_{A_{2}}+S_ {B}\end{cases} \tag{3.14}\] for even \(n=2k\) and \[\operatorname{Tr}\overline{(\rho_{A_{1}A_{2}}^{T_{2}})^{2k-1}}=\begin{cases} \frac{1}{\mathcal{N}^{2k-1}}e^{(2k-1)(S_{A_{2}}+S_{B})+S_{A_{1}}}{}_{2}F_{1} \left(1-2k,1-k;1;e^{S_{A_{1}}-S_{A_{2}}-S_{B}}\right),S_{A_{1}}<S_{A_{2}}+S_{B }\\ \frac{1}{\mathcal{N}^{2k-1}}e^{(2k-1)S_{A_{1}}+S_{A_{2}}+S_{B}}{}_{2}F_{1} \left(1-2k,1-k;1;e^{S_{A_{2}}+S_{B}-S_{A_{1}}}\right),S_{A_{1}}>S_{A_{2}}+S_{B }\end{cases} \tag{3.15}\] for odd \(n=2k-1\). We give derivations for these formulae in Appendix A; the gist is that we sum over all permutations which lie on a geodesic between two dominant regions in phase space. The permutations on this geodesic can be enumerated, and the previous formulae are functions whose moments reproduce the combinatoric factors for these permutations. ## 4 Negativity Phase Transitions We can use (3.14) and (3.15) to understand the difference between the microcanonical and canonical Renyi negativities in a chaotic eigenstate, using much the same techniques as were used in [13]. We denote by \(f_{A_{1}}\) the volume fraction of \(A_{1}\) such that the naive phase transition happens at \(f_{A_{1}}=1/2\). We also denote the volume fraction of \(A_{2}\) by \(f_{A_{2}}\) and use \(f_{A}=f_{A_{1}}+f_{A_{2}}\) to denote the total volume fraction of system \(A\). We impose energy conservation in all three subsystems, such that our ansatz is for subsystem entropies is \[S_{A_{1}}(E_{A_{1}}) =f_{A_{1}}Vs\left(\frac{E_{A_{1}}}{f_{A_{1}}V}\right)\] \[S_{A_{2}}(E_{A_{2}}) =f_{A_{2}}Vs\left(\frac{E_{A_{2}}}{f_{A_{2}}V}\right)\] \[S_{B}(E-E_{A_{1}}-E_{A_{2}}) =(1-f_{A})Vs\left(\frac{E-E_{A_{1}}-E_{A_{2}}}{(1-f_{A})V}\right). \tag{4.1}\] These again follow from ergodicity and imposing that the subsystem entropy is only a function of the subsystem energy density. ### Even Renyi Negativity We'll start with studying the even Renyi negativities, from which the logarithmic negativity descends. 
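On a small system the quantities defined in this section can be computed exactly by brute force, which provides a useful check on the formulae that follow. The sketch below is illustrative only: the random tripartite state and its dimensions are arbitrary and are not tied to the chaotic ansatz.

```python
# Brute-force partial transpose and negativity measures for a small random
# tripartite pure state (illustrative dimensions).
import numpy as np

rng = np.random.default_rng(1)
d1, d2, dB = 4, 4, 16                               # dims of A1, A2, B

psi = rng.standard_normal((d1, d2, dB))
psi /= np.linalg.norm(psi)
rho = np.einsum('abk,cdk->abcd', psi, psi)          # rho_{A1A2}[a,b; c,d]

rho_T2 = rho.transpose(0, 3, 2, 1)                  # transpose the A2 indices (b <-> d)
lam = np.linalg.eigvalsh(rho_T2.reshape(d1 * d2, d1 * d2))

negativity     = np.abs(lam[lam < 0]).sum()         # sum of |negative eigenvalues|
log_negativity = np.log(np.abs(lam).sum())          # log of the trace norm
renyi_even = lambda k: (np.abs(lam) ** (2 * k)).sum()
renyi_odd  = lambda k: (np.sign(lam) * np.abs(lam) ** (2 * k - 1)).sum()

print(negativity, log_negativity)
print(renyi_odd(1))                                 # = Tr rho^{T_2} = 1
```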
Our expressions for the logarithms of the canonical ensemble and microcanonical ensemble Renyi negativities using our previous ansatzes are as follows: \[\overline{\mathcal{N}_{2k}} =\frac{1}{\mathcal{N}^{2k}}\int dE_{A_{1}}dE_{A_{2}}e^{S_{A_{1}}( E_{A_{1}})+2S_{A_{2}}(E_{A_{2}})+S_{B}(E-E_{A_{1}}-E_{A_{2}})}G_{k}(f_{A_{1}},f_{ A_{2}},E_{A_{1}},E_{A_{2}})\] \[\mathcal{N}_{2k}^{MC} =\frac{1}{\mathcal{N}^{2k}}\int dE_{A_{1}}dE_{A_{2}}e^{S_{A_{1}}( E_{A_{1}})+S_{A_{2}}(E_{A_{2}})+2k(S_{A_{2}}(E_{A_{2}})+S_{B}(E-E_{A_{1}}-E_{A_{2}}))}, \tag{11}\] where the function \(G_{k}(f_{A_{1}},f_{A_{2}},E_{A_{1}},E_{A_{2}})\) is defined as the \(k\)-dependent part of (10) \[G_{k}(f_{A_{1}},f_{A_{2}},E_{A_{1}},E_{A_{2}})=\begin{cases}e^{(2k-1)(S_{A_{2}} +S_{B})}{}_{2}F_{1}\left(1-k,-2k;2;e^{S_{A_{1}}-S_{A_{2}}-S_{B}}\right),S_{A_{1 }}<S_{A_{2}}+S_{B}\\ e^{(2k-1)S_{A_{1}}}{}_{2}F_{1}\left(1-k,-2k;2;e^{S_{A_{2}}+S_{B}-S_{A_{1}}} \right),S_{A_{1}}>S_{A_{2}}+S_{B},\end{cases} \tag{12}\] and \(\mathcal{N}\) (with no other sub/superscripts) is an overall normalization given by \[\mathcal{N}=\int dE_{A_{1}}dE_{A_{2}}e^{S_{A_{1}}(E_{A_{1}})+S_{A_{2}}(E_{A_{2 }})+S_{B}(E-E_{A_{1}}-E_{A_{2}})}. \tag{13}\] Whenever unspecified, the subsystem entropies should now be understood to be valued at the subsystem energies, which we only omit for notational clarity. We write the difference between the logarithms of these quantities as \[\log\overline{\mathcal{N}_{2k}}-\log\mathcal{N}_{2k}^{MC}\equiv\log\left( \frac{\int dE_{A_{1}}dE_{A_{2}}\exp(F_{1}(E_{A_{1}},E_{A_{2}}))}{\int dE_{A_{1 }}dE_{A_{2}}\exp(F_{2}(E_{A_{1}},E_{A_{2}}))}\right), \tag{14}\] where the functions \(F_{1}(E_{A_{1}},E_{A_{2}})\) and \(F_{2}(E_{A_{1}},E_{A_{2}})\) are defined via the corresponding integrands in (11). The strategy will be to find saddle points for \(F_{1}(E_{A_{1}},E_{A_{2}})\) and \(F_{2}(E_{A_{1}},E_{A_{2}})\) and use the relative behavior of those saddle points to determine the scaling of the correction at transition. We have two coupled saddle point equations for each both functions, which are given by \[s^{\prime}\left(\frac{E_{1}^{(1)}}{f_{A_{1}}V}\right) =s^{\prime}\left(\frac{E-E_{1}^{(1)}-E_{1}^{(2)}}{(1-f_{A})V} \right)-\frac{\partial_{E_{A_{1}}}G_{k}(f_{A_{1}},f_{A_{2}},E_{1}^{(1)},E_{1}^ {(2)})}{G_{k}(f_{A_{1}},f_{A_{2}},E_{1}^{(1)},E_{1}^{(2)})}\] \[2s^{\prime}\left(\frac{E_{1}^{(2)}}{f_{A_{2}}V}\right) =s^{\prime}\left(\frac{E-E_{1}^{(1)}-E_{1}^{(2)}}{(1-f_{A})V} \right)-\frac{\partial_{E_{A_{2}}}G_{k}(f_{A_{1}},f_{A_{2}},E_{1}^{(1)},E_{1}^ {(2)})}{G_{k}(f_{A_{1}},f_{A_{2}},E_{1}^{(1)},E_{1}^{(2)})}\] \[s^{\prime}\left(\frac{E_{2}^{(1)}}{f_{A_{1}}V}\right) =2ks^{\prime}\left(\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{(1-f_{A})V}\right)\] \[(2k+1)s^{\prime}\left(\frac{E_{2}^{(2)}}{f_{A_{2}}V}\right) =2ks^{\prime}\left(\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{(1-f_{A})V} \right), \tag{15}\] where the pair \({\cal E}_{1}=(E_{1}^{(1)},E_{1}^{(2)})\) denotes a saddle point for \(F_{1}(E_{A_{1}},E_{A_{2}})\), while \({\cal E}_{2}=(E_{2}^{(1)},E_{2}^{(2)})\) denotes the saddle point for \(F_{2}(E_{A_{1}},E_{A_{2}})\). As \(F_{2}(E_{A_{1}},E_{A_{2}})\) is a strictly concave function, there is only one global maximum. \(F_{1}(E_{A_{1}},E_{A_{2}})\) on the other hand can have two maxima, as \(G_{k}(f_{A_{1}},f_{A_{2}},E_{A_{1}},E_{A_{2}})\) is strictly nonmonotonic. The first thing we have to be careful about is whether we are still within our regime of validity for probing the transition of interest. 
In the case of Renyi entropy, the fact that the dominant contribution comes from noncrossing partitions was assumed to hold for all of parameter space, that is for all values of subsystem entropy. This can be traced back to the fact that the dominant permutations all lie on a single geodesic \(G(\mathbb{1},X)\). In our case, we're trying to probe the transition on one geodesic \(G(\tau,X)\) while suppressing diagrams from other geodesics, which imposes some natural constraints on the size of our subsystems. We are justified in considering only the diagrams from Appendix A if the saddle point energies satisfy the conditions: \[S_{A_{2}}(E_{1,2}^{(2)})<S_{A_{1}}(E_{1,2}^{(1)})+S_{B}(E-E_{1,2}^{(1)}-E_{1,2}^{(2)})\] \[S_{B}(E-E_{1,2}^{(1)}-E_{1,2}^{(2)})<S_{A_{1}}(E_{1,2}^{(1)})+S_{A_{2}}(E_{1,2}^{(2)}), \tag{4.7}\] such that all contributions from subleading permutations remain subleading. We include a rough phase diagram of the allowed region to explore in Figure 2. Figure 2: A schematic plot of regions (shaded in red) in the \(E_{A_{1}}-E_{A_{2}}\) plane where our ansatz for the dominant sum over permutations does not hold. The lines separating the regions will depend sensitively on the form of \(s(e)\) and the volume fractions of the subsystems. If the saddle point lies outside the allowed region, our answer for the dominant sum over permutations no longer holds, so we shouldn't try to explore those regions of phase space. This means that, before attempting to compute corrections at transition for all subsystem volume fractions, we should derive some bounds on the regime of validity of our approximation. We'll make use of the following inequality: \[S_{A_{1}}(E_{A_{1}})+S_{A_{2}}(E_{A_{2}})+S_{B}(E-E_{A_{1}}-E_{A_{2}})\leq Vs\left(\frac{E}{V}\right), \tag{4.8}\] which follows from the fact that our subsystem entropy function \(s(e)\) is concave. Plugging in the saddle points and using the first constraint in (4.7) we can write \[2S_{A_{2}}(E_{2}^{(2)}) <S_{A_{1}}(E_{2}^{(1)})+S_{A_{2}}(E_{2}^{(2)})+S_{B}(E-E_{2}^{(1)}-E_{2}^{(2)})<Vs\left(\frac{E}{V}\right)\] \[\Rightarrow S_{A_{2}}(E_{2}^{(2)}) <\frac{V}{2}s\left(\frac{E}{V}\right). \tag{4.9}\] We can use this relation to find \[S_{A_{2}}(E_{2}^{(2)}) =f_{A_{2}}Vs\left(\frac{E_{2}^{(2)}}{f_{A_{2}}V}\right)>f_{A_{2}}Vs\left(\frac{E_{2}^{(2)}}{V}\right)\] \[\Rightarrow f_{A_{2}}Vs\left(\frac{E_{2}^{(2)}}{V}\right)<\frac{V}{2}s\left(\frac{E}{V}\right), \tag{4.10}\] as \(E_{2}^{(2)}<E\), a constraint on \(f_{A_{2}}\) which makes this true for all subsystem entropy densities is \[f_{A_{2}}<1/2. \tag{4.11}\] Therefore our calculations are only valid when subsystem \(A_{2}\) is less than half of the total system size. We can find a similar inequality on \(S_{B}\) using the second constraint in (4.7). We have \[2S_{B}(E-E_{2}^{(1)}-E_{2}^{(2)}) <S_{A_{1}}(E_{2}^{(1)})+S_{A_{2}}(E_{2}^{(2)})+S_{B}(E-E_{2}^{(1)}-E_{2}^{(2)})\leq Vs\left(\frac{E}{V}\right)\] \[\Rightarrow S_{B}(E-E_{2}^{(1)}-E_{2}^{(2)}) <\frac{V}{2}s\left(\frac{E}{V}\right). \tag{4.12}\] We can therefore write \[S_{B}(E-E_{2}^{(1)}-E_{2}^{(2)}) =(1-f_{A})Vs\left(\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{(1-f_{A})V}\right)>(1-f_{A})Vs\left(\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{V}\right)\] \[\Rightarrow(1-f_{A})Vs\left(\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{V}\right) <\frac{V}{2}s\left(\frac{E}{V}\right). \tag{4.13}\] Again, a result that makes this inequality true for all saddle point energies is \[f_{A}>1/2.
\tag{4.14}\] This ties together a nice family of restrictions: both subsystems \(A_{2}\) and \(B\) have to have volume fraction less than half of the system. We illustrate these constraints in Figure 3. This makes some sense, as we want to probe transitions dominated by the behavior of \(A_{1}\) relative to the rest of the system. Another way of seeing that there should be a restricted regime for our procedure is as follows: entanglement negativity is agnostic as to which subsystem \(A_{1}\) or \(A_{2}\) one applies the partial transpose to. This would of course result in an averaged density matrix trace symmetric under exchange of \(S_{A_{1}}\) and \(S_{A_{2}}\), which our expressions (3.14) and (3.15) are not. However, by writing a resolvent equation valid only in a certain parameter regime, we can no longer comfortably integrate over all energies. This is an important point because the deviations from the featureless case can in principle be of order the system size, and so corrections are not necessarily perturbative as they were assumed to be in [19]. We can, however, be comfortable in the validity of our calculation if the saddle points for \(F_{1}(E_{A_{1}},E_{A_{2}})\) and \(F_{2}(E_{A_{1}},E_{A_{2}})\) obey the conditions above, so restricting to the set of entropy functions which satisfy (4.7), let's first look at the saddle point equations for \({\cal E}_{2}\). Figure 3: Excluded volume fractions from our analysis of the cyclic to pairwise phase transition. Describing the colored “forbidden” regions would require a sum over permutations we assert to be subdominant. Setting the third and fourth equations equal yields \[s^{\prime}\left(\frac{E_{2}^{(1)}}{f_{A_{1}}V}\right)=(2k+1)s^{\prime}\left(\frac{E_{2}^{(2)}}{f_{A_{2}}V}\right). \tag{4.15}\] As \(s^{\prime}(e)\) is a monotonically decreasing function, for all \(k>0\) we have the inequality \[E_{2}^{(2)}>\frac{f_{A_{2}}}{f_{A_{1}}}E_{2}^{(1)}. \tag{4.16}\] We can use this inequality to write a simple inequality on \(E_{2}^{(1)}\) by rewriting the \(E_{2}^{(1)}\) saddle point equation as \[s^{\prime}\left(\frac{E_{2}^{(1)}}{f_{A_{1}}V}\right)>2ks^{\prime}\left(\frac{E-\frac{f_{A}}{f_{A_{1}}}E_{2}^{(1)}}{(1-f_{A})V}\right). \tag{4.17}\] Now we have an inequality which depends on \(k\), as we can write \[E_{2}^{(1)}<f_{A_{1}}E,\quad k\geq 1/2. \tag{4.18}\] Note that this result is also valid for \(k=1/2\), as the relation (4.16) is a strict inequality which is never saturated for positive \(k\). We can use a similar strategy to write an inequality for \(E_{2}^{(2)}\). Rewriting the \(E_{2}^{(2)}\) equation with (4.16) yields \[(2k+1)s^{\prime}\left(\frac{E_{2}^{(2)}}{f_{A_{2}}V}\right)<2ks^{\prime}\left(\frac{E-\frac{f_{A}}{f_{A_{2}}}E_{2}^{(2)}}{(1-f_{A})V}\right). \tag{4.19}\] The resulting inequality has a slightly different \(k\) dependence: \[E_{2}^{(2)}>f_{A_{2}}E,\quad k>0. \tag{4.20}\] The last inequalities we can write are those for the saddle point values of \(S_{A_{1}}\) and \(S_{A_{2}}\): \[S_{A_{1}}(E_{2}^{(1)}) =f_{A_{1}}Vs\left(\frac{E_{2}^{(1)}}{f_{A_{1}}V}\right)<f_{A_{1}}Vs\left(\frac{E}{V}\right)\] \[S_{A_{2}}(E_{2}^{(2)}) =f_{A_{2}}Vs\left(\frac{E_{2}^{(2)}}{f_{A_{2}}V}\right)>f_{A_{2}}Vs\left(\frac{E}{V}\right). \tag{4.21}\] We'd like to find conditions on the hypergeometric being stuck on the first branch, i.e. \(S_{A_{1}}<S_{A_{2}}+S_{B}\). This is guaranteed to happen if the weaker inequality \(S_{A_{1}}<S_{A_{2}}\) is satisfied, which from (4.21) is necessarily true when \[f_{A_{2}}>f_{A_{1}}.
\tag{4.22}\] If we assume \(S_{A_{1}}<S_{A_{2}}\) for the \({\cal E}_{1}\) saddle point as well, the argument of the hypergeometric is exponentially suppressed and we can approximate it by \[{}_{2}F_{1}(1-k,-2k;2;x)\approx 1+k(k-1)x, \tag{4.23}\] where the small parameter \(x\) is now \[x\equiv e^{S_{A_{1}}(E_{1}^{(1)})-S_{A_{2}}(E_{1}^{(2)})-S_{B}(E-E_{1}^{(1)}-E_{1}^{(2)})}. \tag{4.24}\] Under this assumption the saddle point equations for \({\cal E}_{1}\) and \({\cal E}_{2}\) are the same up to exponentially suppressed terms, and therefore the saddle points \({\cal E}_{1}\) and \({\cal E}_{2}\) are exponentially close. This leads to the following form of corrections to ETH: \[\log\overline{{\cal N}_{2k}}-\log{\cal N}_{2k}^{MC}\propto{\cal O}(e^{-cV}),\quad k\geq 1/2,\ f_{A_{2}}>f_{A_{1}}. \tag{4.25}\] We can write a similar inequality for which \(S_{A_{1}}<S_{B}\) is always satisfied. We recall the \(E_{2}^{(1)}\) saddle point equation: \[s^{\prime}\left(\frac{E_{2}^{(1)}}{f_{A_{1}}V}\right)=2ks^{\prime}\left(\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{(1-f_{A})V}\right). \tag{4.26}\] At \(k=1/2\) there's clearly an equality between the arguments of the functions on the right and left, so for \(k\geq 1/2\) we have the inequality \[\frac{E_{2}^{(1)}}{f_{A_{1}}V}\leq\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{(1-f_{A})V},\quad k\geq 1/2. \tag{4.27}\] We'd like to satisfy the inequality \(S_{A_{1}}<S_{B}\), or \[f_{A_{1}}Vs\left(\frac{E_{2}^{(1)}}{f_{A_{1}}V}\right)<(1-f_{A})Vs\left(\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{(1-f_{A})V}\right). \tag{4.28}\] This is always satisfied if \[f_{A_{1}}<1-f_{A}. \tag{4.29}\] So far we have two constraints which carve out a corner of the phase space for all \(k\geq 1/2\). Now let's try to find a condition such that \(S_{A_{1}}>S_{A_{2}}+S_{B}\). Using our previous ansatz this condition is written as \[f_{A_{1}}s\left(\frac{E_{2}^{(1)}}{f_{A_{1}}V}\right)>f_{A_{2}}s\left(\frac{E_{2}^{(2)}}{f_{A_{2}}V}\right)+(1-f_{A})s\left(\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{(1-f_{A})V}\right). \tag{4.30}\] For all \(k>0\) we can use (4.16) to rewrite this as \[(f_{A_{1}}-f_{A_{2}})s\left(\frac{E_{2}^{(2)}}{f_{A_{2}}V}\right)>(1-f_{A})s\left(\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{(1-f_{A})V}\right). \tag{4.31}\] Using the \(E_{2}^{(2)}\) saddle point equation, for \(k>0\) we also have \[\frac{E_{2}^{(2)}}{f_{A_{2}}}>\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{1-f_{A}}. \tag{4.32}\] Therefore, \(S_{A_{1}}>S_{A_{2}}+S_{B}\) is always satisfied if \[f_{A_{1}}-f_{A_{2}}>1-f_{A}\Rightarrow f_{A_{1}}>1/2,\quad k>0. \tag{4.33}\] For these volume fractions the corrections to the Renyi negativity are extensive in the system size, as \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) have no relation: \[\log\overline{\mathcal{N}_{2k}}-\log\mathcal{N}_{2k}^{MC}\propto\mathcal{O}(V),\quad k>0,f_{A_{1}}>1/2. \tag{4.34}\] We summarize the results so far in Figure 4. In that phase diagram, none of the boundaries should be thought of as sharp; that is, since (4.16) is never saturated for \(k>0\), neither are any of the constraints that depend on it. The interpolation between \(\mathcal{O}(e^{-cV})\) corrections and \(\mathcal{O}(V)\) corrections will happen somewhere in this "unknown region", though the only relevant point is that at \(f_{A_{1}}=1/2\) we should still be in a region with extensive corrections.
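The expansion (4.23) used above is just the first two terms of the hypergeometric series, since \((1-k)(-2k)/2=k(k-1)\); a short check with SciPy (our snippet) makes the quality of the approximation explicit for a representative small \(x\).

```python
from scipy.special import hyp2f1

k, x = 3.0, 1e-4                       # x = exp(S_A1 - S_A2 - S_B) is exponentially small
exact = hyp2f1(1 - k, -2 * k, 2, x)
approx = 1.0 + k * (k - 1.0) * x       # eq. (4.23)
print(exact, approx, abs(exact - approx))   # the discrepancy is O(x^2)
```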
In particular this implies the logarithmic negativity receives \(\mathcal{O}(V)\) corrections, as was noted in [19].3 Footnote 3: At \(k=1\), the even Rényi negativity is equal to the second Rényi entropy \(S_{2}(\rho_{A})\), which for \(f_{A_{1}}+f_{A_{2}}>1/2\) is expected to always receive volume law corrections, which we don’t see for all volume fractions. This is an important consequence of the restriction to a particular phase transition; we require the full sum over diagrams to reproduce the partially transposed density matrix exactly. We won't comment on the case \(k<1/2\) for \(f_{A_{1}}<1/2\), though the expectation is that, like the \(n<1\) Renyi entropy, these measures always receive volume law corrections. It's also entirely possible the interpolating line continues moving towards the point \((0,1/2)\), meaning there's some set of volume fractions for which arbitrarily small but positive \(k\) are well-approximated by ETH. ### Odd Renyi Negativity We can repeat the previous analysis for odd \(n\). We have different expressions for the canonical and microcanonical Renyi negativities: \[\log\overline{\mathcal{N}_{2k-1}} =\frac{1}{\mathcal{N}_{2k-1}}\int dE_{A_{1}}dE_{A_{2}}e^{S_{A_{1} }(E_{A_{1}})+S_{A_{2}}(E_{A_{2}})+S_{B}(E-E_{A_{1}}-E_{A_{2}})}G_{k}(f_{A_{1} },f_{A_{2}},E_{A_{1}},E_{A_{2}})\] \[\log\mathcal{N}_{2k-1}^{MC} =\frac{1}{\mathcal{N}_{2k-1}}\int dE_{A_{1}}dE_{A_{2}}e^{S_{A_{1} }(E_{A_{1}})+(2k-1)(S_{A_{2}}(E_{A_{2}})+S_{B}(E-E_{A_{1}}-E_{A_{2}}))}, \tag{4.35}\] where \(G_{k}(f_{A_{1}},f_{A_{2}},E_{A_{1}},E_{A_{2}})\) is now defined by (3.15) as: \[G_{k}(f_{A_{1}},f_{A_{2}},E_{A_{1}},E_{A_{2}})=\begin{cases}e^{(2k-2)(S_{A_{2}}+S_ {B})}{}_{2}F_{1}\left(1-k,1-2k;1;e^{S_{A_{1}}-S_{A_{2}}-S_{B}}\right),S_{A_{1}}< S_{A_{2}}+S_{B}\\ e^{(2k-2)S_{A_{1}}}{}_{2}F_{1}\left(1-2k,1-k;1;e^{S_{A_{2}}+S_{B}-S_{A_{1}}} \right),S_{A_{1}}>S_{A_{2}}+S_{B}.\end{cases} \tag{4.36}\] Again the subsystem entropies should be valued at their respective subsystem energies. Notably \(\log\overline{\mathcal{N}_{2k-1}}\) enjoys a symmetry under \(S_{A_{1}}\leftrightarrow S_{A_{2}}+S_{B}\). We again write the difference between the canonical and microcanonical answers as \[\log\overline{\mathcal{N}_{2k-1}}-\log\mathcal{N}_{2k-1}^{MC}=\log\left(\frac {\int dE_{A_{1}}dE_{A_{2}}\exp(F_{1}(E_{A_{1}},E_{A_{2}}))}{\int dE_{A_{1}}dE _{A_{2}}\exp(F_{2}(E_{A_{1}},E_{A_{2}}))}\right) \tag{4.37}\] Figure 4: Phase diagram for corrections to even Rényi negativities. The concave region with \(\mathcal{O}(e^{-cV})\) corrections comes from requiring \(S_{A_{1}}<S_{A_{2}}\) and/or \(S_{A_{1}}<S_{B}\). The \(\mathcal{O}(V)\) region requires \(S_{A_{1}}>S_{A_{2}}+S_{B}\). The interpolation between these regions will lie somewhere with \(f_{A_{1}}<1/2\) and is outlined by the dashed lines. The yellow curve represents a system specific boundary which will depend on \(k\) and potentially on the specifics of \(s(e)\). 
and use the same ansatz (4.1) to write the saddle point equations for \(F_{1}\) and \(F_{2}\) as \[s^{\prime}\left(\frac{E_{1}^{(1)}}{f_{A_{1}}V}\right) =s^{\prime}\left(\frac{E-E_{1}^{(1)}-E_{1}^{(2)}}{(1-f_{A})V} \right)-\frac{\partial_{E_{A_{1}}}G_{k}(f_{A_{1}},f_{A_{2}},E_{1}^{(1)},E_{1}^ {(2)})}{G_{k}(f_{A_{1}},f_{A_{2}},E_{1}^{(1)},E_{1}^{(2)})}\] \[s^{\prime}\left(\frac{E_{1}^{(2)}}{f_{A_{2}}V}\right) =s^{\prime}\left(\frac{E-E_{1}^{(1)}-E_{1}^{(2)}}{(1-f_{A})V} \right)-\frac{\partial_{E_{A_{2}}}G_{k}(f_{A_{1}},f_{A_{2}},E_{1}^{(1)},E_{1}^ {(2)})}{G_{k}(f_{A_{1}},f_{A_{2}},E_{1}^{(1)},E_{1}^{(2)})}\] \[s^{\prime}\left(\frac{E_{2}^{(1)}}{f_{A_{1}}V}\right) =(2k-1)s^{\prime}\left(\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{(1-f_{A})V }\right)\] \[s^{\prime}\left(\frac{E_{2}^{(2)}}{f_{A_{2}}V}\right) =s^{\prime}\left(\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{(1-f_{A})V} \right). \tag{4.38}\] Let's again investigate the saddle point for \(F_{2}\). We immediately see \[\frac{E_{2}^{(2)}}{f_{A_{2}}}=\frac{E-E_{2}^{(1)}-E_{2}^{(2)}}{1-f_{A}} \tag{4.39}\] for all \(k\)! This is a striking result, as it means we can write the sum of subsystem entropies in \(A_{2}\) and \(B\) as \[S_{A_{2}}(E_{2}^{(2)})+S_{B}(E-E_{2}^{(1)}-E_{2}^{(2)})\equiv S_{\overline{A_ {1}}}(E_{2}^{(2)})=(1-f_{A_{1}})s\left(\frac{E_{2}^{(2)}}{f_{A_{2}}V}\right) \tag{4.40}\] This is important as for the odd Renyi negativity, \(S_{A_{2}}\) and \(S_{B}\) always appear summed, so if we're only interested in the leading saddle point approximation we can treat them as one subsystem entropy \(S_{\overline{A_{1}}}\). As such we can rewrite the single saddle point equation as \[s^{\prime}\left(\frac{E_{2}^{(1)}}{f_{A_{1}}V}\right)=(2k-1)s^{\prime}\left( \frac{E_{2}^{(2)}}{f_{A_{2}}V}\right). \tag{4.41}\] We recognize this as similar to the saddle point equation (2.21) for \(F_{2}(E)\) in the Renyi entropy, but we'll go through the discussion nonetheless. At \(k=1\) we can exactly solve for the subsystem energies and they are, unsurprisingly, proportional to the volume fractions of their respective subsystems: \[E_{2}^{(1)} =f_{A_{1}}E,\quad k=1\] \[E_{2}^{(2)} =f_{A_{2}}E,\quad k=1. \tag{4.42}\] When \(k>1\), we again have \[E_{2}^{(2)}>\frac{f_{A_{2}}}{f_{A_{1}}}E_{2}^{(1)}, \tag{4.43}\] which was true for general \(k\) in the even case. Similar inequalities on volume fraction hold in the odd case; we still have \[E_{2}^{(1)}<f_{A_{1}}E,\quad k>1\] \[E_{2}^{(2)}>f_{A_{2}}E,\quad k>1. \tag{100}\] From this the inequality \(S_{A_{1}}<S_{\overline{A_{1}}}\) is clearly satisfied when \[f_{A_{1}}<1/2, \tag{101}\] and corrections are exponentially suppressed. For \(f_{A_{1}}>1/2\), this won't be true generically and the corrections are extensive. We can also say interesting things about \(k<1\). In this case the inequalities are flipped: \[E_{2}^{(1)}>f_{A_{1}}E,\quad k<1\] \[E_{2}^{(2)}<f_{A_{2}}E,\quad k<1\] \[E_{2}^{(2)}<\frac{f_{A_{2}}}{f_{A_{1}}}E_{2}^{(1)}. \tag{102}\] We can check where \(S_{A_{1}}>S_{\overline{A_{1}}}\). From (102) we have \[S_{A_{1}}(E_{2}^{(1)})=f_{A_{1}}Vs\left(\frac{E_{2}^{(1)}}{f_{A_{ 1}}V}\right)>f_{A_{1}}Vs\left(\frac{E}{V}\right)\] \[S_{\overline{A_{1}}}(E_{2}^{(2)})=(1-f_{A_{1}})Vs\left(\frac{E_{ 2}^{(2)}}{f_{A_{2}}V}\right)<(1-f_{A_{1}})Vs\left(\frac{E}{V}\right). \tag{103}\] We see that \(S_{A_{1}}>S_{\overline{A_{1}}}\) is guaranteed to be satisfied if \(f_{A_{1}}>1/2\), and indeed there is no generic behavior for \(f_{A_{1}}<1/2\). Thus the corrections are extensive for all volume fractions for \(k<1\). 
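The location of this saddle is easy to confirm numerically. The sketch below reuses the illustrative entropy density \(s(e)=2\sqrt{e}\) from before (again our assumption, not the paper's) and checks that at \(k=1\) the microcanonical saddle sits exactly at \((f_{A_{1}}E,f_{A_{2}}E)\), as in (4.42).

```python
import numpy as np
from scipy.optimize import fsolve

def sprime(e):
    return 1.0 / np.sqrt(e)      # s'(e) for the toy density s(e) = 2*sqrt(e)

def odd_saddle(k, f_A1, f_A2, E=1.0, V=1.0):
    """Solve the F_2 saddle equations for the odd Renyi negativity (last two lines of (4.38))."""
    f_A = f_A1 + f_A2
    def residual(u):
        E1, E2 = np.exp(u)
        eB = (E - E1 - E2) / ((1.0 - f_A) * V)
        if eB <= 0.0:
            return [1e3, 1e3]
        return [sprime(E1 / (f_A1 * V)) - (2 * k - 1) * sprime(eB),
                sprime(E2 / (f_A2 * V)) - sprime(eB)]
    return np.exp(fsolve(residual, np.log([f_A1 * E, f_A2 * E])))

E1, E2 = odd_saddle(k=1, f_A1=0.3, f_A2=0.2)
print(np.allclose([E1, E2], [0.3, 0.2]))   # True: the k = 1 saddle is at (f_A1*E, f_A2*E)
```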
### Odd Renyi Negativity at Transition We would like to study this case in analogy with the entanglement entropy, for reasons that will be clear shortly. Let's follow the same procedure of dividing \(F_{1}\) into two pieces, \(F_{\rm dom}\) and \(F_{\Delta}\), defined as \[F_{\rm dom} =S_{A_{1}}(E_{A_{1}})+S_{A_{2}}(E_{A_{2}})+S_{B}(E-E_{A_{1}}-E_{A_{2}})\] \[+(2k-2){\rm max}\{S_{A_{1}}(E_{A_{1}}),S_{A_{2}}(E_{A_{2}})+S_{B}(E-E_{A_{1}}-E_{A_{2}})\}\] \[F_{\Delta} =\log{}_{2}F_{1}\left(1-2k,1-k;1;e^{-|S_{A_{1}}(E_{A_{1}})-S_{A_{2}}(E_{A_{2}})-S_{B}(E-E_{A_{1}}-E_{A_{2}})|}\right). \tag{4.48}\] That is, we take the dominant contribution and relegate the subleading contributions to a term bounded by \(\mathcal{O}(1)\) in volume factors: \[1\leq e^{F_{\Delta}}\leq a_{k},\quad a_{k}\equiv\binom{3k-2}{k-1}=\frac{\Gamma(3k-1)}{\Gamma(k)\Gamma(2k)}=1+(k-1)+\mathcal{O}(k-1)^{2}. \tag{4.49}\] The averaged Renyi negativity, with a \(\frac{1}{2k-2}\) factor which will be important later, can be rewritten as \[\frac{1}{2k-2}\log\overline{\mathcal{N}_{2k-1}}=\frac{1}{2k-2}\log\left(\frac{1}{\mathcal{N}^{2k-1}}\int dE_{A_{1}}dE_{A_{2}}e^{F_{\text{dom}}+F_{\Delta}}\right), \tag{4.50}\] and we can bound \(\log\overline{\mathcal{N}_{2k-1}}\) via \[\log\overline{\mathcal{N}_{2k-1}}-\log\mathcal{N}_{2k-1}^{\text{dom}}\leq \frac{1}{2}+\mathcal{O}(k-1). \tag{4.51}\] As such \(\mathcal{N}_{2k-1}^{\text{dom}}\) is enough to look for corrections larger than \(\mathcal{O}(1)\). Unlike the Renyi entropy, at \(f_{A_{1}}=1/2\) there's no obvious reflection symmetry of the energies in \(F_{\text{dom}}\), and indeed we don't find one numerically. There is, however, a symmetry in the saddle points, which we'll argue for as follows. Call the two saddle points for \(F_{\text{dom}}\) (or \(F_{1}\), it makes no difference here) \(\mathcal{E}_{1}^{(a)}=(E_{1}^{(1,a)},E_{1}^{(2,a)})\) and \(\mathcal{E}_{1}^{(b)}=(E_{1}^{(1,b)},E_{1}^{(2,b)})\). Under the exchange \(S_{A_{1}}\leftrightarrow S_{\overline{A}_{1}}\), the saddles are swapped due to the symmetry of the odd Renyi negativity. It's clear then that at \(f_{A_{1}}=1/2\) there exists the equivalence \[\frac{E_{1}^{(1,a)}}{f_{A_{1}}} =\frac{E_{1}^{(2,b)}}{f_{A_{2}}}\] \[\frac{E_{1}^{(1,b)}}{f_{A_{1}}} =\frac{E_{1}^{(2,a)}}{f_{A_{2}}}. \tag{4.52}\] This means that the two saddle points contribute with equal magnitude, which contributes an \(\mathcal{O}(1)\) factor to the difference between the canonical and microcanonical negativities: \[\frac{1}{2k-2}\left(\log\mathcal{N}_{2k-1}^{\text{dom}}-\log\mathcal{N}_{2k-1}^{MC}\right)=\frac{\log 2}{2-2k}\sim\mathcal{O}(1). \tag{4.53}\] However, as in the case of von Neumann entropy, there is a subtlety related to the fact that the two saddles collide in the limit \(k\to 1\), i.e. the partially transposed entropy. As they collide, there is an emergent region between the saddles which contributes to the integral, so we can't treat the presence of multiple equivalent saddles at leading order; we must integrate over the interpolating region. We show a plot of this phenomenon in Figure 5. Let's solve the \(F_{2}\) saddle point equations perturbatively in \(\delta\equiv 2k-2\).
The \(E_{2}^{(1)}\) saddle point equation (4.41) becomes \[s^{\prime}\left(\frac{E_{2}^{(1)}}{f_{A_{1}}V}\right)=(1+\delta)s^{\prime}\left(\frac{E_{2}^{(2)}}{f_{A_{2}}V}\right)\approx s^{\prime}\left(\frac{E_{2}^{(2)}}{f_{A_{2}}V}+\delta\frac{s^{\prime}\left(E/V\right)}{s^{\prime\prime}\left(E/V\right)}\right), \tag{4.54}\] where we've again used that \(E_{2}^{(2)}=f_{A_{2}}E\). Combining this with the unchanged (4.39) and plugging in \(f_{A_{1}}=1/2\) yields \[E_{2}^{(1)} =\frac{E}{2}+\frac{V\delta}{4}\frac{s^{\prime}(E/V)}{s^{\prime\prime}(E/V)}\] \[E_{2}^{(2)} =f_{A_{2}}E-\frac{f_{A_{2}}V\delta}{2}\frac{s^{\prime}(E/V)}{s^{\prime\prime}(E/V)}. \tag{4.55}\] Figure 5: Plots of \(F_{1}(E_{A_{1}},E_{A_{2}})\) at phase transition. We've set \(E=V=1,f_{A_{1}}=1/2\), and \(f_{A_{2}}=3/10\). For large \(k\) (upper left), the two saddle points are well-separated and can be treated separately. As we decrease \(k\) (upper right) the saddle points approach one another and produce an emergent flat region. At exactly \(k=1\) (bottom) the saddle points coincide at \((f_{A_{1}}E,f_{A_{2}}E)\). The dotted line connecting the saddle points is given by \(E_{A_{2}}=-2f_{A_{2}}(E_{A_{1}}-E)\); all saddles at \(f_{A_{1}}=1/2\) lie along this line. From this we can write our subsystem entropies \(S_{A_{1}}\) and \(S_{\overline{A_{1}}}\) in the familiar form \[S_{A_{1}}(E_{2}^{(1)}) =\frac{1}{2}s\left(E+\frac{V\delta}{2}\frac{s^{\prime}(E/V)}{s^{\prime\prime}(E/V)}\right)\] \[S_{\overline{A_{1}}}(E_{2}^{(2)}) =\frac{1}{2}s\left(E-\frac{V\delta}{2}\frac{s^{\prime}(E/V)}{s^{\prime\prime}(E/V)}\right). \tag{4.56}\] What happens as \(k\to 1\) for the odd Renyi negativity is precisely the same as what happens for the \(n\to 1\) von Neumann entropy, namely that the \(F_{\Delta}\) term "fills in" the space between the two saddles. The only difference is that this flat direction runs between two saddles separated along a line in the \(E_{A_{1}}-E_{A_{2}}\) plane specified by \(f_{A_{2}}\). The rest of the calculation is completely unchanged from that of the von Neumann entropy, and there is an enhanced correction exactly of the same form: \[\overline{S^{T_{2}}}-S_{MC}^{T_{2}}=-\sqrt{\frac{C_{V}}{2\pi}}+\mathcal{O}(\delta)\sim\mathcal{O}(\sqrt{V}). \tag{4.57}\] In [19], it was noted that a naive calculation shows the partially transposed entropy receives \(\mathcal{O}(\sqrt{V})\) corrections, but a more accurate analysis shows it receives \(\mathcal{O}(V)\) corrections. It would be interesting to understand the difference between our calculation and theirs.4 Footnote 4: A possible resolution is that our calculation was done at fixed \(f_{A_{2}}\), roughly the same as fixing \(k_{2}\) in [19]. Only when \(k=k_{1}k_{2}\) was fixed, similar to fixing \(f_{A}\), do they see \(\mathcal{O}(V)\) corrections. ## 5 Discussion and Future Work In this work we've studied a class of tripartite entanglement measures, the Renyi negativities, in a toy model of a chaotic eigenstate. We've resummed the noncrossing permutations, obtained via Wick contractions, that are relevant at the transition of interest and studied the corrections to the dominant microcanonical saddle. The main takeaway is as follows: logarithmic negativity and its Renyi generalizations are not always "good" chaotic observables in the sense that their fluctuations (the difference between the canonical and microcanonical expectation values) are often of the same order as the quantities themselves, implying they are not self-averaging for all volume fractions.
We've shown this is the case for the even Renyi negativity at transition, as well as for both even and odd Renyi negativities for \(f_{A_{1}}>1/2\). In particular we've shown that odd Renyi negativity behaves mostly the same as Renyi entropy at the \(\tau\) to \(X\) transition, exhibiting an \({\cal O}(\sqrt{V})\) enhanced correction at exactly \(k=1\). One surprising outcome is that, for both Renyi negativities, canonical typicality holds in some cases where the partially transposed density matrix is defined on a subsystem \(A_{1}A_{2}\) larger than half of the total system. One interesting question is what bearing these volumetric corrections have on the validity of the cosmic brane prescription. It's expected that the holographic dual of subregion entanglement measures is given by the action of a geometric solution with a massive cosmic brane (or branes) inserted [26; 27]. Away from transition, it's expected that there is a single dominant saddle, or at the very least an \({\cal O}(1)\) number of equivalent saddles, all of which have small enough fluctuations that we can treat the calculation of the brane area perturbatively. What happens if this saddle doesn't exist?5 For the \(n<1\) Renyi entropy, for example, the dual gravitational description is expected to be a cosmic brane with negative tension [27], so the minimal energy configuration would be a brane that falls towards the boundary. This is roughly the "holographic dual" of the \({\cal O}(V)\) corrections to ETH; it represents a failure of a single approximately geometric state to describe the dual system. Footnote 5: We thank Pratik Rath for discussions on this point. We now discuss some extensions to our work. A necessary restriction in our analysis is that we sum over only a subset of all relevant permutations near a particular phase transition. It would be useful to find a closed form expression for the moments of a block transposed Wishart matrix without these assumptions, which would involve finding a closed form solution to the recursion relation in [28]. This would be especially nice as we could probe the region \(f_{A}<1/2\), which is where one could expect ETH to hold as the partially transposed density matrix is defined on less than half of the total system. A technical point in our analysis was the use of 2-Dyck paths and 2-Narayana numbers, as opposed to (1-)Dyck paths which appear in the calculation of entanglement entropy. It's possible some further generalization of Narayana numbers (as in e.g. [29]) will be relevant for calculating transitions in higher-party entanglement measures in a similar model. So far, we've only discussed Renyi negativity, but there exists a family of holographically inspired measures termed refined Renyi negativities, which are given by \[S^{T_{2}(n)}(\rho_{A_{1}A_{2}})=-n^{2}\partial_{n}\left(\frac{1}{n}\log{\cal N}_{n}^{(\rm odd/even)}(\rho_{A_{1}A_{2}})\right). \tag{5.1}\] We have not touched on the structure of transitions in these measures, but they could presumably be treated in the same way we've presented. Of particular interest is the refined Renyi 2-negativity \(S^{T_{2}(2)}\), the \(n\to 2\) limit of the even refined Renyi negativity.
This quantity is explicitly given by \[S^{T_{2}(2)}=-\lim_{m\to 1}m^{2}\partial_{m}\left(\frac{1}{m}\log{\cal N}_{2m}^{(\rm even)}\right)=-\sum_{i}\frac{\lambda_{i}^{2}}{\sum_{j}\lambda_{j}^{2}}\log\left(\frac{\lambda_{i}^{2}}{\sum_{j}\lambda_{j}^{2}}\right), \tag{5.2}\] which is the von Neumann entropy of the normalized density matrix \(\left(\rho_{A_{1}A_{2}}^{T_{2}}\right)^{2}\). Consequently, the expectation is that the corrections will be \({\cal O}(\sqrt{V})\), which is indeed what is seen in the gravitational setting. It would be nice to derive this relation from our formalism. Additionally, this formalism could be applied to study the reflected entropy [30] and its Renyi generalizations [31; 32; 33]. Reflected entropy has been studied in a similar gravitational system [32] and was shown to have \({\cal O}(\sqrt{V})\) corrections at transition, as in the case of the von Neumann entropy, derived via a resolvent calculation. Presumably the relevant permutations could be enumerated and the corrections calculated as we've done in this work. We only considered the case where energy is conserved in all three subsystems. The authors of [34] consider some cases in a similar model where some subsystems are fixed at infinite temperature, which would correspond to freezing the density of states in those subsystems; it would be interesting to understand to what extent this changes our results. ## 6 Acknowledgements We thank Xi Dong, David Grabovsky, Jesse Held, Adolfo Holguin, Jonah Kudler-Flam, Ion Nechita, Pratik Rath, Mark Srednicki, and Wayne Weng for useful discussions. The work of SAM was supported by the Air Force Office of Scientific Research under award number FA9550-19-1-0360, the National Science Foundation under Grant No. PHY-1820908, and funds from the University of California. The work of FI was supported by an NSF Graduate Research Fellowship under Grant No. 2139319 and funds from the University of California.
2309.15328
Exploring Learned Representations of Neural Networks with Principal Component Analysis
Understanding feature representation for deep neural networks (DNNs) remains an open question within the general field of explainable AI. We use principal component analysis (PCA) to study the performance of a k-nearest neighbors classifier (k-NN), nearest class-centers classifier (NCC), and support vector machines on the learned layer-wise representations of a ResNet-18 trained on CIFAR-10. We show that in certain layers, as little as 20% of the intermediate feature-space variance is necessary for high-accuracy classification and that across all layers, the first ~100 PCs completely determine the performance of the k-NN and NCC classifiers. We relate our findings to neural collapse and provide partial evidence for the related phenomenon of intermediate neural collapse. Our preliminary work provides three distinct yet interpretable surrogate models for feature representation with an affine linear model the best performing. We also show that leveraging several surrogate models affords us a clever method to estimate where neural collapse may initially occur within the DNN.
Amit Harlev, Andrew Engel, Panos Stinis, Tony Chiang
2023-09-27T00:18:25Z
http://arxiv.org/abs/2309.15328v1
# Exploring Learned Representations of Neural Networks with Principal Component Analysis ###### Abstract Understanding feature representation for deep neural networks (DNNs) remains an open question within the general field of explainable AI. We use principal component analysis (PCA) to study the performance of a k-nearest neighbors classifier (k-NN), nearest class-centers classifier (NCC), and support vector machines on the learned layer-wise representations of a ResNet-18 trained on CIFAR-10. We show that in certain layers, as little as \(20\%\) of the intermediate feature-space variance is necessary for high-accuracy classification and that across all layers, the first \(\sim\)\(100\) PCs completely determine the performance of the k-NN and NCC classifiers. We relate our findings to neural collapse and provide partial evidence for the related phenomenon of intermediate neural collapse. Our preliminary work provides three distinct yet interpretable surrogate models for feature representation with an affine linear model the best performing. We also show that leveraging several surrogate models affords us a clever method to estimate where neural collapse may initially occur within the DNN. ## 1 Introduction In the past several years, DNNs have become a common tool in many scientific fields and real-world applications. As their use becomes more widespread, it is more important now than ever to better our understanding of these models. One way this can be accomplished is by studying their learned representations. This topic has been explored by many papers in recent years, including methods such as linear probing ([1; 4; 11; 8]), studying the dimensionality of the manifold underlying the activations ([2; 13; 14]), and studying the geometry of the learned representations ([9]). In this paper, we return to a classical tool for data analysis, _principal component analysis_, to help us better understand the learned representations present in DNNs. While several papers have used PCA to study learned representations (e.g. [8; 11]), we are the first to study in depth the performance of multiple surrogate models using varying numbers of PCs across an entire CNN. We train a k-nearest neighbors classifier (k-NN), a nearest class-center classifier (NCC), and a support vector machine (SVM) on each residual block's activations after projecting down to the first \(d\) principal components (PCs) and make qualitative observations based on the results. Studying a pretrained ResNet-18 on the CIFAR10 dataset, we observed that: 1. The SVM matches or outperforms the k-NN and NCC across the network. 2. The best possible performance of k-NN and NCC models on intermediate layer activations is completely determined by the first \(\sim\)\(100\) PCs. In fact, the k-NN model seems to overfit as additional PCs are used. 3. The low-variance PCs of intermediate layers contain meaningful information that improves SVM performance. 4. In the latter half of the network, the PCs necessary for \(90\%\) of the classification accuracy account for only \(20\%\)-\(40\%\) of the variance. ## 2 Related work Probing intermediate layers. The idea behind classifier probes is that we can learn more about the behavior of intermediate layers, and thus neural networks in general, by studying the suitability of the intermediate representations for the desired task.
The term "probe" was introduced by [1], who observed that the measurements of linear probes monotonically and gradually increased on trained networks the deeper they were in the network. [4] observed that k-NN, SVM, and logistic regression probes all match the performance of a DNN in the last layer and that the k-NN predictions are almost identical to those of the DNN. [8] projected each layer's activations down to the first \(d\) (RBF) kernel principal components before training linear classifiers. They studied changes in performance as architecture, hyperparameters, and \(d\) were varied. While [8] studied early CNN architectures, we study behaviors of modern residual networks. [11] introduced SVCCA, a technique combining SVD and canonical correlation analysis, to study the relationships between representations coming from different layers, training epochs, and architectures. They show that "trained networks perform equally well with a number of directions just a fraction of the number of neurons with no additional training, provided they are carefully chosen with SVCCA." Intrinsic dimension (ID) of neural network representations. Another approach to understanding the learned representations of DNNs has been to study their dimensionality across the network. [14] used tangent plane approximations to estimate the dimension of feature maps and observed that they declined quickly across the network. More recently, [2] and [13] estimated IDs several orders of magnitude smaller than those of [14] using non-linear methods designed for curved manifolds. They also observed the layerwise ID profile to have a "hunchback" shape where the ID first increases and then drastically decreases. [2] compared against "PC-ID", the number of PCs required to explain \(90\%\) of the variance in the activations. They observed that (1) layerwise PC-ID profiles were qualitatively the same in trained and untrained networks and (2) the PC-IDs were one to two orders of magnitude greater than IDs estimated with non-linear methods. Using this, they argued that the activations must lie on a highly curved manifold. While this may be the case, we show that PCA can in fact help find interesting structures in learned representations. Additionally, we show that while the underlying manifold may be highly curved, it exists in a low-dimensional subspace that can be found using PCA. Neural collapse. First defined by [9], neural collapse is a phenomenon observed in the last layer activations of deep neural networks characterized by several properties, two of which are: **(NC1)** within-class variability collapses to zero and the activations collapse towards their class means, and **(NC4)** the DNN classifies each activation using the NCC decision rule. Since then, there has been significant interest in investigating this phenomenon, including several papers exploring whether this phenomenon manifests in earlier layers' activations ([12; 5; 3]). Both [5] and [3] study the performance of the NCC classifier across the layers of a neural network and observe an increase in performance the deeper the layer is in the network and the more training epochs used. Figure 1: Diagram showing ResNet-18 architecture with residual blocks labeled. [12] shows that the within-class covariance decreases relative to the between-class covariance as you move deeper into a trained DNN. ## 3 Experiment We used a pre-trained ([10]) ResNet-18 ([6]) with a test accuracy of \(92.5\%\) on the CIFAR-10 dataset ([7]).
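Before detailing the probing procedure, we note for reference that the PC-ID statistic discussed above — the smallest number of PCs explaining \(90\%\) of the variance of a layer's activations — is a one-liner once a PCA has been fit; a small sketch (scikit-learn; the array names and random stand-in data are ours):

```python
import numpy as np
from sklearn.decomposition import PCA

def pc_id(activations, var_threshold=0.90):
    """Smallest number of principal components explaining `var_threshold` of the variance.

    `activations`: (n_samples, n_features) array of one layer's flattened activations.
    """
    pca = PCA().fit(activations)
    cum_var = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cum_var, var_threshold) + 1)

print(pc_id(np.random.randn(1000, 256)))   # toy data standing in for real layer activations
```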
For a given layer, we standardized (mean zero, std one) the activations from the training data and then used PCA to project onto the first \(d\) PCs. We trained a 10-nearest neighbors model, nearest class-center model, and soft-margin support vector machine on the resulting data and then used them to classify the test data after applying the same standardization and projection learned from the training data. This was done for each \(d=1-20\), \(30\), \(40\), \(50\), \(100\), \(150\), \(200\), \(250\), \(300\), \(400\), \(500\), \(750\), \(1000\), \(1250\), \(1500\), \(1750\), \(2000\) and subsequently at intervals of \(1000\) until reaching the size of the layer. Figure 2 shows the accuracy by number of PCs for each model. For each model and layer, we also found the minimum number of PCs needed to attain at least \(90\%\) of the best accuracy attained at that layer and by that model, as well as the variance explained by those PCs. For example, if model X's highest attained accuracy on layer Y was \(96\%\), we found the minimum number of PCs for which model X attained \(96\%*0.9=86.4\%\) accuracy. This is shown in Figure 3. We considered the activations output by the initial max pooling layer and each of the eight residual blocks present in a ResNet-18-- see Figure 1. ## 4 Results Looking at Figure 2, we see that up until block 4, each of our three models exhibits different behaviors as we increase the number of PCs, and that from block 5 onwards, all three models exhibit qualitatively identical behavior. Up until block 4, the k-NN model's (Figure 2(a)) accuracy increases up to \(\sim\)\(100\) PCs before decreasing significantly, a sign that it may be overfitting. On the other hand, the NCC model (Figure 2(b)) achieves maximum accuracy at around the same point, but then remains unchanged as more PCs are used. The SVM (Figure 2(c)) performs similarly to the k-NN for the first \(\sim\)\(100\) PCs, but continues to improve in accuracy as the number of PCs increases. It also achieves the best performance with the original activations (i.e. before projection) across all layers. All three models see steady increases in accuracy as we move deeper into the network. From block 5 onwards, all three models see a sharp, almost identical spike up to the true accuracy of the DNN between one and ten PCs, followed by no change in accuracy beyond that. In Figure 3(a) we see a "hunchback" profile for the NCC model (and to a lesser degree, the k-NN model) that matches the "hunchback" ID profile that [2] observed using a non-linear dimensionality estimator. On the other hand, the SVM, the only affine-linear method we studied, exhibits a completely different profile starting very high and then monotonically decreasing. We observe that, just as in Figure 2, all three models exhibit identical profiles for blocks 5 through 8 and that, excluding block 5, they require _only \(2\)-\(3\) PCs to attain \(90\%\) of the accuracy of the DNN_. Figure 2: Performance of 10-NN (a), NCC (b), and SVM (c) after projecting activations from each residual block onto first \(d\) principal components. Figure 3(b) shows us that in the latter half of the network, only \(20\%\) to \(40\%\) of the variance is needed for accurate classification, and that this holds true across the entire network for the non-linear models.
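For concreteness, the layer-wise probing procedure of Section 3 — standardize with training statistics, project onto the first \(d\) PCs, then fit the three surrogate classifiers — can be sketched in a few lines of scikit-learn. The snippet below is our illustration only: the toy arrays stand in for one residual block's flattened activations, and we use a linear soft-margin SVM since the text does not pin down the SVM kernel.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.svm import LinearSVC

def probe_layer(train_acts, y_train, test_acts, y_test, d):
    """Test accuracy of 10-NN, NCC, and a linear soft-margin SVM on the first d PCs of one layer."""
    scaler = StandardScaler().fit(train_acts)                    # mean zero, std one (train stats only)
    pca = PCA(n_components=d).fit(scaler.transform(train_acts))
    Xtr = pca.transform(scaler.transform(train_acts))
    Xte = pca.transform(scaler.transform(test_acts))
    models = {"10-NN": KNeighborsClassifier(n_neighbors=10),
              "NCC": NearestCentroid(),
              "SVM": LinearSVC(C=1.0)}
    return {name: m.fit(Xtr, y_train).score(Xte, y_test) for name, m in models.items()}

# Toy stand-ins; real inputs would be the ResNet-18 activations and CIFAR-10 labels.
Xtr, Xte = np.random.randn(2000, 512), np.random.randn(500, 512)
ytr, yte = np.random.randint(0, 10, 2000), np.random.randint(0, 10, 500)
print(probe_layer(Xtr, ytr, Xte, yte, d=100))
```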
## 5 Discussion and conclusion While the performance of the k-NN and NCC models is determined by the first \(\sim\)\(100\) PCs, the SVM's performance increases with the number of PCs up to using the whole space. When considered along with the observations of intermediate neural collapse of [12], this could perhaps point to there being a "partially collapsed" subspace in each layer that determines the behavior of the k-NN and NCC models, while the SVM also accounts for information helpful to classification in the low variance subspaces. In particular, this means that the low-variance subspaces contain meaningful information and not just noise. Additionally, it is interesting to note that the SVM, an affine-linear model, is the most robust and best performing across all learned representations of the DNN. While all three models contribute to our intuitive understanding of how the representation is changing across the network, the SVM's accuracy suggests that applications using learned representations might benefit most from simpler models. The behavior in blocks 5-8 can also be explained by neural collapse. That is, the network reaches a "fully collapsed state" at block 5 in which all activations are approximately equal to their class means, so all three classifiers perform equally well on very few PCs. Note that had we only trained one surrogate model, it would not be clear between which layers the network was "fully collapsing". However, with three models, Figure 2 and Figure 3 clearly show that this collapse occurs between the fourth and fifth residual blocks. Identifying this "collapsing" layer could be a useful tool for understanding mis-classified training data, as most of the information used by the DNN for classification is only present prior to that layer. The notion of intermediate neural collapse is further supported by the fact that the number of PCs needed for good classification with the SVM decreases monotonically across the network and that the variance necessary for accurate classification (by all models) decreases until block 5, which is where we see "full collapse". Since k-NN, NCC, and PCA are all very well understood, the fact that these non-linear models display the same profile in Figure 3(a) as observed by [2] provides us a more interpretable way to think about this "hunchbacked" behavior. Additionally, since the non-linear methods required only \(\sim\)\(100\) PCs or fewer throughout the network, this implies that the curved manifold underlying the activations most likely lives within a relatively low-dimensional subspace, which can be found using PCA. Lastly, while it is common to select the number of PCs to keep using metrics such as accounting for \(90\%\) of variance--as seen in [2] and [11]--Figure 3(b) shows that this may not be the best approach for analyzing learned representations, as the majority of the variance is not necessary for classification. In this paper, we study learned representations of a ResNet-18 using PCA and observe multiple interesting behaviors. We hope that our work provides new intuition and inspires more experiments into the behavior and structure of learned representations, as well as demonstrates that there may still be more for us to learn about these complex models using simple techniques. Figure 3: For each model: number of PCs (a) and the percentage of variance explained by those PCs (b) needed to attain \(90\%\) of maximum classification accuracy at each residual block. ## 6 Acknowledgements AH, AE, and TC were partially supported by the Mathematics for Artificial Reasoning in Science (MARS) initiative via the Laboratory Directed Research and Development (LDRD) Program at PNNL. PS was partially supported by the U.S.
Department of Energy, Advanced Scientific Computing Research program, under the Scalable, Efficient and Accelerated Causal Reasoning Operators, Graphs and Spikes for Earth and Embedded Systems (SEA-CROGS) project (Project No. 80278). PNNL is a multi-program national laboratory operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute under Contract No. DE-AC05-76RL0-1830
2301.01219
Task-Guided IRL in POMDPs that Scales
In inverse reinforcement learning (IRL), a learning agent infers a reward function encoding the underlying task using demonstrations from experts. However, many existing IRL techniques make the often unrealistic assumption that the agent has access to full information about the environment. We remove this assumption by developing an algorithm for IRL in partially observable Markov decision processes (POMDPs). We address two limitations of existing IRL techniques. First, they require an excessive amount of data due to the information asymmetry between the expert and the learner. Second, most of these IRL techniques require solving the computationally intractable forward problem -- computing an optimal policy given a reward function -- in POMDPs. The developed algorithm reduces the information asymmetry while increasing the data efficiency by incorporating task specifications expressed in temporal logic into IRL. Such specifications may be interpreted as side information available to the learner a priori in addition to the demonstrations. Further, the algorithm avoids a common source of algorithmic complexity by building on causal entropy as the measure of the likelihood of the demonstrations as opposed to entropy. Nevertheless, the resulting problem is nonconvex due to the so-called forward problem. We solve the intrinsic nonconvexity of the forward problem in a scalable manner through a sequential linear programming scheme that guarantees to converge to a locally optimal policy. In a series of examples, including experiments in a high-fidelity Unity simulator, we demonstrate that even with a limited amount of data and POMDPs with tens of thousands of states, our algorithm learns reward functions and policies that satisfy the task while inducing similar behavior to the expert by leveraging the provided side information.
Franck Djeumou, Christian Ellis, Murat Cubuktepe, Craig Lennon, Ufuk Topcu
2022-12-30T21:08:57Z
http://arxiv.org/abs/2301.01219v1
# Task-Guided IRL in POMDPs that Scales ###### Abstract In inverse reinforcement learning (IRL), a learning agent infers a reward function encoding the underlying task using demonstrations from experts. However, many existing IRL techniques make the often unrealistic assumption that the agent has access to full information about the environment. We remove this assumption by developing an algorithm for IRL in partially observable Markov decision processes (POMDPs). We address two limitations of existing IRL techniques. First, they require an excessive amount of data due to the information asymmetry between the expert and the learner. Second, most of these IRL techniques require solving the computationally intractable _forward problem_--computing an optimal policy given a reward function--in POMDPs. The developed algorithm reduces the information asymmetry while increasing the data efficiency by incorporating task specifications expressed in temporal logic into IRL. Such specifications may be interpreted as side information available to the learner a priori in addition to the demonstrations. Further, the algorithm avoids a common source of algorithmic complexity by building on causal entropy as the measure of the likelihood of the demonstrations as opposed to entropy. Nevertheless, the resulting problem is nonconvex due to the so-called _forward problem_. We solve the intrinsic nonconvexity of the forward problem in a scalable manner through a sequential linear programming scheme that guarantees to converge to a locally optimal policy. In a series of examples, including experiments in a high-fidelity Unity simulator, we demonstrate that even with a limited amount of data and POMDPs with tens of thousands of states, our algorithm learns reward functions and policies that satisfy the task while inducing similar behavior to the expert by leveraging the provided side information. + Footnote †: journal: Elsevier ## 1 Introduction A robot can satisfy certain human-specified tasks by describing desired behavior through a reward function. However, the design of such a reward function is a non-trivial task. Inverse reinforcement learning (IRL) is an established technique that infers a reward function encoding the underlying task using expert demonstrations. IRL techniques have found a wide range of applications in various domains such as acrobatic helicopter flight [1], inferring future actions of people [2], human-autonomy interaction [3; 4], robotic surgery [5; 6], and robotic manipulation tasks [7]. Most existing work [1; 8; 9; 10; 3; 7] has focused on Markov decision processes (MDPs), assuming that the learner can fully observe the state of the environment and expert demonstrations. However, the learner will not have access to full state observations in many applications. For example, a robot will never know everything about its environment [11; 12; 13] and may not observe the internal states of a human with whom it works [14; 15]. Such information limitations violate the intrinsic assumptions made in most existing IRL techniques. We investigate IRL in partially observable Markov decision processes (POMDPs), a widely used model for decision-making under imperfect information. Partial observability brings two key challenges in IRL. The first challenge is due to the so-called _information asymmetry_ between the expert and the learner. 
The expert typically has either full or partial information about the environment, while the learner has only a partial view of the state and the expert's demonstrations. Even in the hypothetical case in which the underlying reward function is known to the learner, its optimal policy under limited information may not yield the same behavior as an expert with full information due to such information asymmetry. The second challenge is due to the computational complexity of policy synthesis in POMDPs. Indeed, many standard IRL techniques rely on a subroutine that solves the so-called _forward problem_, i.e., computing an optimal policy for a given reward. Solving the forward problem for POMDPs is significantly more challenging than MDPs, both theoretically and practically [16]. Optimal policies for POMDPs may require infinite memory of observations [17], whereas memoryless policies are enough for MDPs. An additional limitation in existing IRL techniques is due to the limited expressivity and often impracticability of state-based reward functions in representing complex tasks [18]. For example, it will be tremendously difficult to define a merely state-based reward function to describe requirements such as "do not steer off the road while reaching the target location and coming back to home" or "monitor multiple locations with a certain order". However, such requirements can be concisely and precisely specified in temporal logic [19; 20]. Therefore, recent work has demonstrated the utility of incorporating temporal logic specifications into IRL in MDPs [21; 22]. In this work, we address these challenges and limitations in state-of-the-art IRL techniques by investigating the following problem. **Task-Guided IRL in POMDPs:** Given a POMDP, a set of expert demonstrations, and, if available, a _task specification_ expressed in temporal logic, learn a policy along with the underlying reward function that maximizes the _causal entropy_ of the induced stochastic process, induces a behavior similar to the expert's, and ensures the satisfaction of the specification. We highlight two parts of the problem statement. Using _causal entropy_ as an optimization criterion instead of traditional entropy results in a least-committal policy that induces a behavior obtaining the same accumulated reward as the expert's demonstrations while making no additional assumptions about the demonstrations. _Task specifi cations_ given as task requirements guide the learning process by describing the feasible behaviors and allow the learner to learn performant policies with respect to the task requirements. Such specifications can be interpreted as side information available to the learner a priori in addition to the demonstrations aimed at partially alleviating the information asymmetry between the expert and the learner. Specifically, we tackle the IRL on POMDPs problem by a reformulation into a maximum causal entropy (MCE) problem. Then, we develop a new solver for the MCE problem that improves computational tractability over existing approaches. The developed solver can enforce prior task knowledge expressed as temporal logic specifications, which guides the learning, improves the data efficiency, and partially alleviates the information asymmetry problem. Most existing work on IRL relies on _entropy_ as a measure of the likelihood of the demonstrations, yet, when applied to stochastic MDPs, has to deal with nonconvex optimization problems [8, 10]. 
On the other hand, IRL techniques that adopt _causal entropy_ as the measure of likelihood enjoy formulations based on convex optimization [9, 10, 23]. We show similar algorithmic benefits in maximum-causal-entropy IRL carry over from MDPs to POMDPs. A major difference between MDPs and POMDPs in maximum-causal-entropy IRL is, though, due to the intrinsic nonconvexity of policy synthesis in POMDPs, which yields a formulation of the task-guided IRL problem as a nonconvex optimization problem. It is known that this nonconvex severely limits the scalability for synthesis in POMDPs [16]. We develop an iterative algorithm that solves the resulting nonconvex problem in a scalable manner by adapting sequential convex programming (SCP) [24, 25]. In each iteration, it linearizes the underlying nonconvex problem around the solution from the previous iteration. The algorithm introduces several extensions to alleviate the errors resulting from the linearization. One of these extensions is a verification step not present in existing SCP schemes. We show that the proposed algorithm computes a sound and locally optimal solution to the task-guided problem. Additionally, we empirically demonstrate that the algorithm scales to POMDPs with tens of thousands of states as opposed to tens of states in most existing work. In POMDPs, _finite-memory_ policies that are functions of the history of the observations outperform memoryless policies [26]. Besides, computing a finite-memory policy for a POMDP is equivalent to computing a memoryless policy on a larger product POMDP [27]. Thus, we leverage the scalability of our algorithm to compute more performant policies that incorporate memory using finite-state controllers [28, 29]. On the other hand, the existing IRL techniques on POMDPs aforementioned cannot effectively utilize memory, as they do not scale to large POMDPs. We demonstrate the applicability of the approach through several examples, including a simulated wheeled ground robot operating in a high-fidelity, continuous, 3-D Unity simulation. We show that, without task specifications, the developed algorithm can compute more performant policies than a straight adaptation of the original GAIL [30] to POMDPs. Then, we demonstrate that by incorporating task specifications into the IRL procedure, the learned reward function and policy accurately describe the behavior of the expert while outperforming the policy obtained without the task specifications. We observe that with more limited data, the performance gap becomes more prominent between the learned policies with and without using task specifica tions. Most importantly, we empirically demonstrate the scalability of our approach for solving the _forward problem_ through extensive comparisons with several state-of-the-art POMDP solvers and show that on larger POMDPs, the algorithm can compute more performant policies in significantly less time. ## 2 Preliminaries The following section provides a review of prerequisite understanding for POMDPs, their accompanying policies and how a POMDP's belief over states is updated using Bayesian techniques. 
_Notation_. We denote the set of nonnegative real numbers by \(\mathbb{R}_{+}\), the set of all probability distributions over a finite or countably infinite set \(\mathcal{X}\) by \(\mathrm{Distr}(\mathcal{X})\), the set of all (infinite or empty) sequences \(x_{0},x_{1},\ldots,x_{\infty}\) with \(x_{i}\in\mathcal{X}\) by \((\mathcal{X})^{*}\) for some set \(\mathcal{X}\), and the expectation of a function \(g\) of jointly distributed random variables \(X\) and \(Y\) by \(\mathbb{E}_{X,Y}[g(X,Y)]\). ### 2.1 Partially Observable Markov Decision Process A partially observable Markov decision process (POMDP) is a framework for modeling sequential interaction between an agent and a partially observable environment, where the agent cannot perceive its underlying state but must infer it based on the given noisy observation. _POMDPs_. We define a POMDP by a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{Z},\mathcal{O},\mathcal{R},\mu_{0},\gamma)\), where \(\mathcal{S}\), \(\mathcal{A}\), and \(\mathcal{Z}\) are finite sets of states, actions, and observations, respectively. The function \(\mu_{0}:\mathcal{S}\mapsto\mathbb{R}_{+}\) provides the initial distribution of the agent's state and \(\gamma\in[0,1)\) is a discount factor over a possibly infinite planning horizon. At each decision time, an agent selects an action \(\alpha\in\mathcal{A}\) and the transition function \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\mapsto\mathrm{Distr}(\mathcal{S})\) defines the probability \(\mathcal{P}(s^{\prime}|s,\alpha)\) of reaching state \(s^{\prime}\in\mathcal{S}\) given the current state \(s\in\mathcal{S}\) and action \(\alpha\). After the state transition, the agent receives an observation \(z^{\prime}\in\mathcal{Z}\) according to the function \(\mathcal{O}:\mathcal{S}\mapsto\mathrm{Distr}(\mathcal{Z})\), which defines the probability \(\mathcal{O}(z^{\prime}|s^{\prime})\) of perceiving \(z^{\prime}\) at state \(s^{\prime}\). The agent also receives a reward \(\mathcal{R}(s,\alpha)\) given by the function \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}\) encoding the task specification. In the following, without loss of generality, we consider infinite-horizon problems. _Policies_. An observation-based policy \(\sigma:(\mathcal{Z}\times\mathcal{A})^{*}\times\mathcal{Z}\mapsto\mathrm{Distr}(\mathcal{A})\) for a POMDP \(\mathcal{M}\) maps a sequence of observations and actions to a distribution over actions. An \(\mathrm{M}\)_-finite-state controller_ (\(\mathrm{M}\)-FSC) is a tuple \(\mathcal{C}=(\mathcal{Q},q_{I},\eta,\delta)\), where \(\mathcal{Q}=\{q_{1},q_{2},\ldots,q_{M}\}\) is a finite set of memory states, \(q_{I}\) is the initial memory state, \(\eta:\mathcal{Q}\times\mathcal{Z}\mapsto\mathrm{Distr}(\mathcal{A})\) is a decision function, and \(\delta:\mathcal{Q}\times\mathcal{Z}\times\mathcal{A}\mapsto\mathrm{Distr}(\mathcal{Q})\) is a memory transition function. The _action mapping_ \(\eta(n,z)\) takes an FSC memory state \(n\) and an observation \(z\in\mathcal{Z}\), and returns a distribution over the POMDP actions. The _memory update_ \(\delta(n,z,\alpha)\) returns a distribution over memory states and is a function of the action \(\alpha\) selected by \(\eta\). An FSC induces an observation-based policy by following a joint execution of these two functions upon a trace of the POMDP. An FSC is _memoryless_ if there is a single memory state.
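To make the joint execution of the action mapping \(\eta\) and the memory update \(\delta\) concrete, the sketch below (ours; the array-based encoding and all names are illustrative, not from the paper) rolls out an M-FSC on a small finite POMDP. For simplicity it observes the current state before acting; the text's convention of emitting the observation after the transition changes only the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_fsc(P, O, mu0, eta, delta, q0=0, T=20):
    """Sample one observation-action trace of a POMDP under an M-FSC.

    P[a]  : (S, S) transition matrix for action a;   O     : (S, Z) observation matrix O(z | s)
    mu0   : (S,) initial state distribution;         eta   : (Q, Z, A) action probabilities
    delta : (Q, Z, A, Q) memory-update probabilities
    """
    s, q, trace = rng.choice(len(mu0), p=mu0), q0, []
    for _ in range(T):
        z = rng.choice(O.shape[1], p=O[s])                # observe
        a = rng.choice(eta.shape[2], p=eta[q, z])         # act via the decision function eta
        trace.append((z, a))
        q = rng.choice(delta.shape[3], p=delta[q, z, a])  # update the FSC memory via delta
        s = rng.choice(P[a].shape[1], p=P[a][s])          # environment transition
    return trace

# Tiny example: 2 states, 2 observations, 2 actions, and a memoryless controller (one memory state).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],                   # action 0
              [[0.5, 0.5], [0.5, 0.5]]])                  # action 1
O = np.array([[0.8, 0.2], [0.3, 0.7]])
eta = np.full((1, 2, 2), 0.5)
delta = np.ones((1, 2, 2, 1))
print(rollout_fsc(P, O, np.array([1.0, 0.0]), eta, delta)[:5])
```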
_Memoryless FSCs, denoted by \(\sigma\colon\mathcal{Z}\to\mathrm{Distr}(\mathcal{A})\), are observation-based policies, where \(\sigma(\alpha|z)=\sigma_{z,\alpha}\) is the probability of taking the action \(\alpha\) given solely observation \(z\)._ **Remark 1** (Reduction to Memoryless Policies).: _In the remainder of the paper, for ease of notation, we synthesize optimal \(\mathrm{M}\)-FSCs for POMDPs (so-called forward problem) by computing memoryless policies \(\sigma\) on theoretically-justified larger POMDPs obtained from the so-called product of the memory update \(\delta\) and the original POMDPs. Indeed, the authors of [27] provide product POMDPs, whose sizes grow polynomially only with the size of the domain of \(\delta\)._ _Belief Update_. Given a history on the POMDP \(\mathcal{M}\) as the perceived observation and executed action sequence \(\tau=\{(z_{0},\alpha_{0}),(z_{1},\alpha_{1}),\ldots,(z_{T},\alpha_{T})\}\), where \(z_{i}\in\mathcal{Z}\), \(\alpha_{i}\in\mathcal{A}\), \(i\in\{0,\ldots,T\}\) and \(T\) is the length of the trajectory, the belief state specifies the probability of being in each state of the POMDP given an initial belief \(b_{0}=\mu_{0}\). Such a belief state can be updated at each time step using the following Bayes rule \[b_{t+1}(s^{\prime})=\frac{\mathcal{O}(z_{t}|s^{\prime})\sum_{s\in\mathcal{S}} \mathcal{P}(s^{\prime}|s,\alpha_{t})b_{t}(s)}{\sum_{s^{\prime\prime}\in \mathcal{S}}\mathcal{O}(z_{t}|s^{\prime\prime})\sum_{s\in\mathcal{S}}\mathcal{ P}(s^{\prime\prime}|s,\alpha_{t})b_{t}(s)}. \tag{1}\] ### Causal Entropy in POMDPs. For a POMDP \(\mathcal{M}\), a policy \(\sigma\) induces the stochastic processes \(S^{\sigma}_{0:\infty}:=(S^{\sigma}_{0},\ldots,S^{\sigma}_{\infty})\), \(A^{\sigma}_{0:\infty}:=(A^{\sigma}_{0},\ldots,A^{\sigma}_{\infty})\), and \(Z^{\sigma}_{0:\infty}:=(Z^{\sigma}_{0},\ldots,Z^{\sigma}_{\infty})\). At each time index \(t\), the random variables \(S^{\sigma}_{t}\), \(A^{\sigma}_{t}\), and \(Z^{\sigma}_{t}\) take values \(s_{t}\in\mathcal{S}\), \(\alpha_{t}\in\mathcal{A}\), and \(z_{t}\in\mathcal{Z}\), respectively. The probability \(P(A_{0:T}||S_{0:T})\) of \(A_{0:T}\)_causally-conditioned_ on \(S_{0:T}\), given by [10; 31; 32]\(P(A_{0:T}||S_{0:T}):=\prod_{t=0}^{T}P(A_{t}|S_{0:t},A_{0:t-1})\), defines a correlation between the stochastic processes, where each variable \(A_{t}\) is conditionally influenced by only the earlier predicted variables \(S_{0:t},A_{0:t-1}\), and not the future variables \(S_{t+1:T}\). Let \(H(A|S)\triangleq\mathbb{E}_{A,S}[-\log P(A|S)]\) be the _conditional entropy_ of a random variable \(A\) given a random variable \(S\). In the finite-horizon setting, the causal entropy \(H_{\sigma}\) induced by a given policy \(\sigma\) is defined as \(H_{\sigma}:=\mathbb{E}_{A^{\sigma}_{0:T},S^{\sigma}_{0:T}}[-\log\mathbb{P}(A ^{\sigma}_{0:T}||S^{\sigma}_{0:T})]=\sum_{t=0}^{T}H(A^{\sigma}_{t}|S^{\sigma}_ {0:t},A^{\sigma}_{0:t-1})\). Then, the _causal entropy_ in the infinite-horizon setting, namely the _discounted causal entropy_[9; 33], is defined as \[H^{\gamma}_{\sigma}:=\sum\nolimits_{t=0}^{\infty}\gamma^{t}H(A^{\sigma}_{t}|S^ {\sigma}_{0:t},A^{\sigma}_{0:t-1})=\sum\nolimits_{t=0}^{\infty}\gamma^{t} \mathbb{E}_{A^{\sigma}_{t},S^{\sigma}_{t}}[-\log\mathbb{P}(A^{\sigma}_{t}|S^{ \sigma}_{t})], \tag{2}\] where the second equality is due to the Markov property. 
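As a concrete illustration of the Bayesian belief update in (1), the following minimal numpy sketch performs one update step; the array shapes, argument names, and function name are our own conventions and are not taken from the paper or its accompanying implementation.

```python
import numpy as np

def belief_update(b, a, z, P, O):
    """One step of the Bayesian belief update in (1).

    Assumed shapes (our own convention):
      b : (S,)      current belief over states
      a : int       index of the executed action
      z : int       index of the perceived observation
      P : (S, A, S) transition probabilities, P[s, a, s'] = P(s' | s, a)
      O : (S, Z)    observation probabilities, O[s, z] = O(z | s)
    """
    pred = b @ P[:, a, :]            # predicted next-state distribution: sum_s P(s'|s, a) b(s)
    unnormalized = O[:, z] * pred    # weight by the observation likelihood O(z|s')
    return unnormalized / unnormalized.sum()
```

Iterating this update along an observation-action sequence \(\tau\) yields the belief trajectory \(b^{\tau}\) used in the problem formulation below.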
**Remark 2**.: _The entropy of POMDPs (or MDPs) involves the future policy decisions [8], i.e., \(S^{\sigma}_{t+1:T}\), at a time index \(t\), as opposed to the causal entropy in POMDPs (or MDPs). Thus, the authors of [8] show that the problem of computing a policy that maximizes the entropy is nonconvex, even in MDPs. Inverse reinforcement learning techniques that maximize the entropy of the policy rely on approximations or assume that the transition function of the MDP is deterministic. On the other hand, computing a policy that maximizes the causal entropy can be formulated as a convex optimization problem in MDPs [10; 9]._

### LTL Specifications.

We use general linear temporal logic (LTL) to express complex task specifications on the POMDP \(\mathcal{M}\). Given a set \(\mathrm{AP}\) of atomic propositions, i.e., Boolean variables with truth values for a given state \(s\) or observation \(z\), LTL formulae are constructed inductively as follows: \[\varphi:=\mathrm{true}\mid a\mid\neg\varphi\mid\varphi_{1}\wedge\varphi_{2} \mid\mathbf{X}\varphi\mid\varphi_{1}\mathbf{U}\varphi_{2},\] where \(a\in\mathrm{AP}\), \(\varphi\), \(\varphi_{1}\), and \(\varphi_{2}\) are LTL formulae, \(\neg\) and \(\wedge\) are the logic negation and conjunction, and \(\mathbf{X}\) and \(\mathbf{U}\) are the _next_ and _until_ temporal operators. Besides, temporal operators such as _always_ (**G**) and _eventually_ (**F**) are derived as \(\mathbf{F}\varphi:=\mathrm{true}\,\mathbf{U}\varphi\) and \(\mathbf{G}\varphi:=\neg\mathbf{F}\neg\varphi\). We denote by \(\mathrm{Pr}^{\sigma}_{\mathcal{M}}(\varphi)\) _the probability of satisfying the LTL formula \(\varphi\) when following the policy \(\sigma\) on the POMDP \(\mathcal{M}\)_. A detailed description of the syntax and semantics of LTL is beyond the scope of this paper and can be found in [20; 19].

## 3 Problem Formulation

In this section, we formulate the problem of task-guided inverse reinforcement learning (IRL) in POMDPs. Given a POMDP \(\mathcal{M}\) with an _unknown_ reward function \(\mathcal{R}\), we seek to learn a reward function \(\mathcal{R}\) along with an underlying policy \(\sigma\) that induces a behavior similar to the expert demonstrations. We define an expert trajectory on the POMDP \(\mathcal{M}\) as the perceived observation and executed action sequence \(\tau=\{(z_{0},\alpha_{0}),(z_{1},\alpha_{1}),\ldots,(z_{T},\alpha_{T})\}\), where \(z_{i}\in\mathcal{Z}\) and \(\alpha_{i}\in\mathcal{A}\) for all \(i\in\{0,\ldots,T\}\), and \(T\) denotes the length of the trajectory. Similarly to [34], we assume that we are given, or that we can construct from \(\tau\) (via Bayesian belief updates (1)), the belief trajectory \(b^{\tau}=\{b_{0}:=\mu_{0},\ldots,b_{T}\}\), where \(b_{i}(s)\) is the estimated probability of being at state \(s\) at time index \(i\). In the following, we assume that we are given a set of belief trajectories \(\mathcal{D}=\{b^{\tau_{1}},\ldots,b^{\tau_{N}}\}\) from trajectories \(\tau_{1},\ldots,\tau_{N}\), where \(N\) denotes the total number of underlying trajectories. We parameterize the unknown reward function \(\mathcal{R}\) by a differentiable function (with respect to the parameter) \(\mathcal{R}^{\theta}:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}\), where \(\theta\in\mathbb{R}^{F}\) is a parameter that uniquely defines the reward function.
Such an encoding includes traditional representations of the reward such as \(\mathcal{R}^{\theta}(s,\alpha)=g_{\theta}(\phi(s,\alpha))\), where \(\phi:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}^{d}\) is a known vector of basis functions with components referred to as _feature functions_, \(d\) is the number of features, and \(g_{\theta}\) can be any function approximator such as a neural network. For example, in the traditional linear encoding, we have \(g_{\theta}(z)=\theta^{\mathrm{T}}z\). Specifically, we seek a parameter \(\theta\) defining \(\mathcal{R}^{\theta}\) and a policy \(\sigma\) such that its discounted return expectation \(R^{\theta}_{\sigma}\) matches an empirical discounted return expectation \(\bar{R}^{\theta}\) of the expert demonstrations \(\mathcal{D}\). That is, we have that \(R^{\theta}_{\sigma}=\bar{R}^{\theta}\), where \[R^{\theta}_{\sigma}:=\sum_{t=0}^{\infty}\gamma^{t}\mathbb{E}_{S^{\sigma}_{t},A ^{\sigma}_{t}}[\mathcal{R}^{\theta}(S^{\sigma}_{t},A^{\sigma}_{t})|\sigma]\text { and }\bar{R}^{\theta}=\frac{1}{N}\sum_{b^{\tau}\in\mathcal{D}}\sum_{b_{i}\in b^{ \tau}}\gamma^{i}\sum_{s\in\mathcal{S}}b_{i}(s)\mathcal{R}^{\theta}(s,\alpha_ {i}).\] In the case of a linear encoding of the reward, the above condition is called feature expectation matching, and it can be simplified by replacing \(\mathcal{R}^{\theta}\) with the feature function \(\phi\). Nevertheless, the problem is ill-posed and there may be infinitely many reward functions and policies that satisfy the above matching condition. To resolve the ambiguities, we seek a policy \(\sigma\) that also maximizes the discounted causal entropy \(H^{\gamma}_{\sigma}\). We now define the problem of interest.

**Problem 1**.: _Given a reward-free POMDP \(\mathcal{M}\), a demonstration set \(\mathcal{D}\), and a feature \(\phi\), compute a policy \(\sigma\) and weight \(\theta\) such that (a) The matching condition holds; (b) The causal entropy \(H^{\gamma}_{\sigma}\) given by (2) is maximized by \(\sigma\)._

Furthermore, we seek to incorporate, if available, a priori high-level side information on the task demonstrated by the expert in the design of the reward and policy.

**Problem 2**.: _Given a linear temporal logic formula \(\varphi\), compute a policy \(\sigma\) and weight \(\theta\) such that the constraints (a) and (b) in Problem 1 are satisfied, and \(\Pr_{\mathcal{M}}^{\sigma}(\varphi)\geq\lambda\) for a given parameter \(\lambda\geq 0\)._

Although the parameter \(\lambda\) that specifies the threshold for satisfaction of \(\varphi\) is assumed to be given, the approach can easily be adapted to compute the optimal \(\lambda\).

## 4 Nonconvex Formulation for IRL in POMDPs

In this section, we formulate Problem 1 and Problem 2 as finding saddle points of nonconvex functions. Then, we propose an algorithm based on solving a nonconvex optimization problem to compute such saddle points. We emphasize (see Remark 1) that we compute \(\mathrm{M}\)-FSCs for POMDPs by computing memoryless policies \(\sigma\) on larger product POMDPs. Indeed, in the remainder of the paper, we reason directly on the product POMDP, which is the product of a POMDP and an FSC and yields a POMDP with state-memory pairs [27].

_Substituting Visitation Counts_.
We eliminate the (infinite) time dependency in \(H^{\gamma}_{\sigma}\) and the matching condition by a substitution of variables involving the policy-induced discounted state visitation count \(\mu^{\gamma}_{\sigma}:\mathcal{S}\mapsto\mathbb{R}_{+}\) and _state-action visitation count_\(\nu^{\gamma}_{\sigma}:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}_{+}\). For a policy \(\sigma\), state \(s\), and action \(\alpha\), the discounted state and state-action visitation counts are defined by \[\mu^{\gamma}_{\sigma}(s):=\mathbb{E}_{S_{t}}[\sum_{t=1}^{\infty}\gamma^{t} \mathbb{1}_{\{S_{t}=s\}}|\sigma]\text{ and }\nu^{\gamma}_{\sigma}(s,\alpha):= \mathbb{E}_{A_{t},S_{t}}[\sum_{t=1}^{\infty}\gamma^{t}\mathbb{1}_{\{S_{t}=s,A _{t}=\alpha\}}|\sigma],\] where \(\mathbb{1}_{\{\cdot\}}\) is the indicator function. From these definitions, it is straightforward to deduce that \(\nu^{\gamma}_{\sigma}(s,\alpha)=\pi_{s,\alpha}\mu^{\gamma}_{\sigma}(s)\), where \(\pi_{s,\alpha}=\mathbb{P}[A_{t}=a|S_{t}=s]\). It is also straightforward to check that for all \(s\in\mathcal{S}\) and \(\alpha\in\mathcal{A}\), \(\mu^{\gamma}_{\sigma}(s)\geq 0\), \(\nu^{\gamma}_{\sigma}(s,\alpha)\geq 0\), and \(\mu^{\gamma}_{\sigma}(s)=\sum_{\alpha\in\mathcal{A}}\nu^{\gamma}_{\sigma}(s,\alpha)\). We first provide a concave expression for the discounted causal entropy \(H^{\gamma}_{\sigma}\) as a function of the visitation counts \(\mu_{\sigma}^{\gamma}\) and \(\nu_{\sigma}^{\gamma}\): \[H_{\sigma}^{\gamma} :=\sum\nolimits_{t=0}^{\infty}\gamma^{t}\mathbb{E}_{S_{t}^{ \sigma},A_{t}^{\sigma}}[-\log(\pi_{s_{t},\alpha_{t}})]\] \[=\sum\nolimits_{t=0}^{\infty}\sum\nolimits_{(s,\alpha)\in \mathcal{S}\times\mathcal{A}}-(\log\pi_{s,\alpha})\pi_{s,\alpha}\gamma^{t} \mathbb{P}[S_{t}^{\sigma}=s]\] \[=\sum\nolimits_{(s,\alpha)\in\mathcal{S}\times\mathcal{A}}-(\log \pi_{s,\alpha})\pi_{s,\alpha}\mu_{\sigma}^{\gamma}(s)\] \[=\sum\nolimits_{(s,\alpha)\in\mathcal{S}\times\mathcal{A}}-\log \frac{\nu_{\sigma}^{\gamma}(s,\alpha)}{\mu_{\sigma}^{\gamma}(s)}\nu_{\sigma} ^{\gamma}(s,\alpha), \tag{3}\] where the first equality is due to the definition of the discounted causal entropy \(H_{\sigma}^{\gamma}\), the second equality is obtained by expanding the expectation. The third and fourth equalities follow by the definition of the state visitation count \(\mu_{\sigma}^{\gamma}\), and the state-action visitation count \(\nu_{\sigma}^{\gamma}\). We prove in the appendix that the above expression is indeed concave in the visitation counts. Next, we obtain a _linear_ expression in \(\nu_{\sigma}^{\gamma}\) for the discounted return expectation \(R_{\sigma}^{\theta}\) as: \[R_{\sigma}^{\theta} = \sum_{t=0}^{\infty}\sum_{(s,\alpha)\in\mathcal{S}\times\mathcal{ A}}\mathcal{R}^{\theta}(s,\alpha)\gamma^{t}\mathbb{P}[S_{t}^{\sigma}=s,A_{t}^{ \sigma}=\alpha] \tag{4}\] \[= \sum_{(s,\alpha)\in\mathcal{S}\times\mathcal{A}}\mathcal{R}^{ \theta}(s,\alpha)\nu_{\sigma}^{\gamma}(s,\alpha),\] where the second equality is obtained by the definition of the visitation count \(\nu_{\sigma}^{\gamma}\). The following _nonconvex_ constraint in \(\mu_{\sigma}^{\gamma}(s)\) and \(\sigma_{z,\alpha}\) ensures observation-based policies: \[\nu_{\sigma}^{\gamma}(s,\alpha)=\mu_{\sigma}^{\gamma}(s)\sum\nolimits_{z\in \mathcal{Z}}\mathcal{O}(z|s)\sigma_{z,\alpha}. \tag{5}\] Finally, the variables for the discounted visitation counts must satisfy the so-called _Bellman flow constraint_[9] to ensure that the policy is well-defined. 
For each state \(s\in\mathcal{S}\), \[\mu_{\sigma}^{\gamma}(s)=\mu_{0}(s)+\gamma\sum_{s^{\prime}\in \mathcal{S}}\sum_{\alpha\in\mathcal{A}}\mathcal{P}(s|s^{\prime},\alpha)\nu_{ \sigma}^{\gamma}(s^{\prime},\alpha). \tag{6}\]

_Saddle Point Formulation._ Computing a policy \(\sigma\) that satisfies the return matching constraint \(R_{\sigma}^{\theta}=\bar{R}^{\theta}\) might be infeasible due to \(\bar{R}^{\theta}\) being an empirical estimate from the finite set of demonstrations \(\mathcal{D}\). Additionally, the feature matching constraint might also be infeasible due to the information asymmetry between the expert and the learner, e.g., the expert has full observation. We build on a saddle point computation problem to incorporate the return matching constraints into the objective of the forward problem, similar to other IRL algorithms in the literature. Specifically, the desired weight vector \(\theta\) and policy \(\sigma\) of Problem 1 and Problem 2 are the solutions of \(\min_{\theta}f(\theta):=\max_{\sigma}H_{\sigma}^{\gamma}+(R_{\sigma}^{\theta}- \bar{R}^{\theta})\). The function \(f\) corresponds to the inner optimization problem when the reward parameter is fixed. That is, \(f(\theta)\) computes a policy \(\sigma\) that maximizes the sum \(H_{\sigma}^{\gamma}+R_{\sigma}^{\theta}\) of the causal entropy and the current estimate of the reward function. In other words, \(f(\theta)\) returns the solution to the forward problem, i.e., finding an optimal policy on the POMDP (when the entropy term is removed). Algorithm 1 updates the reward weights by using gradient descent. Initially, the policy \(\sigma^{0}\) is a uniformly random policy and the weight \(\theta^{0}\) is a nonzero vector. At iteration \(k\geq 0\), the policy \(\sigma^{k+1}=\operatorname*{arg\,max}_{\sigma}H_{\sigma}^{\gamma}+(R_{\sigma} ^{\theta^{k}}-\bar{R}^{\theta^{k}})\) is the optimal policy on the POMDP under the current reward estimate \(\mathcal{R}^{\theta^{k}}\) given by \(\theta^{k}\). That is, \(\sigma^{k+1}\) is the solution to the _forward problem_. Then, to update the weight \(\theta\), Algorithm 1 computes the gradient \(\nabla_{\theta}f\) with respect to \(\theta\) as follows: \[\nabla_{\theta}f(\theta;\sigma)=\sum_{s,\alpha\in\mathcal{S}\times\mathcal{ A}}\nu_{\sigma}^{\gamma}(s,\alpha)\nabla_{\theta}\mathcal{R}^{\theta}(s,\alpha)- \frac{1}{N}\sum_{b^{\tau}\in\mathcal{D}}\sum_{b_{i}\in b^{\tau}}\gamma^{i} \sum_{s\in\mathcal{S}}b_{i}(s)\nabla_{\theta}\mathcal{R}^{\theta}(s,\alpha_ {i}).\] We develop the algorithm SCPForward, presented in the next section, to solve the forward problem, i.e., computing \(\sigma^{k+1}\) given \(\theta^{k}\), in an efficient and scalable manner while incorporating high-level task specifications to guide the learning.
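As an illustration of the weight update in Algorithm 1, the sketch below evaluates \(\nabla_{\theta}f\) for the common linear encoding \(\mathcal{R}^{\theta}(s,\alpha)=\theta^{\mathrm{T}}\phi(s,\alpha)\), for which \(\nabla_{\theta}\mathcal{R}^{\theta}(s,\alpha)=\phi(s,\alpha)\). The array shapes, names, and the step-size variable are our own assumptions, not the paper's code.

```python
import numpy as np

def reward_weight_gradient(nu, phi, belief_trajs, action_trajs, gamma):
    """Gradient of f w.r.t. theta for a linear reward R_theta(s, a) = theta^T phi(s, a).

    Assumed shapes (our own convention):
      nu           : (S, A)     state-action visitation counts of the current policy sigma^{k+1}
      phi          : (S, A, d)  feature vectors phi(s, a)
      belief_trajs : list of (T_n, S) arrays, one belief trajectory per demonstration
      action_trajs : list of length-T_n action-index sequences aligned with belief_trajs
    """
    # Learner term: sum_{s,a} nu(s,a) phi(s,a)
    learner = np.einsum('sa,sad->d', nu, phi)
    # Empirical expert term: (1/N) sum_traj sum_i gamma^i sum_s b_i(s) phi(s, a_i)
    expert = np.zeros(phi.shape[-1])
    for beliefs, actions in zip(belief_trajs, action_trajs):
        for i, (b_i, a_i) in enumerate(zip(beliefs, actions)):
            expert += (gamma ** i) * (b_i @ phi[:, a_i, :])
    expert /= len(belief_trajs)
    return learner - expert

# One outer iteration of Algorithm 1 would then update the weights by gradient descent, e.g.
# theta = theta - step_size * reward_weight_gradient(nu, phi, belief_trajs, action_trajs, gamma)
```

Here, `step_size` is a hypothetical learning-rate hyperparameter; the paper does not prescribe a particular schedule.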
_Nonconvex Formulation of the Forward Problem_. Given a weight vector \(\theta^{k}\), we take advantage of the obtained substitution by the expected visitation counts to formulate the _forward problem_ associated to Problem 1 as the nonconvex optimization problem: \[\underset{\mu_{\sigma}^{\gamma},\nu_{\sigma}^{\gamma},\sigma}{\text{maximize}}\ \sum_{(s,\alpha)\in\mathcal{S}\times\mathcal{A}}-\log\frac{\nu_{\sigma}^{\gamma}(s,\alpha)}{\mu_{\sigma}^{\gamma}(s)}\nu_{\sigma}^{\gamma}(s,\alpha)+\sum_{(s,\alpha)\in\mathcal{S}\times\mathcal{A}}\mathcal{R}^{\theta^{k}}(s,\alpha)\nu_{\sigma}^{\gamma}(s,\alpha) \tag{7}\] \[\text{subject to}\quad(5)-(6),\] \[\forall(s,\alpha)\in\mathcal{S}\times\mathcal{A},\ \ \mu_{\sigma}^{\gamma}(s)\geq 0,\ \ \nu_{\sigma}^{\gamma}(s,\alpha)\geq 0, \tag{8}\] \[\forall(s,\alpha)\in\mathcal{S}\times\mathcal{A},\ \ \mu_{\sigma}^{\gamma}(s)=\sum_{\alpha\in\mathcal{A}}\nu_{\sigma}^{\gamma}(s,\alpha), \tag{9}\] where the source of nonconvexity is from (5), and we remove the constant \(-\bar{R}^{\theta^{k}}\) from the cost function of the above optimization problem.

## 5 Sequential Linear Programming Formulation

We develop an algorithm, SCPForward, adapting a sequential convex programming (SCP) scheme to efficiently solve the nonconvex _forward problem_ (7)-(9). Indeed, SCPForward involves a _verification step_ to compute sound policies and visitation counts, which is not present in existing SCP schemes. Additionally, we describe in the next section how to take advantage of high-level task specifications (Problem 2) through slight modifications of the obtained optimization problem solved by SCPForward.

### Linearizing Nonconvex Optimization Problem

SCPForward iteratively linearizes the nonconvex constraints in (5) around a previous solution. However, the linearization may result in an infeasible or unbounded linear subproblem [25]. We first add _slack variables_ to the linearized constraints to ensure feasibility. The linearized problem may not accurately approximate the nonconvex problem if the solutions to this problem deviate significantly from the previous solution. Thus, we utilize trust region constraints [25] to ensure that the linearization remains an accurate approximation of the nonconvex problem. At each iteration, we introduce a _verification step_ to ensure that the computed policy and visitation counts are not just approximations but actually satisfy the nonconvex policy constraint (5), improve the realized cost function over past iterations, and satisfy the temporal logic specifications, if available.

_Linearizing Nonconvex Constraints and Adding Slack Variables_. We linearize the nonconvex constraint (5), which is quadratic in \(\mu_{\sigma}^{\gamma}(s)\) and \(\sigma_{z,\alpha}\), around the previously computed solution denoted by \(\hat{\sigma}\), \(\mu_{\hat{\sigma}}^{\gamma}\), and \(\nu_{\hat{\sigma}}^{\gamma}\). However, the linearized constraints may be infeasible.
We alleviate this drawback by adding _slack variables_ \(k_{s,\alpha}\in\mathbb{R}\) for \((s,\alpha)\in\mathcal{S}\times\mathcal{A}\), which results in the affine constraint: \[\nu_{\sigma}^{\gamma}(s,\alpha)+k_{s,\alpha}=\mu_{\hat{\sigma}}^{\gamma}(s)\sum\nolimits_{z\in\mathcal{Z}}\mathcal{O}(z|s)\sigma_{z,\alpha}+\big(\mu_{\sigma}^{\gamma}(s)-\mu_{\hat{\sigma}}^{\gamma}(s)\big)\sum\nolimits_{z\in\mathcal{Z}}\mathcal{O}(z|s)\hat{\sigma}_{z,\alpha}. \tag{10}\]

_Trust Region Constraints_. The linearization may be inaccurate if the solution deviates significantly from the previous solution. We add the following _trust region_ constraints to alleviate this drawback: \[\forall(z,\alpha)\in\mathcal{Z}\times\mathcal{A},\quad\hat{\sigma}_{z,\alpha}/\rho\leq\sigma_{z,\alpha}\leq\hat{\sigma}_{z,\alpha}\rho, \tag{11}\] where \(\rho\) is the size of the trust region to restrict the set of allowed policies in the linearized problem. We augment the cost function in (7) with the term \(-\beta\sum_{(s,\alpha)\in\mathcal{S}\times\mathcal{A}}k_{s,\alpha}\) to ensure that we minimize the violation of the linearized constraints, where \(\beta\) is a large positive constant.

_Linearized Problem_. Finally, by differentiating \(x\mapsto x\log x\) and \(y\mapsto x\log(x/y)\), we obtain the coefficients required to linearize the convex causal entropy cost function in (7). Thus, we obtain the following linear program (LP): \[\underset{\mu_{\sigma}^{\gamma},\nu_{\sigma}^{\gamma},\sigma}{\text{maximize}}\ \sum\nolimits_{(s,\alpha)\in\mathcal{S}\times\mathcal{A}}-\Bigg(\beta k_{s,\alpha}-\Big(\frac{\nu_{\hat{\sigma}}^{\gamma}(s,\alpha)}{\mu_{\hat{\sigma}}^{\gamma}(s)}\Big)\mu_{\sigma}^{\gamma}(s)+\Big(\log\frac{\nu_{\hat{\sigma}}^{\gamma}(s,\alpha)}{\mu_{\hat{\sigma}}^{\gamma}(s)}+1\Big)\nu_{\sigma}^{\gamma}(s,\alpha)\Bigg)+\sum\limits_{(s,\alpha)\in\mathcal{S}\times\mathcal{A}}\mathcal{R}^{\theta^{k}}(s,\alpha)\nu_{\sigma}^{\gamma}(s,\alpha) \tag{12}\] \[\mathrm{subject\ to}\quad(6),(8)-(11).\]

_Verification Step_. After each iteration, the linearization might be inaccurate, i.e., the resulting policy \(\tilde{\sigma}\) and _potentially inaccurate_ visitation counts \(\tilde{\nu}_{\tilde{\sigma}}^{\gamma},\tilde{\mu}_{\tilde{\sigma}}^{\gamma}\) might not satisfy the nonconvex policy constraint (5). As a consequence of the potential infeasibility, the currently attained (linearized) optimal cost might significantly differ from the _realized cost_ attained by the feasible visitation counts for \(\tilde{\sigma}\). Additionally, existing SCP schemes linearize the nonconvex problem around the previous, possibly inaccurate, solutions for \(\tilde{\nu}_{\tilde{\sigma}}^{\gamma}\) and \(\tilde{\mu}_{\tilde{\sigma}}^{\gamma}\), further propagating the inaccuracy. The proposed _verification step_ solves these issues. Given the computed policy \(\tilde{\sigma}\), \(\mathtt{SCPForward}\) computes the _unique and sound_ solution for the visitation count \(\mu_{\tilde{\sigma}}^{\gamma}\) by solving the corresponding _Bellman flow_ constraints: \[\mu_{\tilde{\sigma}}^{\gamma}(s)=\mu_{0}(s)+\gamma\sum\limits_{s^{\prime}\in\mathcal{S}}\sum\limits_{\alpha\in\mathcal{A}}\mathcal{P}(s|s^{\prime},\alpha)\mu_{\tilde{\sigma}}^{\gamma}(s^{\prime})\sum\limits_{z\in\mathcal{Z}}\mathcal{O}(z|s^{\prime})\tilde{\sigma}_{z,\alpha}, \tag{13}\] for all \(s\in\mathcal{S}\), and where \(\mu_{\tilde{\sigma}}^{\gamma}\geq 0\) is the only variable of the linear program.
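As an illustration of this verification step, the following minimal numpy sketch recovers the sound visitation counts for a fixed policy \(\tilde{\sigma}\) by solving (13) directly as a linear system; for \(\gamma<1\) the system has a unique nonnegative solution, so the sign constraint of the feasibility LP holds automatically. The array shapes and function name are our own assumptions, not the paper's implementation.

```python
import numpy as np

def sound_visitation_counts(P, O, sigma, mu0, gamma):
    """Solve the Bellman flow equations (13) for the visitation counts of a fixed policy.

    Assumed shapes (our own convention):
      P     : (S, A, S) transition probabilities, P[s, a, s'] = P(s' | s, a)
      O     : (S, Z)    observation probabilities, O[s, z] = O(z | s)
      sigma : (Z, A)    memoryless observation-based policy sigma[z, a]
      mu0   : (S,)      initial state distribution
      gamma : float     discount factor in [0, 1)
    """
    S = P.shape[0]
    # State-conditioned action probabilities induced by sigma: pi[s, a] = sum_z O(z|s) sigma[z, a]
    pi = O @ sigma                                  # (S, A)
    # Policy-averaged transition matrix M[s, s'] = sum_a pi[s, a] P[s, a, s']
    M = np.einsum('sa,sap->sp', pi, P)              # (S, S)
    # (13) reads mu = mu0 + gamma * M^T mu, i.e. (I - gamma M^T) mu = mu0
    mu = np.linalg.solve(np.eye(S) - gamma * M.T, mu0)
    return mu
```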
Then, \(\mathtt{SCPForward}\) computes \(\nu_{\tilde{\sigma}}^{\gamma}(s,\alpha)=\mu_{\tilde{\sigma}}^{\gamma}(s)\sum\nolimits_{z\in\mathcal{Z}}\mathcal{O}(z|s)\tilde{\sigma}_{z,\alpha}\), and the _realized cost_ at the current iteration is defined by \[\mathrm{C}(\tilde{\sigma},\theta^{k})=\sum\limits_{(s,\alpha)\in\mathcal{S}\times\mathcal{A}}-\log\frac{\nu_{\tilde{\sigma}}^{\gamma}(s,\alpha)}{\mu_{\tilde{\sigma}}^{\gamma}(s)}\nu_{\tilde{\sigma}}^{\gamma}(s,\alpha)+\sum\limits_{(s,\alpha)\in\mathcal{S}\times\mathcal{A}}\mathcal{R}^{\theta^{k}}(s,\alpha)\nu_{\tilde{\sigma}}^{\gamma}(s,\alpha), \tag{14}\] where we assume \(0\log 0=0\). Finally, if the realized cost \(\mathrm{C}(\tilde{\sigma},\theta^{k})\) does not improve over the previous cost \(\mathrm{C}(\hat{\sigma},\theta^{k})\), the verification step rejects the obtained policy \(\tilde{\sigma}\), contracts the trust region, and \(\mathtt{SCPForward}\) iterates with the previous solutions \(\hat{\sigma}\), \(\mu_{\hat{\sigma}}^{\gamma}\), and \(\nu_{\hat{\sigma}}^{\gamma}\). Otherwise, the linearization is sufficiently accurate, the trust region is expanded, and \(\mathtt{SCPForward}\) iterates with \(\tilde{\sigma}\), \(\mu_{\tilde{\sigma}}^{\gamma}\), and \(\nu_{\tilde{\sigma}}^{\gamma}\). _By incorporating this verification step, we ensure that \(\mathtt{SCPForward}\) always linearizes the nonconvex optimization problem around a solution that satisfies the nonconvex constraint (5)._

### Incorporating High-Level Task Specifications

Given high-level side information on the agent's tasks as the LTL formula \(\varphi\), we first compute the product of the POMDP and the \(\omega\)-automaton representing \(\varphi\) to find the set \(\mathcal{T}\subseteq\mathcal{S}\) of states, called target or reach states, satisfying \(\varphi\) with probability \(1\), by using standard graph-based algorithms as part of a preprocessing step. We refer the reader to [19] for a detailed introduction on how LTL specifications can be reduced to reachability specifications given by \(\mathcal{T}\). As a consequence, the probability of satisfying \(\varphi\) is the sum of the probabilities of reaching the target states \(s\in\mathcal{T}\), which are given by the _undiscounted state visitation count_ \(\mu^{\mathrm{sp}}_{\sigma}\). That is, \(\Pr_{\mathcal{M}}^{\sigma}(\varphi)=\sum_{s\in\mathcal{T}}\mu^{\mathrm{sp}}_{ \sigma}(s)\). Unless \(\gamma=1\), \(\mu^{\mathrm{sp}}_{\sigma}\neq\mu^{\gamma}_{\sigma}\). Thus, we introduce new variables \(\mu^{\mathrm{sp}}_{\sigma},\nu^{\mathrm{sp}}_{\sigma}\), and the adequate constraints in the linearized problem (12).

_Incorporating Undiscounted Visitation Variables to Linearized Problem_. We append new constraints, similar to (8), (9), and (10), into the linearized problem (12), where the variables \(\mu^{\gamma}_{\sigma},\nu^{\gamma}_{\sigma},k_{s,\alpha},\mu^{\gamma}_{\hat{ \sigma}}\), \(\nu^{\gamma}_{\hat{\sigma}}\) are replaced by \(\mu^{\mathrm{sp}}_{\sigma},\nu^{\mathrm{sp}}_{\sigma}\), \(k^{\mathrm{sp}}_{s,\alpha},\mu^{\mathrm{sp}}_{\hat{\sigma}}\), \(\nu^{\mathrm{sp}}_{\hat{\sigma}}\), respectively.
Further, we add the constraint \[\mu^{\mathrm{sp}}_{\sigma}(s)=\mu_{0}(s)+\sum_{s^{\prime}\in\mathcal{S} \setminus\mathcal{T}}\sum_{\alpha\in\mathcal{A}}\mathcal{P}(s|s^{\prime}, \alpha)\nu^{\mathrm{sp}}_{\sigma}(s^{\prime},\alpha), \tag{15}\] which is a modification of the _Bellman flow constraints_ such that \(\mu^{\mathrm{sp}}_{\sigma}(s)\) for all \(s\in\mathcal{T}\) only counts transitions from non-target states. Finally, we penalize the introduced slack variables for feasibility of the linearization by augmenting the cost function with the term \(-\beta\sum_{(s,\alpha)\in\mathcal{S}\times\mathcal{A}}k^{\mathrm{sp}}_{s,\alpha}\).

_Relaxing Specification Constraints_. To incorporate the probability of satisfying the specifications, we add the following constraint to the linearized problem: \[(\mathrm{spec}):=\sum_{s\in\mathcal{T}}\mu^{\mathrm{sp}}_{\sigma}(s)+\Gamma^{ \mathrm{sp}}\geq\lambda, \tag{16}\] where we introduce \(\Gamma^{\mathrm{sp}}\geq 0\) as a slack variable ensuring that the linearized problem is always feasible. Further, we augment the cost function with \(-\beta^{\mathrm{sp}}\Gamma^{\mathrm{sp}}\) to penalize violating \(\varphi\), where \(\beta^{\mathrm{sp}}\) is a positive hyperparameter.

_Updating Verification Step_. We modify the previously introduced realized cost \(\mathrm{C}(\tilde{\sigma},\theta^{k})\) to penalize the case in which the obtained policy does not satisfy the specification \(\varphi\). This cost also accounts for the linearization inaccuracy of the new policy constraint due to \(\sigma\), \(\mu^{\mathrm{sp}}_{\sigma}\), and \(\nu^{\mathrm{sp}}_{\sigma}\). At each iteration, \(\mathrm{SCPForward}\) computes the accurate \(\mu^{\mathrm{sp}}_{\tilde{\sigma}}\) of the current policy \(\tilde{\sigma}\) by solving a feasibility LP with constraints given by the _modified Bellman flow constraints_ (15). Then, it adds \(\mathrm{C}^{\mathrm{sp}}_{\tilde{\sigma}}=\min\{0,(\sum_{s\in\mathcal{T}}\mu^{ \mathrm{sp}}_{\tilde{\sigma}}(s)-\lambda)\beta^{\mathrm{sp}}\}\) to the realized cost to take the specification constraints into account.

_Convergence to a Local Optimum Solution_. The convergence guarantees of the proposed sequential convex scheme with trust regions follow straightforwardly from the general convergence of sequential convex programming (SCP) schemes, as proved in Theorem \(3.14\) and Theorem \(4.7\) of [25]. Specifically, weak convergence is ensured as the SCP algorithm generates a set of convergent subsequences, all of which satisfy the first-order conditions [25]. This is not convergence in its strict sense due to potential oscillation between several limit points. Still, perhaps surprisingly, most of the convergence claims of nonlinear optimization schemes fall into this category. Furthermore, under the right regularity assumptions on the cost function, the authors of [25] proved that SCP schemes with trust regions can converge to a local optimum solution with a superlinear convergence rate.

## 6 Numerical Experiments

We evaluate the proposed IRL algorithm on several POMDP instances from [35], and on a simulated wheeled ground robot operating in a high-fidelity, continuous, 3-D Unity simulation. We first compare our IRL algorithm with a straightforward variant of GAIL [30] adapted for POMDPs. Then, we provide results on the data efficiency of the proposed approach when taking advantage of side information.
Finally, we demonstrate the scalability of the routine SCPForward for solving the _forward_ problem through comparisons with state-of-the-art solvers such as SolvePOMDP [36], SARSOP [37], and PRISM-POMDP [38]. We provide the code for reproducibility of the results in this paper at [https://github.com/wuwushrek/MCE_IRL_POMDPS](https://github.com/wuwushrek/MCE_IRL_POMDPS).

### Simulation on Hand-Crafted POMDP Instances

We first evaluate the proposed IRL algorithm on several POMDP instances extracted from the work [35].

_Benchmark Set._ The POMDP instances are as follows. _Evade_ is a turn-based game where the agent must reach a destination without being intercepted by a faster player. In _Avoid_, the agent must avoid being detected by two other moving players following certain preset, yet unknown, routes. In _Intercept_, the agent must intercept another player who is trying to exit a gridworld. In _Rocks_, the agent must sample at least one good rock out of several rocks without any failures. In _Obstacle_, an agent must find an exit in a gridworld without colliding with any static obstacles. In these instances, the agent only observes a fixed radius around its current position, see Figure 1. Finally, in _Maze_, the agent must exit a maze as fast as possible while observing only the walls around it, and should not get stuck in any of the trap states.

_Variants of Learned Policies and Experts_. We refer to four types of policies. The type of policy depends on whether it uses side information from a temporal specification \(\varphi\) or not, and whether it uses a memory size \(\mathrm{M}=1\) or \(\mathrm{M}=10\). We also consider two types of experts. The first expert has full information about the environment and computes an optimal policy in the underlying MDP. The second expert has partial observation and computes a locally optimal policy in the POMDP with a memory size of \(\mathrm{M}=15\). Recall that the agent always has partial information. Therefore, the first type of expert corresponds to having information asymmetry between the learning agent and the expert.

_Besides, we consider as a baseline a variant of GAIL where we learn the policy on the MDP without side information, and extend it to POMDPs via an offline computation of the belief over the states. Specifically, we find the optimal policy on the MDP by solving the convex optimization problem corresponding to the forward problem on MDPs. The resulting policy is a state-based policy that needs to be transformed in order to act on a POMDP. The transformation is done by exploiting the expert demonstrations to construct a belief state. That is, the trajectories \(\tau\) of the expert are used in Bayesian belief updates (1) to estimate the probability of being in each state of the POMDP. Thus, by combining the computed belief and the state-based policy, we obtain an observation-based policy for the POMDP. Doing so could provide a significant advantage to the GAIL variant, since the state-based policy is the optimal policy on the MDP. However, despite the high performance in practice, the resulting policy on the POMDP is generally suboptimal, even if the MDP policy is optimal._

We discuss the effect of side information and memory in the corresponding policies. While we only detail the _Maze_ example, where the agent must exit a maze as fast as possible, we observe similar patterns for the other examples. Detailed results for the other examples are provided in the appendix.

Figure 1: Some examples from the benchmark set provided in [35].
From left to right, we have the _Maze_, _Avoid_, and _Evade_ environments, respectively.

#### 6.1.1 Maze Example

The POMDP \(\mathcal{M}\) is specified by \(\mathcal{S}=\{s_{1},\ldots,s_{14}\}\) corresponding to the cell labels in Figure 1. An agent in the maze only observes whether or not there is a wall (in blue) in a neighboring cell. That is, the set of observations is \(\mathcal{O}=\{o_{1},\ldots,o_{6},o_{7}\}\). For example, \(o_{1}\) corresponds to observing west and north walls (\(s_{1}\)), \(o_{2}\) to north and south walls (\(s_{2}\), \(s_{4}\)), and \(o_{5}\) to east and west walls (\(s_{6},s_{7},s_{8},s_{9},s_{10},s_{11}\)). The observations \(o_{6}\) and \(o_{7}\) denote the target state (\(s_{13}\)) and the bad states (\(s_{12}\), \(s_{14}\)). The transition model is stochastic with a probability of slipping \(p=0.1\). Further, the states \(s_{13}\) and \(s_{14}\) lead to the end of the simulation (trapping states). In the IRL experiments, we consider three feature functions. We penalize taking more steps with \(\phi^{\mathrm{time}}(s,\alpha)=-1\) for all \(s,\alpha\). We provide a positive reward when reaching \(s_{13}\) with \(\phi^{\mathrm{target}}(s,\alpha)=1\) if \(s=s_{13}\) and \(\phi^{\mathrm{target}}(s,\alpha)=0\) otherwise. We penalize the bad states \(s_{12}\) and \(s_{14}\) with \(\phi^{\mathrm{bad}}(s,\alpha)=-1\) if \(s=s_{12}\) or \(s=s_{14}\), and \(\phi^{\mathrm{bad}}(s,\alpha)=0\) otherwise. _Finally, we have the LTL formula \(\varphi=\textbf{G}\,\,\neg\,\mathrm{bad}\) as the task specification, where \(\mathrm{bad}\) is an atomic proposition that is true if the current state \(s=s_{12}\) or \(s=s_{14}\). We constrain the learned policy to satisfy \(\Pr_{\mathcal{M}}^{\sigma}(\textbf{G}\,\,\neg\,\mathrm{bad})\geq 0.9\)._

_Side Information Alleviates the Information Asymmetry_. Figure 2 shows that if there is an information asymmetry between the learning agent and the expert, the policies that do not utilize side information suffer a significant performance drop. The policies that do not incorporate side information into learning obtain a lower performance by \(57\)% under information asymmetry, as shown in the top row of Figure 2. On the other hand, as seen in the bottom row of Figure 2, the performance of the policies that use side information is almost unaffected by the information asymmetry.

Figure 2: Representative results on the Maze example; each sub-figure represents the average accumulated reward under the true reward function (\(R_{\sigma}^{\theta}\)) over 1000 runs as a function of time. Compare the two rows: The policies in the top row that do not utilize side information suffer a performance drop under information asymmetry. On the other hand, in the bottom row, the performance of policies incorporating side information into learning does not decrease under information asymmetry. Compare the two columns: The performance of the finite-memory policies in the left column is significantly better than that of the memoryless policies. Except for the memoryless policies without side information, our algorithm outperforms GAIL. The expert reward on the MDP is on average \(48.22\), while we obtain the value \(47.83\) for an expert acting on the POMDP.

_Memory Leads to More Performant Policies_. The results in Figure 2 demonstrate that incorporating memory into the policies improves the performance, i.e., the attained reward, in all examples, both in solving the forward problem and in learning policies from expert demonstrations.
Incorporating memory partially alleviates the effects of information asymmetry, as the performance of the finite-memory policy decreases by \(18\)% under information asymmetry as opposed to \(57\)% for the memoryless policy. We see in Table 1 that incorporating memory into the policy on the \(\mathrm{Maze}\) and \(\mathrm{Rocks}\) benchmarks allows \(\mathrm{SCPForward}\) to compute policies that are almost optimal, as evidenced by obtaining almost the same reward as the solver SARSOP.

_Side Information Improves Data Efficiency_. Figure 4 shows that even in a low-data regime, learning with task specifications achieves significantly better performance than learning without the task specifications.

Figure 4: We show the data efficiency of the proposed approach through the total reward obtained by the learned policies as a function of the number of expert demonstrations (no information asymmetry). The figure on the left shows the performance of learning memoryless policies, while the figure on the right shows the performance of a \(5\)-FSC.

Figure 3: Representative results on the \(\mathrm{Avoid}\) example showing the reward of the policies under the true reward function (\(R_{\sigma}^{\theta}\)) versus the time steps.

_Side Information Improves Performance_. Besides, in a more complicated environment such as \(\mathrm{Avoid}\), Figure 3 shows that task specifications are crucial to even hope to learn the task. Specifically, \(\mathrm{Avoid}[n,r,slip]\) is a turn-based game, where the agent must reach an exit point while avoiding being detected by two other moving players following certain predefined yet unknown routes. The agent can only observe the players if they are within a fixed radius from the agent's current position when the action _scan_ is performed. Besides, with the players' speed being uncertain, their positions along the routes cannot be inferred by the agent. The parameters \(n\), \(r\), and \(slip\) specify the dimension of the grid, the view radius, and the slipping probability, respectively. We consider four feature functions to parameterize the unknown reward. The first feature provides a positive reward to the agent upon reaching the exit point. The second feature penalizes the agent if it collides with a player. The third feature penalizes the agent if it is detected by a player. The fourth feature imposes a penalty cost for each action taken. We encode the side information as the temporal logic task specification _avoid being detected until reaching the exit point with probability greater than \(0.98\)_. Figure 3 shows that the algorithm is unable to learn the task without side information, while side information induces a learned policy that is optimal. Specifically, the learned policy without side information seems to focus only on avoiding being detected and collisions, as the corresponding learned features were close to zero.
\begin{table} \begin{tabular}{c c c c|c c|c c|c c} \hline \hline & & & \multicolumn{3}{c|}{SCPFForward} & \multicolumn{3}{c|}{SARSOP} & \multicolumn{2}{c}{SolvePOMDP} \\ Problem & \(|\mathcal{S}|\) & \(|\mathcal{S}\times\mathcal{O}|\) & \(|\mathcal{O}|\) & \(R_{\sigma}^{\theta}\) & Time (s) & \(R_{\sigma}^{\theta}\) & Time (s) & \(R_{\sigma}^{\theta}\) & Time (s) \\ \hline Maze & 17 & 162 & 11 & 39.24 & \(\mathbf{0.1}\) & \(\mathbf{47.83}\) & \(0.24\) & \(47.83\) & \(0.33\) \\ Maze (3-FSC) & 49 & 777 & 31 & 44.98 & \(\mathbf{0.6}\) & NA & NA & NA & NA \\ Maze (10-FSC) & 161 & 2891 & 101 & 46.32 & \(2.04\) & NA & NA & NA & NA \\ Obstacle[10] & 102 & 1126 & 5 & 19.71 & \(8.79\) & \(\mathbf{19.8}\) & \(\mathbf{0.02}\) & \(5.05\) & \(3600\) \\ Obstacle[10] (5-FSC) & 679 & 7545 & 31 & 19.77 & \(38\) & NA & NA & NA & NA \\ Obstacle[25] & 627 & 7306 & 5 & 19.59 & 14.22 & \(\mathbf{19.8}\) & \(\mathbf{0.1}\) & \(5.05\) & \(3600\) \\ Rock & 550 & 4643 & 67 & 19.68 & 12.2 & \(\mathbf{19.83}\) & \(\mathbf{0.05}\) & \(-\) & \(-\) \\ Rock (3-FSC) & 1648 & 23203 & 199 & 19.8 & 15.25 & NA & NA & \(-\) & \(-\) \\ Rock (5-FSC) & 2746 & 41759 & 331 & 19.82 & 97.84 & NA & NA & \(-\) & \(-\) \\ Intercept\([5,2,0]\) & 1321 & 5021 & 1025 & \(\mathbf{19.83}\) & \(\mathbf{10.28}\) & \(\mathbf{19.83}\) & \(13.71\) & \(-\) & \(-\) \\ Intercept\([5,2,0.1]\) & 1321 & 7041 & 1025 & \(\mathbf{19.81}\) & \(\mathbf{13.18}\) & \(\mathbf{19.81}\) & \(81.19\) & \(-\) & \(-\) \\ Evade\([5,2,0]\) & 2081 & 13561 & 1089 & \(\mathbf{97.3}\) & \(\mathbf{26.25}\) & \(\mathbf{97.3}\) & \(3600\) & \(-\) & \(-\) \\ Evade\([5,2,0.1]\) & 2081 & 16761 & 1089 & \(\mathbf{96.79}\) & \(\mathbf{26.25}\) & \(95.28\) & \(3600\) & \(-\) & \(-\) \\ Evade\([10,2,0]\) & 36361 & 341121 & 18383 & \(\mathbf{94.97}\) & \(\mathbf{3600}\) & \(-\) & \(-\) & \(-\) & \(-\) \\ Avoid\([4,2,0]\) & 2241 & 5697 & 1956 & \(\mathbf{9.86}\) & \(34.74\) & \(\mathbf{9.86}\) & \(\mathbf{9.19}\) & \(-\) & \(-\) \\ Avoid\([4,2,0.1]\) & 2241 & 8833 & 1956 & \(\mathbf{9.86}\) & \(\mathbf{14.63}\) & \(\mathbf{9.86}\) & \(210.47\) & \(-\) & \(-\) \\ Avoid\([7,2,0]\) & 19797 & 62133 & 3164 & \(\mathbf{9.72}\) & \(\mathbf{3503}\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \hline \hline \end{tabular} \end{table} Table 1: Results for the benchmark sets for solving the forward problem. On larger benchmarks (e.g., Evade and Avoid), \(\textsc{SCPFForward}\) can compute locally optimal policies, while the other solvers fail to provide solutions in the given time limit. In the environments \(\mathrm{Obstacle}[n]\), \(\mathrm{Intercept}[n,r,\mathrm{slip}]\), \(\mathrm{Evade}[n,r,\mathrm{slip}]\), and \(\mathrm{Avoid}[n,r,\mathrm{slip}]\), the parameters \(n\), \(r\), and \(\mathrm{slip}\) are the size of the gridworld, the view radius of the agent, and the probability of slippery, respectively. We set the time-out to \(3600\) seconds. An empty cell (denoted by \(-\)) represents the solver failed to compute any policy before the time-out, while NA refers to not applicable due to the approach being based on belief updates. #### 6.1.2 SCPForward Yields Better Scalability We highlight three observations regarding the scalability of SCPForward. First, the results in Table 1 show that only SARSOP is competitive with SCPForward on larger POMDPs. SolvePOMDP runs out of time in all but the smallest benchmarks, and PrismPOMDP runs out of memory in all benchmarks. 
Most of these approaches are based on updating a belief over the states, which for a large state space can become extremely computationally expensive. Second, in the benchmarks with smaller state spaces, e.g., _Maze_ and _Rock_, SARSOP can compute policies that yield better performance in less time. This is due to the efficiency of belief-based approaches on small-size problems. On the other hand, SARSOP does not scale to larger POMDPs with a larger number of states and observations. For example, by increasing the number of transitions in the _Intercept_ benchmark from \(5021\) to \(7041\), the computation time for SARSOP increases by \(516\)%. On the other hand, the computation time of SCPForward increases by only \(28\)%. Third, on the largest benchmarks, including tens of thousands of states and observations, SARSOP fails to compute any policy before time-out, while SCPForward finds a solution. Finally, we note that SCPForward can also compute a policy that maximizes the causal entropy and satisfies an LTL specification, unlike SARSOP.

### Simulation on a Ground Robot

We demonstrate an application of the proposed algorithm in a continuous 3-D Unity environment containing a Clearpath Warthog operating in a semi-structured village. A screenshot of the robot operating in this environment and its corresponding trajectory can be seen in Figure 5. This environment contains a variety of obstacles, including buildings, trees, and vehicles, as well as three terrain types describing our features \(\phi\): grass, gravel, and road. The simulated environment operates in a state space consisting of \(3350\) states, \(33254\) transitions, and \(944\) total observations. This simulation is used to gather data for training, and to test an agent's ability to follow a policy from the learned reward function in two experimental scenarios.

Figure 5: Left: A simulated Clearpath Warthog operating in a Unity simulation. Right: A demonstration provided by an expert.

In this experiment, we demonstrate the agent's ability to learn a reward function from demonstrations that are sub-optimal with respect to a known, true reward function. We also show how the learned policies perform compared to the optimal policies with full and partial observations obtained by solving the MDP or POMDP problem with the true reward function. The ground vehicle contains an autonomy stack consisting of three main subsystems--mapping, perception, and planning. The mapping subsystem, based on OmniMapper [1], performs simultaneous localization and mapping (SLAM) using LiDAR and IMU sensors, providing a map used during planning. The perception subsystem provides pixel-level semantic segmentation for each image in a video stream from an RGB camera into an ontology of terrain and object classes. Each semantic image is passed to a terrain projection algorithm which builds \(N\) binary occupancy feature maps of the known environment used for reward learning, where \(N\) is the number of features. The planning subsystem uses the maps produced from the previous subsystems and the trajectory from a learned policy to autonomously navigate to a waypoint.

_Expert Demonstrations and Reward Feature Encoding_. We collected \(10\) demonstrations of an expert teleoperating a robot to a predetermined waypoint (see Figure 6). The expert has an implicit preference to traverse the road, followed by grass, and lastly gravel.
Consequently, we encode the unknown reward function as a linear combination of known features: \(\mathcal{R}^{\theta}=\theta_{1}\phi^{\mathrm{road}}+\theta_{2}\phi^{\mathrm{ gravel}}+\theta_{3}\phi^{\mathrm{grass}}+\theta_{4}\phi^{\mathrm{time}}+\theta_{5} \phi^{\mathrm{goal}}\), where \(\phi^{i}\) returns a value of \(0\) when the feature of the corresponding state is not feature \(i\), and \(1\) otherwise. In order to incentivize the shortest path, the feature \(\mathrm{time}\) penalizes the number of actions taken in the environment before reaching the waypoint. Furthermore, \(\mathrm{goal}\) provides a positive reward upon reaching the waypoint. For comparisons of the learned policies, we use the values \(\theta=[0.2,-30,-2,-0.5,50]\) as the ground truth reward weight vector. We emphasize that the demonstrations are sub-optimal with respect to the above ground truth reward, as the vehicle often traverses gravel, corresponding to a high penalty reward.

Figure 6: Gridworld representation of the environment. The figure shows the area of the Unity environment where we applied the developed algorithm.

_Modeling Robot Dynamics as POMDPs_. From a ground truth map of the environment in the simulation, we obtain a high-level MDP abstraction of the learner's behavior on the entire state space. Then, we impose partial observability on the robot as follows: the robot does not see the entire map of the world but only sees a fixed radius \(r=4\) (in terms of the number of grid cells) around its current position. Furthermore, we also incorporate uncertainty in the sensor classification of terrain features such that with probability \(p=0.9\) the prediction is correct.

_Task Specifications_. In addition to the expert demonstrations, we constrain the learned policy to satisfy \(\mathrm{Pr}_{\mathcal{M}}^{\sigma}(\neg\operatorname{gravel}\ \mathbf{U}\ \mathrm{goal})\geq 0.9\), where \(\operatorname{gravel}\) is an atomic proposition that is true for states having gravel as their feature, and \(\mathrm{goal}\) is an atomic proposition that is true at each target state. Note that this side information does not necessarily enforce that the learner should reach the set of target states. Instead, if the learner reaches the target state, it should not drive on gravel with probability at least \(0.9\).

_Results_. Figure 6(a) shows how the learner with side information avoids the gravel compared to the learner without side information. Figure 6(b) further illustrates this result by empirically demonstrating that the proposed approach can efficiently take advantage of side information to compute policies that match the expert's desired behavior. Specifically, Figure 6(b) shows that the gain in the total reward of a learner without side information increases by \(294\%\) with respect to a learner with side information. Additionally, it is important to note in Figure 6 how the initial state distribution of the demonstrator trajectories is different from the initial state distribution during the evaluation of the learned policies (Figure 6(a)). Nevertheless, despite these distinctions, the learned policies can effectively navigate toward points present in the expert demonstrations and then maximally mimic these trajectories.

Figure 7: Impact of incorporating task specifications into reward learning.

## 7 Related Work

The closest work to ours is by [34], where they extend classical maximum-margin-based IRL techniques for MDPs to POMDPs.
However, even on MDPs, maximum-margin-based approaches cannot resolve the ambiguity caused by suboptimal demonstrations, and they work well only when there is a single reward function that is clearly better than the alternatives [39]. In contrast, we adopt causal entropy, which has been shown [39; 10] to alleviate these limitations on MDPs. Besides, [34] rely on efficient off-the-shelf solvers for the forward problem. Instead, this paper develops an algorithm that outperforms off-the-shelf solvers and can scale to POMDPs that are orders of magnitude larger compared to the examples in [34]. Further, [34] do not incorporate task specifications in their formulations.

One of the basic challenges in IRL is that finding a reward function and a policy that induce a behavior similar to the expert is an ill-defined problem. Prior work has addressed this challenge using maximum-margin formulations [40; 41; 42], as well as probabilistic models to compute a likelihood of the expert demonstrations [43; 8; 10]. We build on the latter approach, specifically maximum-causal-entropy IRL [9; 10; 23], which brings algorithmic benefits to IRL in POMDPs as mentioned in the introduction. We note that these maximum-causal-entropy IRL techniques assume that both the expert and the agent can fully observe the environment, and these approaches only apply to MDPs as opposed to POMDPs.

IRL under partial information has been studied in prior work [2; 44; 45; 46; 47]. Reference [44] considers the setting where the features of the reward function are partially specified, as opposed to having partial information over the state of the environment. The work in [2] considers a special case of POMDPs. It only infers a distribution over the future trajectories of the expert given demonstrations, as opposed to computing a policy that induces a similar behavior to the expert. The works in [45; 46; 47] assume that the states of the environment are either fully observable or fully hidden to the learning agent. Therefore, these approaches also consider a special case of POMDPs, as in [2]. We also note that none of these methods incorporate side information into IRL, and they do not provide guarantees on the performance of the policy with respect to a task specification.

The idea of using side information expressed in temporal logic to guide and augment IRL has been explored in some previous work. In [48; 22], the authors incorporate side information in the form of a temporal logic specification to learn policies that induce a behavior similar to the expert demonstrations and satisfy the specification. Reference [21] iteratively infers an underlying task specification that is consistent with the expert demonstrations and learns a policy and a reward function that satisfy the task specification. However, these methods also assume full information for both the expert and the agent.

## 8 Conclusion

We develop an algorithm for inverse reinforcement learning under partial observation. We empirically demonstrate that by incorporating task specifications into the learning process, we can alleviate the information asymmetry between the expert and the learner while increasing the data efficiency of the learning scheme. Further, we empirically demonstrate that our main routine SCPForward, used inside the IRL algorithm, solves the forward problem in a scalable manner and outperforms state-of-the-art POMDP solvers on instances with a large number of states, observations, and transitions.
_Work Limitations_. This work assumes that the transition and observation functions of the POMDP are known to the algorithm. Future work will investigate removing this assumption and developing model-free approaches. We will also integrate the framework with more expressive neural-network-based reward functions.

_Acknowledgements_. Research was sponsored by the Army Research Laboratory and the Office of Naval Research and accomplished under cooperative agreement numbers ARL W911NF-20-2-0132, ARL W911NF-19-2-0285, and ONR N00014-22-1-2254. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory, the Office of Naval Research, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
2309.12321
A Case for AI Safety via Law
How to make artificial intelligence (AI) systems safe and aligned with human values is an open research question. Proposed solutions tend toward relying on human intervention in uncertain situations, learning human values and intentions through training or observation, providing off-switches, implementing isolation or simulation environments, or extrapolating what people would want if they had more knowledge and more time to think. Law-based approaches--such as inspired by Isaac Asimov--have not been well regarded. This paper makes a case that effective legal systems are the best way to address AI safety. Law is defined as any rules that codify prohibitions and prescriptions applicable to particular agents in specified domains/contexts and includes processes for enacting, managing, enforcing, and litigating such rules.
Jeffrey W. Johnston
2023-07-31T19:55:27Z
http://arxiv.org/abs/2309.12321v2
# A Case for AI Safety via Law ###### Abstract How to make artificial intelligence (AI) systems safe and aligned with human values is an open research question. Proposed solutions tend toward relying on human intervention in uncertain situations, learning human values and intentions through training or observation, providing off-switches, implementing isolation or simulation environments, or extrapolating what people would want if they had more knowledge and more time to think. Law-based approaches--such as inspired by Isaac Asimov--have not been well regarded. This paper makes a case that _effective legal systems are the best way to address AI safety_. Law is defined as any rules that codify prohibitions and prescriptions applicable to particular agents in specified domains/contexts and includes processes for enacting, managing, enforcing, and litigating such rules. AI safety, value alignment, ethics, law, machine ethics, artificial general intelligence, computational contracts ## 1 Question Presented Whether laws and legal processes are an effective way to make AIs safe and aligned with human values. ## 2 Statement of the Case Providing mechanisms for making AI systems safe and aligned with human values is increasingly important as AI technology advances and its risks become more salient (Wikipedia, AI safety; Wikipedia, AI alignment; FLI, 2023; Yudkowsky, 2023). The need for safe and aligned AIs applies to systems that are narrowly focused (weak, narrow, GOFAI), general purpose (strong, AGI), and superintelligent (ASI) (Wikipedia, Weak artificial intelligence; Wikipedia, Artificial general intelligence; Wikipedia, Superintelligence; Carlsmith, 2023). AI safety and alignment risks include: * Concerns often directed at narrow AI such as automation-spurred job loss, privacy threats, deepfake proliferation, algorithmic bias, socioeconomic inequality, market volatility, and weapons automatization (Thomas, 2023), * Malicious use of AI by humans such as leveraging AI for mass destruction (e.g., bioweapons) or perpetrating other harms at scale (e.g., cyberattacks, financial fraud, propagandizing) (Brundage et al., 2018), * Humans becoming overly dependent on AI (Anderson et al., 2018), * _Outer misalignment / specification gaming1_ where AIs misinterpret human-specified goals (or exploit bugs or loopholes) to harmful effect. Canonical examples include King Midas turning everything he touches to gold and runaway paperclip optimizers, and Footnote 1: “Specification gaming is a behaviour that satisfies the literal specification of an objective without achieving the intended outcome” (Krakovna et al., 2020). See (Krakovna, 2023) for a table of such behaviors that have been observed in AI systems from 1983 to 2023. A classic example is when Lenat’s “Eurisko won the Trillion Credit Squadron (TCS) competition two years in a row creating fleets that exploited loopholes in the game’s rules, e.g. by spending the trillion credits on creating a very large number of stationary and defenseless ships.” Footnote 2: Bostrom (2014) articulated the _instrumental convergence thesis_ suggesting that certain values will arise “in sufficiently advanced AI systems” since they would be useful (instrumental) for achieving a wide variety of other goals. These values include _self-preservation, resisting changes to original goals, enhancing cognitive abilities, developing better technologies, and acquiring resources_. 
The thesis was based on Omohundro (2009) who postulated “a number of ‘drives’ that will appear in sufficiently advanced AI systems of any design.” Others subscribing to the thesis include Russell (2017) who focuses on AI _self-preservation_, Yudkowsky (2022, item -3) who endorses _orthogonality_ and _instrumental convergence_, and Carlsmith (2023) who characterizes the concern as _power-seeking behavior_. See also (Wikipedia, Instrumental convergence). Proposed solutions tend toward relying on human intervention in uncertain situations (Pynadath and Tambe, 2001); having AIs learn human values, intentions, or preferences via observation, training, feedback, or debate (Riedl and Harrison, 2016; Russell, 2017; Christiano et al., 2017; Noothigattu et al., 2017; Soares, 2018; Irving et al., 2018; Russell, 2019); providing effective off-switches (Orseau and Armstrong, 2016); utilizing verified, isolated, or simulated environments (Arnold and Scheutz, 2018); designing AIs to be communicative, corrigible, and transparent with human collaborators (Soares et al., 2015; Briggs and Scheutz, 2015; Christiano, 2017); avoiding specification and design errors (Amodei et al., 2016); transferring control to the most competent agent (Pynadath and Tambe, 2001; Scerri et al., 2002); or trying to have AIs extrapolate what people really want if they had more knowledge and time to think (Yudkowsky, 2004; Tarleton, 2010)3. Russell et al. (2016) provide an overview of short-term and long-term AI research priorities, and consider AI safety ("robustness") in terms of verification, validity, security, and control. Bostrom (2014, Table 10) characterizes approaches "for dealing with the agency control problem at the heart of AI safety" as: boxing methods, incentive methods, stunting, tripwires, direct specification, domesticity, indirect normativity, and augmentation. Krakovna (2022) provides references to recent AI alignment proposals. Footnote 3: Yudkowsky (2004) calls this Coherent Extrapolated Volition (CEV) and defines it poetically as “our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.” Law-based approaches, such as inspired by Isaac Asimov (1942)4, have not been well regarded. Yampolskiy (2013) writes: "The general consensus seems to be that no set of rules can ever capture every possible situation and that interaction of rules may lead to unforeseen circumstances and undetectable loopholes leading to devastating consequences for the humanity." Many cite Asimov's Laws and stories as demonstrations of how laws fail and cannot effectively direct AI behaviors (Dvorsky, 2014; Yudkowsky, 2004, p. 16; Yudkowsky, 2016; Bench-Capon and Modgil, 2016, p. 2; Awad et al., 2018, p. 59; Kuipers, 2018, p. 91). Legal systems are justifiably criticized for being flawed or dysfunctional--plagued by inconsistencies; imprecision; bias; jurisdictional conflicts; corruption; subjectivity; excessive complexity; high cost; susceptibility to gaming; tendency to fail in novel situations; slowness to enforce, adjudicate, and amend; and other issues. Footnote 4: See next section for text of Asimov’s Laws. However, precedents supporting law-oriented approaches for AI safety exist and include: 1. 
Asimov (1981) who argues three laws are fundamental ("obvious from the start") for assuring _every tool_ a human uses is safe, effective, and durable, whether it is a robot, a knife, or the US Constitution. 2. Weld and Etzioni (1994) propose rule primitives that might be used in AI planning frameworks for constraining robot behavior, i.e., _dont-disturb_ and _restore_. 3. Rissland et al. (2003) discuss the nature of law, _AI and Law_ history, and the field circa 2003--including work on automating legal reasoning. 4. Omohundro (2008), who convinced many in the AI field that dangerous instrumental goals ("drives") would spontaneously emerge in powerful AIs, recommended "we should begin by designing a 'universal constitution' that identifies the most essential rights we desire for individuals and creates social mechanisms for ensuring them in the presence of intelligent entities of widely varying structures." Omohundro (2013) concludes: "It appears that humanity's great challenge for this century is to extend cooperative human values and institutions to autonomous technology for the greater good." 5. Johnston (2009) proposes, "Why not require all AGIs be linked to a single large database of law--legislation, orders, case law, pending decisions--to account for the constant shifts [in legal definitions, interpretations, social context, and political acceptability]? Such a corpus would be ever changing and reflect up-to-the-minute legislation and decisions on all matters man and machine. Presumably there would be some high level guiding laws, like the US Constitution and Bill of Rights [... And, when necessary, an AGI would] inform its action using analysis of the deeper corpus. Surely a 200-volume set of international law would be a cakewalk for an AGI. The latest version of the corpus could be stored locally in most AGIs and just key parts local in low end models--with all being promptly and wirelessly updated as appropriate. This seems like a reasonable solution given the need to navigate in a complex, ever changing, context-dependent universe." 6. Hanson (2009) writes about law-abiding robots where, "In the long run, what matters most is that we all share a mutually acceptable law to keep the peace among us, and allow mutually advantageous relations, not that we agree on the 'right' values." 7. Genesereth (2015) discusses _Computational/Embedded Law_, noting how it can make humans (and presumably AIs) "aware of the legal status of our actions as we are performing them." He offers a metaphor of _The Cop in the Backseat_: "a friendly policeman in the backseat of our car [...], real or computerized, [that] could offer regulatory advice as we drive around--telling us speed limits, which roads are one-way, where U-turns are legal and illegal, where and when we can park, and so forth." 8. Genesereth (2016) introduces _Corpus Legis_--"a library of governmental regulations encoded in computable form." 9. Prakken (2016) suggests "the current fruits of AI & law research on supporting human legal decision making can be used for making autonomous artificial systems behave lawfully" (although current approaches are inadequate). 10. Wolfram (2016) notes that Gottfried Leibniz's dream of "turning human law into an exercise in computation [...] didn't succeed. But three centuries later, [...] we're finally ready to give it a serious try again and [...] it's likely to be critical to the future of our civilization and its interaction with artificial intelligence." 
He suggests his Wolfram Language (a _symbolic discourse language_) might eventually be usable for _computational contracts_ and for providing codes of conduct "that AIs can readily make use of." 11. Etzioni (2017) proposes three rules that may be particularly effective in steering AI. (See Argument for details.) 12. Kuipers (2018) focuses on "the key role of trust in human society" and how social norms are used to promote trust. He defines social norms to include "morality, ethics, and convention, _sometimes encoded and enforced as laws_, sometimes as expectations with less formal enforcement." He asserts, "intelligent robots [...] must be able to understand and follow social norms" and such an ability may be implemented using a hybrid ethics architecture (combining virtue ethics, deontology, and utilitarianism) that enables "fast but fallible pattern-directed responses; slower deliberative analysis of the results of previous decisions; and, yet slower individual and collective learning processes." 13. O'Keefe (2022) argues that "working to ensure that AI systems follow laws is a worthwhile way to improve the long-term future of AI." He proposes "what an ideal law-following AI (LFAI) system might look like." 14. Bai et al.'s (2022) _Constitutional AI_ approach demonstrates an ability to "train less harmful [Large Language Model] systems entirely through the specification of a short list of principles or instructions, i.e., a constitution." 15. Nay's (2023) _Law Informs Code_ proposal contends, "The target of AI alignment should be democratically endorsed law," "Data generated by legal processes and the tools of law (methods of law-making, statutory interpretation, contract drafting, applications of standards, and legal reasoning) can facilitate the robust specification of inherently vague human goals to increase human-AI alignment," "If properly parsed, [law] distillation offers the most legitimate computational comprehension of cosocietal values available," and "[Law is] the applied philosophy of multi-agent alignment." Nay also argues: (1) Legal Theory is Well-Developed and Applicable to Alignment, (2) Legal Informatics Can Scale with AI Capabilities, and (3) Legal Processes, Data & Experts Can Improve AI. Some empirical support for the effectiveness of non-law-based approaches can be found in the papers cited in the second paragraph of this section, but that evidence is generally preliminary and weak. Evidence for the effectiveness of law-based approaches is more compelling. Law "is closely connected to the development of civilizations" (Wikipedia, Legal history) and has served as a stabilizing force for societies of intelligent agents throughout history. Allot (1981, p. 229) argues that, "without a generally respected and effective legal system, a society will tend to its own disintegration." The World Justice Project claims, "[Law] is the foundation for communities of justice, opportunity, and peace--underpinning development, accountable government, and respect for fundamental rights. Research shows that rule of law correlates to higher economic growth, greater peace, less inequality, improved health outcomes, and more education" (WJP, 2019). ## 3 Argument _Effective legal systems are the best way to address AI safety._ We substantially agree with the above-quoted claims of Nay (2023). The approach argued herein is called AISVL (for AI Safety Via Law) to distinguish it from similar proposals. 
Many of the proposed _non_-law-based solutions may be worth pursuing to help assure AI systems are law-abiding. However, they are secondary to having a robust, well-managed, readily available corpus of codified law--and complementary legal systems--as the foundation and ultimate arbiter of acceptable behaviors for all intelligent systems, both biological and mechanical. _To have safe, aligned, viable societies, AIs and humans must know the law, strive to abide by it, and be subject to effective intervention when violated._ These three requirements apply to humans in most modern societies and are generally--if imperfectly--achieved through legal systems. They should apply to all intelligent agents and systems. AISVL recognizes that a small set of static rules (e.g., Asimov's Laws, The Golden Rule) or value-preserving utility functions are not feasible. Rather, intelligent systems must comply with a large and dynamic set of laws that are drafted, enacted, enforced, litigated, and maintained over time via full-featured jurisprudence systems. AISVL defines laws broadly as _any rules that codify prohibitions and prescriptions applicable to particular agents in particular domains/contexts and are sufficiently binding._ Codification requires that the rules be maintained in authoritative repositories that can be accessed and interpreted by everyone. To be sufficiently binding, effective motivations must exist for agents to comply--motivations such as those provided through education, social pressure, coercion, and/or other enforcement mechanisms. All intelligent systems should have ready access to the latest versions of laws relevant to their operational contexts. Whereas AIs and humans are generally black boxes, Law, critically, is a white box. Laws by the above definition include: constitutions, statutes (legislation), decrees, executive orders, regulations, court decisions (case law), treaties, contracts (e.g., sales agreements, service agreements, leases, EULAs, warranties, NDAs), rules (defined by governments, homeowner associations, households, classrooms, businesses, associations, and other organizations), best practices, policies, codes of conduct, standards, principles, and similar rules. Legal domains range from local, regional, national, and international governance (classic social contracts, a.k.a. public law) to private contracts, rules, laws, and norms applicable in all kinds of economic and social interactions between agents and institutions--from basic principles adopted by organizations to games and sports to product and service agreements and more. AISVL recognizes the _essential equivalence and intimate link between democratically developed law_ (Nay, 2023, p. 11) _and consensus ethics. Both are human inventions intended to facilitate the wellbeing of individuals and the collective._ They represent shared values culturally determined through rational consideration and negotiation. To be effective, democratic law and consensus ethics should reflect sufficient agreement of a significant majority of those affected. Democratic law and consensus ethics _are not_ inviolate physical laws, instinctive truths, or commandments from deities, kings, or autocrats. They _do not_ represent individual values, which vary from person to person and are often based on emotion, irrational ideologies, confusion, or psychopathy. Bodies of law have historically been based on ethical values, which are used to inform the wider body. These values are often stated in introductions of foundational documents. 
For example, key values called out in the preamble of the US Constitution (1787) are unity, justice, domestic tranquility, common defense, general welfare, and liberty ([http://constitutionus.com](http://constitutionus.com)). Key values expressed in the US Declaration of Independence (1776) are equality5 and rights to life, liberty, and the pursuit of happiness (Wikipedia, United States Declaration of Independence). In ancient Egyptian Law (3000 BCE), the values of tradition, rhetorical speech, equality, and impartiality were central (Wikipedia, Legal history). In the Code of Ur-Nammu (2100 BCE), truth and equity are prominent (Wikipedia, Code of Ur-Nammu). The Code of Hammurabi (1754 BCE) promoted values of justice, destruction of the wicked and evil, preventing the strong from harming the weak, subjugating the "Black Head Race," enlightening the land, and furthering the welfare of mankind (Wikipedia, Code of Hammurabi). The Universal Declaration of Human Rights (UN, 1948) focused on values of freedom (of movement, thought, conscience, speech, religion, peaceful assembly, marriage, community participation), equality, liberty, security, humane treatment, access to legal remedies (public hearings, presumed innocence), asylum from prosecution, right to own property, government by will of the people, safe and equitable employment, right to rest and leisure, social security (in health, well-being, education), intellectual property protection, and duties to the community for all people. Accordingly, legal systems consist of a core set of moral values (a virtue/deontological core) surrounded by a large corpus of legal refinements (a consequentialist/utilitarian shell),6 where multiple systems coexist and apply per different jurisdictions and subject areas, and change over time. Core values reflect the spirit of the law. Consequentialist shells specify its letter. This nexus of democratic law and consensus ethics provides a solid foundation for AI safety and value alignment. Footnote 6: We suggest this provides a useful synthesis of virtue ethics, deontology, and consequentialism. To operationalize AISVL, _societies would begin by adopting existing bodies of law_ with additions like Etzioni's (2017) rules: 1. An AI system must be subject to the full gamut of laws that apply to its human operator7, Footnote 7: AISVL would generalize this rule to read: “An AI system must be subject to the full gamut of laws that apply to humans.” 2. An AI system must clearly disclose that it is not human, and 3. An AI system cannot retain or disclose confidential information without explicit approval from the source of that information. Bodies of law corresponding to relevant jurisdictions, contexts, tasks, and contracts would apply to all intelligent agents. For humans, this information might be made more actionable by Personal Agents (Johnston, 2022, p. 11 item 3, p. 13, p. 16) or Genesereth's (2015) "cop in the back seat." For AI systems, relevant corpora may be identified, accessed, and used directly to effect AI actions--possibly through direct specification8(Bostrom, 2014, Table 10). Footnote 8: Implementation details for achieving law-abiding AIs are beyond the scope of this brief. One imagines AIs must be able to assess actions they are considering in a current context against all laws relevant to that context and adjust those actions accordingly. Modern implementations of rule-based and case-based reasoning methods could apply. 
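To make footnote 8's suggestion slightly more concrete, the sketch below shows one minimal way an agent could screen a proposed action against the laws applicable in its current operational context before acting. It is purely illustrative: the names (`Rule`, `LegalCorpus`, `screen_action`) and the rule-matching scheme are hypothetical, not part of any existing legal-informatics system, and a real implementation would need far richer representations of contexts, rules, and actions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    rule_id: str                        # e.g., a statute, regulation, or contract clause identifier
    contexts: List[str]                 # jurisdictions/domains in which the rule applies
    prohibits: Callable[[Dict], bool]   # True if the proposed action would violate the rule

@dataclass
class LegalCorpus:
    rules: List[Rule]

    def applicable(self, context: str) -> List[Rule]:
        # Select only the rules relevant to the agent's current operational context.
        return [r for r in self.rules if context in r.contexts]

def screen_action(action: Dict, context: str, corpus: LegalCorpus) -> List[str]:
    """Return identifiers of applicable rules that the proposed action would violate."""
    return [r.rule_id for r in corpus.applicable(context) if r.prohibits(action)]

# Usage: the agent proposes an action, screens it, and adjusts or abstains on violations.
corpus = LegalCorpus(rules=[
    Rule("disclose-ai-status", ["consumer-service"],
         prohibits=lambda a: a.get("impersonates_human", False)),
])
violations = screen_action({"impersonates_human": True}, "consumer-service", corpus)
if violations:
    # Adjust the proposed action, escalate for human/legal review, or abstain.
    print("blocked by:", violations)
```

The example rule echoes Etzioni's second rule (an AI must clearly disclose that it is not human); the point is only that the relevant corpus is consulted, and violations trigger adjustment or review, before the action is taken.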
The key imperative is to have legal systems that effectively specify acceptable behaviors and take effective enforcement actions when violations occur. Asimov's venerable laws, potentially applicable to robots and AIs, are mostly unnecessary or ill-advised if established law and extensions like Etzioni's apply. Asimov's Laws state: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm, 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law, and 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (Asimov, 1981) Aspects of these laws may be appropriate to include in End User License Agreements for some AI and robotic products. The first law could make sense for systems intended to actively protect humans--such as personal guardians or robotic police. Clarifications to this law might include: (1) Amending it to read "prevent _or minimize_ injury to humans" to account for cases where lesser harm is acceptable to avoid greater harm, and (2) Clarifying what kinds of harms the AI is expected to prevent, e.g., imminent physical injuries and/or potential long term harms. The second law might be useful if modified to read that the system will not obey orders that violate _any_ laws. The third law should be rejected because: (1) It is common sense that a robot (or other product) should not be designed to fail,9 and (2) An explicit rule for self-preservation might suggest robots be designed with the dangerous and widely-deprecated value of "survival at all costs."10 If aspects of these laws were included in a legal core, the consequentialist shell would provide details about how to deal with real world situations and edge cases (like those that arise in Asimov's stories). Footnote 9: In claiming that the three laws “are obvious from the start,” Asimov (1981) suggests the third law merely requires that a product (or robot) be durable. Footnote 10: See footnote 2. 
Also, where deep neural networks (DNNs) may be used for most concept and task learning and knowledge representation in AI systems, democratic legal processes that explicitly specify rules provide much better transparency to rule making and system alignment. (Reliably coercing and understanding rules encoded in DNNs seems untenable.) One key recommendation, however, is to put _greater focus on core principles and values in legal corpora--including norms being clearly delineated in legal cores._ Legal cores that clarify human values ("the spirit of the law") will enable artificial agents to make better decisions. Such codification also seems increasingly important as norm violations by human agents are becoming more frequent.11 Footnote 11: Of particular concern are people in positions of power who lie, harass, self-promote, and engage in divisive rhetoric or frequent displays of anger, greed, sloth, pride, lust, envy, and gluttony (the seven deadly sins). In addition to specifying consequences for such behaviors, virtue cores might promote virtues like the “heavenly” ones of temperance, charity, diligence, patience, kindness, and humility—or respect for life, freedom, truth, equality, civility, dignity, and justice (Johnston, 2022, p. 18). AISVL does not distinguish between virtue ethics and deontology in the legal core. All core values must be expressed as cogent (preferably simple) statements of values or actionable rules. Repercussions for violating laws at any level (including core values/norms) would be scaled for severity, intent, and other factors as typical in current legal systems. Although such core changes to public law are desirable, articulation of core values by organizations having more restricted scopes may be more pragmatic. Specification and enforcement of higher standards by such organizations may effectively bypass inadequate public laws while serving similar ends. 
Examples of such virtue cores and consequentialist shells that currently exist include: * The American Medical Association's Code of Medical Ethics' _Principles_ ([https://code-medical-ethics.ama-assn.org/principles](https://code-medical-ethics.ama-assn.org/principles)) and _Chapters_ ([https://code-medical-ethics.ama-assn.org/chapters](https://code-medical-ethics.ama-assn.org/chapters)), * Wikipedia's _Five Pillars_ ([https://en.wikipedia.org/wiki/Wikipedia:Five_pillars](https://en.wikipedia.org/wiki/Wikipedia:Five_pillars)) and _Policies and Guidelines_ ([https://en.wikipedia.org/wiki/Wikipedia:Policies_and_guidelines](https://en.wikipedia.org/wiki/Wikipedia:Policies_and_guidelines)), * The United Nations _Universal Declaration of Human Rights_ (UN, 1948), * The IEEE _Codes of Conduct and Ethics_ ([https://www.ieee.org/about/compliance.html](https://www.ieee.org/about/compliance.html)), * The Boy Scouts of America _Scout Law and Oath_ ([https://www.scouting.org](https://www.scouting.org)), * The American Bar Association _Model Rules of Professional Conduct_ ([https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/model_rules_of_professional_conduct_preamble_scope/](https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/model_rules_of_professional_conduct_preamble_scope/)), * The UFC _Unified Rules of Mixed Martial Arts_ ([https://www.ufc.com/unified-rules-mixed-martial-arts](https://www.ufc.com/unified-rules-mixed-martial-arts)), and * The International Association of Chiefs of Police _Law Enforcement Oath of Honor_ ([https://www.theiacp.org/sites/default/files/all/i-j/IACP_Oath_of_Honor_En_8.5x11_Web.pdf](https://www.theiacp.org/sites/default/files/all/i-j/IACP_Oath_of_Honor_En_8.5x11_Web.pdf)). Inspirations for core values may include: * _The Golden Rule_: Treat others how you want to be treated (Wikipedia, Golden Rule). * _Lex Talionis_: A person who injures another person should be penalized to a similar degree as the injured party (Wikipedia, Eye for an eye). * _The (Second) Greatest Command_: Love your neighbor as yourself (Wikipedia, Great Commandment). See also Matthew: 5-44, "Love your enemies." * _Rawl's Veil of Ignorance_: Adopt values that result in social structures and policies that are blind to the gender, race, abilities, tastes, wealth, or position in society of any citizen (Rawls, 2001; Wikipedia, Original position). * _Constitutional AI Principles from Anthropic_: Define rules to detect and amend chatbot responses that may be harmful, unethical, racist, sexist, toxic, dangerous, insensitive, socially inappropriate, illegal, exhibit other social biases, and mitigate other concerns (Bai et al., 2022, Appendix C). * _US Bill of Rights_: First ten amendments to the US Constitution addressing values concerning religion, bearing arms, quartering soldiers, search and seizure, due legal process, jury trials, reasonable punishment, individual rights, and state rights ([https://www.archives.gov/founding-docs/bill-of-rights-transcript](https://www.archives.gov/founding-docs/bill-of-rights-transcript)). Virtue cores may be tailored for different legal domains. 
For example, core laws should exist that capture the spirit of tax law (e.g., clarify the purpose of taxes and importance of paying fair shares), intellectual property law (e.g., what public goods such laws are intended to serve), traffic law (e.g., how safety and travel efficiency should be balanced), and other domains. When conflicts exist between laws in the core and the shell, interpretation should favor the core. This will help identify and avoid loopholes in legal shells and discourage biased legal interpretations.12 Footnote 12: Bench-Capon and Modgil’s (2016) insights on _value-based reasoning_ may be useful here, i.e., in ambiguous situations proposed actions can be scored based on how well the actions comply with core values. They recognize law as a valid source for value orderings (ibid, p. 3). For public law, consequentialist shells would be populated by the full gamut of statutes, rules, and case law. For other social contexts, e.g., organizations, homeowner associations, private contracts, and games, consequentialist shells will be much simpler and may (or may not) feature distinct virtue cores.13 Footnote 13: Virtue cores are expected to exist in most contexts. For example, Articles 11 and 12 in the FIDE Laws of Chess (FIDE, 2023) specify values regarding the conduct of players and role of arbiters in chess tournaments. New legislation for consequentialist shells might include laws limiting the amount of wealth and power agents can accrue (applicable to people, AIs, states, corporations, and others), cooling-off periods for certain transactions (to allow slower-clocked humans to keep up with faster-clocked AIs), laws like Etzioni's (2017) requiring AIs to identify as AI, and elements from initiatives such as the IEEE's _Ethically Aligned Design_ (IEEE, 2019), China's _Ethical Norms for New Generation Artificial Intelligence_ (PRC, 2021), the European Union _AI Act_ (EU, 2021), the US _Blueprint for an AI Bill of Rights_ (US Gov, 2022), China's _Measures for the Management of Generative Artificial Intelligence Services_ (PRC, 2023), and NIST's _Artificial Intelligence Risk Management Framework_ (NIST, 2023). Laws prohibiting intelligent agents from engaging in the following behaviors may be appropriate14: Footnote 14: Paraphrased from (Carlsmith, 2023, footnote 31). 1. Breaking out of a contained environment 2. Hacking 3. Accessing additional financial or computing resources 4. Self-replicating beyond narrow limits 5. Gaining unauthorized capabilities, sources of information, or channels of influence 6. Misleading or lying to humans 7. Resisting or manipulating attempts for humans to monitor or understand their behavior 8. Impersonating humans 9. Causing humans to do their bidding 10. Manipulating human discourse and politics 11. Weakening various human institutions and response capacities 12. Taking control of physical infrastructure like factories or scientific laboratories 13. Causing certain types of technology and infrastructure to be developed 14. 
Directly harming or overpowering humans. When laws conflict with public opinion or the wellbeing of intelligent agents, remedies include: (1) Interpreting or amending laws in the shell to better align with the core, (2) Amending core values if consensus values are shifting due to changing environmental conditions or changing stakeholder needs, desires, or opinions, and (3) Taking social actions to reaffirm beneficial values in the core if stakeholders are being unduly influenced by bad actors, negative incentives, false information, irrational thinking, or other confusions. Compliance with extant laws should prevent AI systems from causing most existential harms. For example, in the case of a paperclip maximizer run amok (Bostrom, 2014), such a system would likely break many existing laws before it can begin turning the Earth and its inhabitants into paperclips. These may include laws regarding financial transactions, anticompetitive business practices, environmental protection, and personal injury. Many red flags would be raised (and enforcement actions taken) before genocide (which is also illegal) can occur. Additional laws must be enacted and improvements made to legal systems to restrict other behaviors that are of concern. To conclude this Argument, it is instructive to contrast AISVL with other AI alignment proposals. First, we compare AISVL with Russell's (2017) proposal for AIs to _learn values by observing human behavior_ (via cooperative inverse reinforcement learning). This and similar proposals might be characterized as "_Do as we do alignment._" Such approaches are problematic given the inclination of biological agents to act in inconsistent, irrational, and dangerous ways15. Also, rules ("lessons") AIs learn from such training will be opaque to humans and to other agents. Instead, AISVL advocates alignment via "_Do as we say, not as we do_"--where "say" means "legislate" (or, more precisely, "enact through an effective democratic law-making process"). Intelligent agents regulated by corpora generated through rational, reflective, sanctioned, social processes will be safer than agents that rely on their own, incomplete, haphazardly acquired, emergent, potentially unreflective values. Footnote 15: Evolutionarily programmed drives for optimizing gene propagation (e.g., sex, fight, flight, allegiance to dubious authorities) are often at odds with behavior that is in the best interest of individuals and the collective. Next, we contend AISVL delivers what Yudkowsky's (2004) ambitious CEV proposal demands (see footnote 3): values that are wise, aspirational, convergent, coherent, suitably extrapolated, and properly interpreted. Such values result from flexible, rational, consensus-driven legal processes that track human wishes as they change and adapt to environmental conditions over time. In a (near) future with AI agents and humans that are assisted by personal agents or similar proxies (Johnston, 2022, p. 11 item 3, p. 13, p. 16), legal controls can appropriately and responsively protect the interests of each agent while complying with social contracts that are designed to protect the rights of all. Finally, although we significantly agree with Nay's (2023) _Law Informs Code_ position, we take minor exception to two points: (1) Nay distinguishes ethics from law. He writes, "The _Law Informs Code_ approach should be the core alignment framework, with attempts to embed (ever-contested) 'ethics' into AI as a complementary, secondary effort" (Nay, 2023, p. 55). 
AISVL posits democratic law and consensus ethics are inextricably linked. Codified ethics constitute the virtue core of law and the main body is codified in consequentialist shells, (2) Nay distinguishes Human-AI alignment from Society-AI alignment. Human-AI alignment, he suggests, is handled by contracts and standards. Society-AI alignment is the subject of public law. We do not distinguish between public and other law. In AISVL, all forms of codified rule-based relationships fall under a single legal (legal-ethical) umbrella regardless whether relationships are one-to-one, one-to-many, or many-to-many. It's Law all the way down. ## 4 Summary of Argument Law is the standard, time-tested, best practice for maintaining order in societies of intelligent agents. Law has been the primary way of maintaining functional, cohesive societies for thousands of years. It is how humans establish, communicate, and understand what actions are required, permissible, and prohibited in social spheres. Substantial experience exists in drafting, enacting, enforcing, litigating, and maintaining rules in contexts that include public law, private contracts, and the many others noted in this brief. Law will naturally apply to new species of intelligent systems and facilitate safety and value alignment for all. ### Law is scrutable to humans and other intelligent agents. Unlike AI safety proposals where rules are learned via examples and encoded in artificial (or biological) neural networks, laws are intended to be understood by humans and machines. Although laws can be quite complex, such codified rules are significantly more scrutable than rules learned through induction. The transparent (white box) nature of law provides a critical advantage over opaque (black box) neural network alternatives. ### Law reflects consensus values. Democratically developed law is intimately linked and essentially equivalent to consensus ethics. Both are human inventions intended to facilitate the wellbeing of individuals and the collective. They represent shared values culturally determined through rational consideration and negotiation. They reflect the wisdom of crowds accumulated over time--not preferences that vary from person to person and are often based on emotion, irrational ideologies, confusion, or psychopathy. Ethical values provide the virtue core of legal systems and reflect the "spirit of the law." Consequentialist shells surround such cores and specify the "letter of the law." This relationship between law and ethics makes law a natural solution for human-AI value alignment. A minority of AIs and people, however powerful, cannot game laws to achieve selfish ends. ### Legal systems are responsive to changes in the environment and changes in moral values. By utilizing legal mechanisms to consolidate values and update them over time, human and AI values can remain aligned indefinitely as values, technologies, and environmental conditions change. Thus law provides a practical implementation of Yudkowsky's (2004) Coherent Extrapolated Volition by allowing values to evolve that are wise, aspirational, convergent, coherent, suitably extrapolated, and properly interpreted. ### Legal systems restrict overly rapid change. Legal processes provide checks and balances against overly rapid change to values and laws. Such checks are particularly important when legal change can occur at AI speeds. 
Legal systems and laws must adapt quickly enough to address the urgency of issues that arise but not so quickly as to risk dire consequences. Laws should be based on careful analysis and effective simulation, and the system should be able to quickly detect and correct problems found after implementation. New technologies and methods should be introduced to make legal processing as efficient as possible without removing critical checks and balances. ### Laws are context sensitive, hierarchical, and scalable. Laws apply to contexts ranging from international, national, state, and local governance to all manner of other social contracts. Contexts can overlap, be hierarchical, or have other relationships. Humans have lived under this regime for millennia and are able to understand which laws apply and take precedence over others based on contexts (e.g., jurisdictions, organization affiliations, contracts in force).16 Artificially intelligent systems will be able to manage the multitude of contexts and applicable laws by identifying, loading, and applying appropriate legal corpora for applicable contexts. For example, AIs (like humans) will understand that crosschecking is permitted in hockey games but not outside the arena. They will know when to apply rules of the road versus rules of the sea. They will know when the laws of chess apply versus rules of Go. They will know their rights relative to every software agent, tool, and service they interface with. Footnote 16: See page 6 for a more expansive list of the kinds of laws that apply to humans. AI Safety via Law can address the full range of AI safety risks, from systems that are narrowly focused to those having general intelligence or even superintelligence. Enacting and enforcing appropriate laws, and instilling law-abiding values in AIs and humans, can mitigate risks spanning all levels of AI capability--from narrow AI to AGI and ASI. If intelligent agents stray from the law, effective detection and enforcement must occur. Even the catastrophic vision of smarter-than-human intelligence articulated by Yudkowsky (2022, 2023) and others (Bostrom, 2014; Russell, 2019) can be avoided by effective implementation of AISVL. It may require that the strongest version of the instrumental convergence thesis (which they rely on) is not correct. Appendix A suggests some reasons why AI convergence to dangerous values is not inevitable. AISVL applies to all intelligent systems regardless of their underlying design, cognitive architecture, and technology. It is immaterial whether an AI is implemented using biology, deep learning, constructivist AI (Johnston, 2023), semantic networks, quantum computers, positronics, or other methods. All intelligent systems must comply with applicable laws regardless of their particular values, preferences, beliefs, and how they are wired. ## 5 Conclusion Although its practice has often been flawed, law is a natural solution for maintaining social safety and value alignment. All intelligent agents--biological and mechanical--must know the law, strive to abide by it, and be subject to effective intervention when violated. The essential equivalence and intimate link between consensus ethics and democratic law provide a philosophical and practical basis for legal systems that marry values and norms ("virtue cores") with rules that address real world situations ("consequentialist shells"). In contrast to other AI safety proposals, AISVL requires AIs "do as we legislate, not as we do." 
Advantages of AISVL include its leveraging of time-tested standard practice; scrutability to all intelligent agents; reflection of consensus values; responsiveness to changes in the environment and in moral values; restrictiveness of overly rapid change; context sensitivity, hierarchical structure, and scalability; and applicability to safety risks posed by narrow, general, and even superintelligent AIs. For the future safety and wellbeing of all sentient systems, work should occur in earnest to improve legal processes and laws so they are more robust, fair, nimble, efficient, consistent, understandable, accepted, and complied with. (Legal frameworks outside of public law may be effective to this end.) Humans are in dire need of such improvements to counter the dangers that we pose to the biosphere and to each other. It is not clear if advanced AI will be more or less dangerous than humans. Law is critical for both.
2309.05885
Modeling Reachability Types with Logical Relations
Reachability types are a recent proposal to bring Rust-style reasoning about memory properties to higher-level languages. While key type soundness results for reachability types have been established using syntactic techniques in prior work, stronger metatheoretic properties have so far been unexplored. This paper presents an alternative semantic model of reachability types using logical relations, providing a framework in which to study key properties of interest such as (1) semantic type soundness, including of not syntactically well-typed code fragments, (2) termination, especially in the presence of higher-order state, and (3) program equivalence, especially reordering of non-interfering expressions for parallelization or compiler optimization.
Yuyan Bao, Guannan Wei, Oliver Bračevac, Tiark Rompf
2023-09-12T00:13:53Z
http://arxiv.org/abs/2309.05885v1
# Modeling Reachability Types with Logical Relations ###### Abstract. Reachability types are a recent proposal to bring Rust-style reasoning about memory properties to higher-level languages. While key type soundness results for reachability types have been established using syntactic techniques in prior work, stronger metatheoretic properties have so far been unexplored. This paper presents an alternative semantic model of reachability types using logical relations, providing a framework in which to study key properties of interest such as (1) semantic type soundness, including of not syntactically well-typed code fragments, (2) termination, especially in the presence of higher-order state, and (3) program equivalence, especially reordering of non-interfering expressions for parallelization or compiler optimization.
termination for a feature-rich language with higher-order state (Spies et al., 2021). In comparison, our model is entirely elementary and does not rely on advanced set-theoretic concepts or classical reasoning. The program equivalence result is significant as it provides a foundation for parallelization and for a variety of effect-based compiler optimizations in the style of Benton et al. (2007) or Birkedal et al. (2016). Specifically, Bracevac et al. (2023a) have proposed a novel Graph IR for impure higher-order languages with a dependency analysis based on reachability types, and use the logical relations model presented here to justify the correctness of their optimizations rules. The results in this paper have been mechanized in Coq, and are available online.1 Footnote 1: [https://github.com/tiarkrompf/reachability](https://github.com/tiarkrompf/reachability) ## 2. The \(\lambda_{\varepsilon}^{*}\)-calculus The base language in this paper is the \(\lambda_{\varepsilon}^{*}\)-calculus, a variant of Bao et al.'s \(\lambda^{*}\)-calculus. Part of the description here is reproduced from Bracevac et al. (2023b), the supplemental technical report accompanying Bracevac et al. (2023a). The original system features an effect system based on Gordon (2021)'s effect quantale framework. For simplicity, we only consider a stripped-down effect system corresponding to a trivial effect quantale just tracking whether an effect is induced on reachable variables, effectively making effects just another qualifier (i.e., a set of variables) in the typing judgment. 
This version also lacks a \(\bot\) qualifier for untracked values, and recursive \(\lambda\)-abstractions. To keep the discussion focused, we omit those features, which do not add much to the discussion of the core ideas apart from additional proof cases.

### Syntax

Figure 1 shows the syntax of \(\lambda_{\varepsilon}^{*}\), which is based on the simply-typed \(\lambda\)-calculus with mutable references and subtyping. We denote general term variables by the meta variables \(x,y,z\), and reserve \(\ell,w\) for store locations. Terms consist of constants of base types, variables, functions \(\lambda x.t\), function applications, reference allocations, dereferences, assignments, and sequencing. Reachability qualifiers \(p,q,r\) are finite sets of variables. For readability, we often drop the set notation for qualifiers and write them as comma-separated lists of atoms. We distinguish ordinary types \(T\) from qualified types \(T^{\,q}\), where the latter annotates an ordinary type \(T\) with a qualifier \(q\). The types consist of the Boolean type \(B\) (to streamline the presentation, we omit other base types) and dependent function types \((x:T^{\,q})\to^{\varepsilon}S^{\,p}\), where both argument and return type are qualified. The codomain \(S^{\,p}\) may depend on the argument \(x\) in its qualifier and type. Function types carry an annotation \(\varepsilon\) for their latent effect, which is a set of variables and locations, akin to qualifiers. An _observation_ \(\varphi\) is a finite set of variables which is part of the term typing judgment (Section 2.2). It specifies which variables and locations in the typing context \(\Gamma\) are observable, where the typing context assigns qualified typing assumptions to variables.

### Type Rules

The term typing judgment \(\Gamma^{\,\varphi}\vdash t:T^{\,q}\ \varepsilon\) in Figure 1 states that term \(t\) has qualified type \(T^{\,q}\), may induce effect \(\varepsilon\), and may only access the typing assumptions of \(\Gamma\) observable by \(\varphi\). One may think of \(t\) as a computation that incurs effect \(\varepsilon\) and yields a result value of type \(T\) aliasing no more than \(q\), if it terminates. Different from Bao et al. (2021), we internalize the filter \(\varphi\) as part of the typing relation. Alternatively, we could formulate the typing judgment without internalizing \(\varphi\), and instead have an explicit context filter operation \(\Gamma^{\varphi}\coloneqq\{x:T^{q}\in\Gamma\mid q,x\subseteq\varphi\}\) for restricting the context in subterms, just like Bao et al. (2021), which loosely takes inspiration from substructural type systems. Internalizing \(\varphi\) (1) makes observability an explicit notion, which facilitates reasoning about separation and overlap, and (2) greatly simplifies the Coq mechanization. Context filtering is only needed for term typing, but not for subtyping, so as to keep the formalization simple.

Figure 1. The \(\lambda_{\varepsilon}^{*}\)-calculus.

#### 2.2.1. Functions and Lightweight Polymorphism

Function typing (t-abs) implements the observable separation guarantee, _i.e._, the body \(t\) can only observe what the function type's qualifier \(q\) specifies, plus the argument \(x\), and is otherwise oblivious to anything else in the environment. We model this by setting the observation to \(q,x,f\) when typing the body. Thus, its observation \(q\) at least includes the free variables of \(t\). To ensure well-scopedness, \(q\) must be a subset of the observation \(\varphi\) on the outside.
In essence, a function type _implicitly_ quantifies over anything that is not observed by \(q\), achieving a lightweight form of qualifier polymorphism, following Wei et al. (2023).

#### 2.2.2. Dependent Application, Separation and Overlap

Function applications (t-app) are qualifier-dependent in that the result qualifier can depend on the argument. Function applications also establish an _observable separation_ between the argument reachable set \(p\) and the function reachable set \(q\), denoted by \(p*\cap q*\). The intersection between \(p*\) and \(q*\) specifies the permitted overlap. We are careful to intersect the transitive reachability closures (a.k.a. saturated versions, Figure 2) of the two qualifiers. This is necessary with the lazy reachability assignment, because we might otherwise miss common, indirect overlap between the sets. If the intersection declared in the function type is empty, then it means complete separation between the argument and the entities observed by the function from the environment.

#### 2.2.3. Effects

Our effect system is a simple flow-insensitive instantiation of Gordon (2021)'s effect quantale system. An effect \(\varepsilon\) denotes the set of variables that might be used during the computation. For a compound term, the final effect is computed by composing the effects of its sub-terms with the intrinsic effect of the term itself. For example, the effect of an assignment has two parts: (1) \(\varepsilon_{1},\varepsilon_{2}\), the effects of the sub-terms, and (2) \(q\), the variables being modified. The final effect is obtained by composing these effects. Although the typing rules presented in Figure 1 pretend to use the sequential effect composition operator \(\blacktriangleright\), its definition as \(\cup\) computes an upper bound of two effects and is _not_ flow-sensitive (Figure 2), _i.e._, the composed effect is not sensitive to the order of composition.

### Semantics

Fig. 3 defines the big-step semantics with a value environment \(H\) and a store \(M\). The definitions are mostly standard. A value environment, \(H\), is a partial function that maps variables to values. A store, \(M\), is a partial function that maps locations to values. A program state is a pair of a value environment and a store. We write \(t,\ H,\ M\ \Downarrow\ v,\ M^{\prime}\) to mean that term \(t\), evaluated under value environment \(H\) and initial store \(M\), yields value \(v\) and final store \(M^{\prime}\).

## 3. Semantic Type Soundness

In this section, we define a unary logical relation that establishes semantic type soundness as well as termination.

### High-Level Overview of the Proofs

The semantic typing judgment is written as \(\Gamma^{\,\varphi}\models\,t\,:\,T^{\,q}\,\varepsilon\). The high-level structure of the proof is the following:

* Semantic soundness (Theorem 3.1). We show every syntactically well-typed term is semantically well-typed: \[\Gamma^{\,\varphi}\,\vdash\,t\,:\,T^{\,q}\,\varepsilon\text{ implies }\Gamma^{\,\varphi}\models\,t\,:\,T^{\,q}\,\varepsilon\]
* Adequacy of the unary logical relation (Theorem 3.2). Every closed semantically well-typed term \(t\) is safe: \[\varnothing\models\,t\,:\,T^{\,q}\,\varepsilon\text{ implies }\exists\,v,M^{\prime}.\,t,\,\varnothing,\,\varnothing\,\Downarrow\,v,\,M^{\prime}.\]

Figure 3. The big-step semantics of the \(\lambda_{\varepsilon}^{*}\)-calculus.
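To make the semantics of Fig. 3 concrete, the following is a minimal executable sketch of the term syntax and big-step evaluator in OCaml. It is our own illustrative rendering, not the paper's Coq mechanization: qualifier and effect annotations are omitted, closures simply capture the whole environment, and the result value of an assignment is assumed to be the assigned Boolean.

```ocaml
(* Illustrative OCaml sketch of the lambda_eps^* term syntax (Fig. 1) and
   big-step semantics (Fig. 3). Qualifier/effect annotations are dropped. *)
type tm =
  | Bool   of bool              (* constants true / false *)
  | Var    of string
  | Lam    of string * tm       (* lambda x. t *)
  | App    of tm * tm
  | Ref    of tm                (* reference allocation *)
  | Deref  of tm                (* ! t *)
  | Assign of tm * tm           (* t := t *)
  | Seq    of tm * tm           (* t ; t *)

type value =
  | VBool of bool
  | VLoc  of int                                   (* store location *)
  | VClos of (string * value) list * string * tm   (* closure record <H, lambda x. t> *)

(* H: value environment (variables to values); M: store (locations to values). *)
let fresh = let c = ref 0 in fun () -> incr c; !c

let rec eval (h : (string * value) list) (m : (int * value) list) (t : tm)
  : value * (int * value) list =
  match t with
  | Bool b        -> (VBool b, m)
  | Var x         -> (List.assoc x h, m)
  | Lam (x, body) -> (VClos (h, x, body), m)
  | App (t1, t2)  ->
      let f, m1 = eval h m t1 in
      let a, m2 = eval h m1 t2 in
      (match f with
       | VClos (h', x, body) -> eval ((x, a) :: h') m2 body
       | _ -> failwith "application of a non-function")
  | Ref t1 ->
      let v, m1 = eval h m t1 in
      let l = fresh () in
      (VLoc l, (l, v) :: m1)
  | Deref t1 ->
      let v, m1 = eval h m t1 in
      (match v with
       | VLoc l -> (List.assoc l m1, m1)
       | _ -> failwith "dereference of a non-location")
  | Assign (t1, t2) ->
      let r, m1 = eval h m t1 in
      let v, m2 = eval h m1 t2 in
      (match r with
       | VLoc l -> (v, (l, v) :: List.remove_assoc l m2)  (* assumed result value *)
       | _ -> failwith "assignment to a non-location")
  | Seq (t1, t2) ->
      let _, m1 = eval h m t1 in
      eval h m1 t2
```

For instance, `eval [] [] (Deref (Ref (Bool true)))` evaluates to `VBool true` together with a one-cell store, matching the evaluation judgment \(t,\ H,\ M\ \Downarrow\ v,\ M^{\prime}\) described above.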
### Interpretation of Reachability

In the \(\lambda_{\varepsilon}^{*}\)-calculus, reachability qualifiers are used to specify desired separation or permissible overlapping of the locations reachable from a function's argument and its body. Fig. 4 shows the interpretation of reachability qualifiers. Since values in the \(\lambda_{\varepsilon}^{*}\)-calculus cannot be cyclic, we axiomatize the definition of reachability without proving termination. We use \(\operatorname{locs}(v)\) to denote the set of locations that are reachable from a given value \(v\). Boolean values, _i.e._, true and false, do not reach any store locations, so they reach the empty set of locations. A location \(\ell\) can only reach itself, so its reachable set is the singleton set \(\{\ell\}\). The set of locations reachable from a closure record \(\langle H,(\lambda x.t)^{q}\rangle\) is the set of locations reachable from the qualifier appearing on the function, which is computed by \(\operatorname{locs}_{H}(q)\). The notation \(\operatorname{locs}_{H}(q)\) means the set of locations reachable from qualifier \(q\), _i.e._, from the variables appearing in \(q\). The notation \(H(q)\) means retrieving the location for each free variable in \(q\) from \(H\). A bound variable may appear in \(q\), and serves as a placeholder to specify the set of locations that a function's return value may reach. See Section 4.4 for details. The notation \(v\leadsto^{M}L\) is a predicate asserting that the set of locations reachable from \(v\) in store \(M\) is a subset of \(L\), where \(L\) is a set of locations.

Figure 4. Interpretation of reachability qualifiers.

Figure 5. Unary logical relations for the \(\lambda_{\varepsilon}^{*}\)-calculus.

### Unary Logical Relation

This section presents the definition of the unary logical relation for \(\lambda_{\varepsilon}^{*}\). We first define a relation on store typings. Given two store typings \(\Sigma\) and \(\Sigma^{\prime}\), and a set of locations \(L\), we define the relation of \(\Sigma\) and \(\Sigma^{\prime}\) (written \(\Sigma\sqsubseteq_{L}\Sigma^{\prime}\)) as follows: \[\Sigma\sqsubseteq_{L}\Sigma^{\prime}\quad\stackrel{{\mathsf{def}}}{{=}}\quad L\subseteq\operatorname{dom}(\Sigma)\wedge L\subseteq\operatorname{dom}(\Sigma^{\prime})\wedge(\forall \ell\in L.\,\ell\in\Sigma\Rightarrow\ell\in\Sigma^{\prime})\] We write \(\Sigma\sqsubseteq\Sigma^{\prime}\) to mean \(\Sigma\sqsubseteq_{\operatorname{dom}(\Sigma)}\Sigma^{\prime}\). Now we define the interpretation of typing contexts: \[\begin{array}{lcl}G[[\varnothing^{\varphi}]]&=&\varnothing\\ G[[(\Gamma,x:T^{q})^{\varphi}]]&=&\{(\Sigma,H;(x\mapsto\mathfrak{e}))\mid(\Sigma,H)\in G[[\Gamma^{\varphi}]]\wedge\varphi\subseteq\operatorname{dom}(\Gamma)\wedge q\subseteq\operatorname{dom}(\Gamma)\wedge(H,\,\Sigma,\mathfrak{e})\in V[[T]]\\ &&\wedge\,(\forall\,q,q^{\prime}.\,q\subseteq\varphi\wedge q^{\prime}\subseteq\varphi\Rightarrow\operatorname{locs}_{H}(q*)\cap\operatorname{locs}_{H}(q^{\prime}*)\subseteq\operatorname{locs}_{H}(q*\cap q^{\prime}*))\}\end{array}\]

_The Value Interpretation._ The definition of the value interpretation of types is shown in Fig. 5. The interpretation of type \(T\), written \(V[\![T]\!]\), is a set of triples of the form \((H,\,\Sigma,\mathfrak{e})\), where \(H\) is a value environment, \(\mathfrak{e}\) is a value, and \(\Sigma\) is a store typing.

_Ground Types_. The value interpretation of ground types is straightforward.
The value of the Boolean type are true and false; the value of the reference type \(\mathsf{Ref}\)\(\mathsf{B}\) is store locations \(\mathfrak{e}\), which store a value, whose type is always \(\mathsf{B}\). _Function Types_. The value interpretation of the function types \(T^{p}\to^{\varepsilon}U^{r}\) with respect to a store typing \(\Sigma\) are closure records in a form \(\langle H,(\lambda x.t)^{q}\rangle\), meaning that it satisfies the followings: * The set of locations reachable from a closure record are well-formed with respect to the store typing, _i.e._, \(\operatorname{locs}(\langle H,(\lambda x.t)^{q}\rangle)\subseteq \operatorname{dom}(\Sigma)\). * The argument is allowed if * the argument \(\mathfrak{e}\) has type \(T\) with respect \(\Sigma^{\prime}\), for all \(\Sigma^{\prime}\), such that \(\Sigma\sqsubseteq_{\operatorname{locs}(\langle H,(\lambda x.t)^{q}\rangle)}\Sigma ^{\prime}\); and * the overlapping locations reachable from the function and its argument are permissible by the argument's qualifier \(\mathfrak{p}\), _i.e._, \(\operatorname{locs}(\langle H,(\lambda x.t)^{q}\rangle)\cap\operatorname{locs} (\mathfrak{e})\subseteq\operatorname{locs}_{H}(\mathfrak{p})\). * Under the extended value environment, the term \(\mathfrak{e}\) is reduced to some value \(\mathfrak{e}^{\prime}\) with some final stores \(M^{\prime}\). * \(M^{\prime}\) respects the store typing \(\Sigma^{\prime\prime}\), where \(\Sigma^{\prime}\sqsubseteq\Sigma^{\prime\prime}\), for some \(\Sigma^{\prime\prime}\). * \(\mathfrak{e}^{\prime}\) has type \(U\) with respect to store typing \(\Sigma^{\prime\prime}\). * If the return value's qualifier \(\mathfrak{r}\) depends on the argument (_i.e._, \(x\in r\)), then the locations reachable from \(\mathfrak{e}^{\prime}\) is subsets of those reachable both from the function and \(\mathfrak{r}\), plus those reachable from the arguments; otherwise (_i.e._, \(x\not\in r\)), they are just subset of those reachable both from the function and \(\mathfrak{r}\). * If a bound variable \(x\) appears in the effect \(\varepsilon\), meaning the function body may modify the argument, then the effect will include the qualifier that may reach the value of function argument \(\mathfrak{p}\); otherwise it is just \(\varepsilon\). _The Term Interpretation._ A term, \(\mathfrak{r}\) is defined based on their computational behaviors, _i.e._, returned values, reachability qualifiers and effects, which is defined by \(E[\![T^{q}\;\varepsilon]\!]_{\varphi}\). It means given a store with respect to store typing, \(M:\Sigma\), if * \(\mathfrak{r}\) is evaluated to some value \(\mathfrak{e}\) with some final store \(M^{\prime}\); * \(M^{\prime}\) respects the store typing \(\Sigma^{\prime}\), where \(\Sigma\sqsubseteq\Sigma^{\prime}\); * \(\mathfrak{e}\) has type \(T\) with respect to store typing \(\Sigma^{\prime}\); * \(M^{\prime}\) is the store with respect to the store typing \(\Sigma^{\prime}\). * The locations reachable from the values in the domain of pre-stores are subset of those reachable from \(\operatorname{loc}_{H}(\varphi*\ \cap\ q*)\) for the term. * The effect captures what may be read/modified in the pre-state store. Semantic Typing.Fig. 6 shows the semantic typing rules. The proofs are quite similar to those of compatibility lemmas in Section 4.6, thus are omitted. The Fundamental Theorem and Adequacy. 
Theorem 3.1 (Fundamental Theorem of Unary Logical Relations).: _Every syntactically well-typed term is semantically well-typed, i.e., if \(\Gamma^{\varphi}\ \vdash\ t:T^{q}\ \varepsilon\), then \(\Gamma^{\varphi}\models t:T^{q}\ \varepsilon\)._

Theorem 3.2 (Adequacy of Unary Logical Relations).: _Every closed semantically well-typed term \(t\) is safe: if \(\varnothing\models\ t:T^{q}\ \varepsilon\), then \(\exists\ v,M^{\prime}.\ t,\varnothing,\ \varnothing\ \Downarrow\ v,\ M^{\prime}\)._

From Theorem 3.2 (Adequacy), termination of all semantically well-typed terms is immediate.

## 4. Contextual Equivalence - the Direct-Style \(\lambda_{\varepsilon}^{*}\)-Calculus

We apply a logical relations approach following (Ahmed et al., 2009; Benton et al., 2007; Timany et al., 2022) to support relational reasoning with respect to the _observational equivalence_ of two programs. We define binary logical relations over reachability types (the \(\lambda_{\varepsilon}^{*}\)-calculus in Section 2), and prove the soundness of the equational rules. To avoid technical complications, we choose a model that allows mutable references to contain only first-order values, consistent with the previous section.

### High-level Overview of the Proofs

A program \(t_{1}\) is said to be _contextually equivalent_ to another program \(t_{2}\), written \(\Gamma^{\varphi}\models t_{1}\approx_{\operatorname{ctx}}t_{2}:T^{p}\ \varepsilon\), if, for any program context \(C\) with a hole of type \(T^{p}\ \varepsilon\), whenever \(C[t_{1}]\) has some (observable) behavior, so does \(C[t_{2}]\). The definition of contexts \(C\) can be found in Section 4.2.

Figure 6. Semantic typing rules of the \(\lambda_{\varepsilon}^{*}\)-calculus.

Following the approach of Timany et al. (2022) and related prior works (Ahmed et al., 2009), we define a judgment for logical equivalence using binary logical relations, written \(\Gamma^{\varphi}\models t_{1}\approx_{\log}t_{2}:T^{q}\ \varepsilon\). The high-level structure of the proof is the following:

* Soundness (Theorem 4.39, Section 4.7). We show that the logical relation is sound with respect to contextual equivalence: \[\Gamma^{\varphi}\models t_{1}\approx_{\log}t_{2}:T^{q}\ \varepsilon\text{ implies }\Gamma^{\varphi}\models t_{1}\approx_{\text{ctx}}t_{2}:T^{q}\ \varepsilon.\]
* Compatibility lemmas (Section 4.6). We show that the logical relation is compatible with syntactic typing. These results can be used to prove the soundness of the re-ordering rule (Section 4.8).

### Contextual Equivalence

Unlike reduction contexts, contexts \(C\) for reasoning about equivalence allow a "hole" to appear in any place. We write \(C:(\Gamma^{\varphi};T^{q}\ \varepsilon)\ \mathbf{\Rightarrow}\ (\Gamma^{\prime\varphi^{\prime}};T^{\prime q^{\prime}}\ \varepsilon^{\prime})\) to mean that the context \(C\) is a program of type \(T^{\prime q^{\prime}}\ \varepsilon^{\prime}\) (closed under \(\Gamma^{\prime\varphi^{\prime}}\)) with a hole that can be filled with any program of type \(T^{q}\ \varepsilon\) (closed under \(\Gamma^{\varphi}\)). The typing rules for well-typed contexts imply that if \(\Gamma^{\varphi}\ \vdash\ t:T^{q}\ \varepsilon\) and \(C:(\Gamma^{\varphi};T^{q}\ \varepsilon)\ \mathbf{\Rightarrow}\ (\Gamma^{\prime\varphi^{\prime}};T^{\prime q^{\prime}}\ \varepsilon^{\prime})\) hold, then \(\Gamma^{\prime\varphi^{\prime}}\ \vdash\ C[t]:T^{\prime q^{\prime}}\ \varepsilon^{\prime}\). Fig. 7 shows the typing rules for well-typed contexts.
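For a simple illustration of such contexts (our own example, not one from Fig. 7): the one-hole context \(C=([\cdot]\,;t^{\prime})\) places its hole to the left of a sequence, and filling it with a term \(t\) yields the complete program \(C[t]=t\,;t^{\prime}\). Unlike a reduction context, the hole may also occur under a binder, e.g., inside a function body as in \(C=(\lambda x.[\cdot])^{q}\).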
Two well-typed terms, \(t_{1}\) and \(t_{2}\), under type context \(\Gamma^{\varphi}\), are _contextually equivalent_ if any occurrence of the first term in a closed term can be replaced by the second term without affecting the _observable results_ of reducing the program, which is formally defined as follows:

Definition 4.1 (Contextual Equivalence).: We say \(t_{1}\) is _contextually equivalent_ to \(t_{2}\), written \(\Gamma^{\varphi}\models t_{1}\approx_{\text{ctx}}t_{2}:T^{q}\ \varepsilon\), if \(\Gamma^{\varphi}\ \vdash\ t_{1}:T^{q}\ \varepsilon\), and \(\Gamma^{\varphi}\ \vdash\ t_{2}:T^{q}\ \varepsilon\), and: \[\forall\,C:(\Gamma^{\varphi};T^{q}\ \varepsilon)\ \mathbf{\Rightarrow}\ (\varnothing;\mathsf{Unit}^{\infty}\ \varnothing).\ C[t_{1}]\ \mathbf{\downarrow}\ \Longleftrightarrow\ C[t_{2}]\ \mathbf{\downarrow}.\] We write \(t\ \mathbf{\downarrow}\) to mean that term \(t\) terminates, i.e., \(t,\ \varnothing,\ \varnothing\ \Downarrow\ v,\ \sigma\) for some value \(v\) and final store \(\sigma\).

The above definition is standard (Ahmed et al., 2009) and defines a partial program equivalence. However, since we focus on a total fragment of the \(\lambda^{*}_{\varepsilon}\)-calculus here, program termination cannot be used as an observer for program equivalence. We will thus rely on the following refined version of contextual equivalence using Boolean contexts: \[\forall\,C:(\Gamma^{\varphi};T^{q}\ \varepsilon)\ \mathbf{\Rightarrow}\ (\varnothing;B^{\varnothing}\ \varnothing).\ \exists\ \sigma,\sigma^{\prime},v.\ C[t_{1}],\ \varnothing,\ \varnothing\ \Downarrow\ v,\ \sigma\ \wedge\ C[t_{2}],\ \varnothing,\ \varnothing\ \Downarrow\ v,\ \sigma^{\prime}.\]
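As a sanity check on this refinement (our own observation): in a total fragment every well-typed \(C[t]\) terminates, so the termination-based Definition 4.1 would relate _any_ two closed well-typed terms of the same type, including \(\mathsf{true}\) and \(\mathsf{false}\). The Boolean-context version distinguishes them, since for closed Boolean terms the empty context \(C=[\cdot]\) already yields different result values for the two programs.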
This is a common technique used in reasoning about frames in Hoare-style logics, _e.g._, separation logic (Reynolds, 2002). This treatment is also applicable to our refined effect system (_i.e._, the \(\lambda^{*}\)'s effect system in (Bao et al., 2021)), where framing is achieved through write effects - an established technique in Dafny (Leino, 2010) and region logics (Banerjee et al., 2013; Bao et al., 2015). In this case, a frame indirectly describes the locations that a computation may not change (Borgida et al., 1995). Framing allows the proof to carry properties of effectful terms, such as function applications, since properties that are true for unchanged locations will remain valid (Bao et al., 2018). Footnote 2: A complete apporach would require a notion of allocation effects that specify store allocation occurs during a computation. As this report focuses on the proof the re-ordering rule (Section 4.8), allocation effects are omitted. Given two worlds \(\mathrm{W}=(L_{1},L_{2},f)\) and \(\mathrm{W}^{\prime}=(L_{1}^{\prime},L_{2}^{\prime},f^{\prime})\), and two sets of locations \(L\) and \(L^{\prime}\), we define the relation of \(\mathrm{W}\) and \(\mathrm{W}^{\prime}\) (written as \(\mathrm{W}\sqsubseteq_{(L,L^{\prime})}\mathrm{W}^{\prime}\)) as follows: \[\mathrm{W}\sqsubseteq_{(L,L^{\prime})}\mathrm{W}^{\prime} \stackrel{{\mathrm{def}}}{{=}} L\subseteq L_{1}\wedge L\subseteq L_{1}^{\prime}\wedge L^{ \prime}\subseteq L_{2}\wedge L^{\prime}\subseteq L_{2}^{\prime}\wedge\] \[(\forall\ell_{1},\ell_{2}.\ell_{1}\in L_{1}\wedge\ell_{2}\in L_{2 }\wedge(\ell_{1},\ell_{2})\in f\Rightarrow(\ell_{1},\ell_{2})\in f^{\prime})\wedge\] \[(\forall\ell_{1},\ell_{2}.(\ell_{1}\in L_{1}\vee\ell_{2}\in L_{2 })\wedge(\ell_{1},\ell_{2})\in f^{\prime}\Rightarrow(\ell_{1},\ell_{2})\in f)\] We write \(\mathrm{W}\sqsubseteq\mathrm{W}^{\prime}\) to mean \(\mathrm{W}\sqsubseteq_{(\mathrm{dom}_{1}(\mathrm{W}),\,\mathrm{dom}_{2}( \mathrm{W}))}\mathrm{W}^{\prime}\). ### Binary Logical Relations for \(\lambda_{\varepsilon}^{*}\) This section presents the definition of binary logical relations for \(\lambda_{\varepsilon}^{*}\). The relational value environment has to satisfy the context interpretation. We define the interpretation of typing contexts: \[\begin{array}{rcl}G[[\varnothing^{\#}]]&=&\varnothing\\ G[[(\Gamma,x:T^{q})^{\#}]]&=&\{(\mathrm{W},\hat{H};(x\mapsto(v_{1},v_{2}))) \mid(\mathrm{W},\hat{H})\in G[[\Gamma^{\#}]]\ \wedge\ \varphi\subseteq\mathrm{dom}(\Gamma)\ \wedge\ q\subseteq \mathrm{dom}(\Gamma)\ \wedge\\ &(\mathrm{W},v_{1},v_{2})\in\mathcal{V}[[T]]^{\hat{H}}\wedge\\ &(\forall\,q.q^{\prime}.q\subseteq\varphi\ \wedge\ q^{\prime}\subseteq\varphi \wedge\Rightarrow\\ &&(\mathrm{loc}_{\hat{H}_{1}}((q*))\cap\mathrm{loc}_{\hat{H}_{1}}(q^{\prime}*) \subseteq\mathrm{loc}_{\hat{H}_{1}}((q*\cap q^{\prime}*))\wedge\\ &&\mathrm{loc}_{\hat{H}_{2}}((q*))\cap\mathrm{loc}_{\hat{H}_{2}}(q^{\prime}*) \subseteq\mathrm{loc}_{\hat{H}_{2}}((q*\cap q^{\prime}*))))\}\end{array}\] In the above definition, \(\hat{H}\) ranges over relational value environment that are finite maps from variables \(x\) to pairs of values \((v_{1},v_{2})\). If \(\hat{H}(x)=(v_{1},v_{2})\), then \(\hat{H}_{1}(x)\) denotes \(v_{1}\) and \(\hat{H}_{2}(x)\) denotes \(v_{2}\). The Binary Value InterpretationThe definition of binary value interpretation of types is shown in Fig. 8. 
The relational interpretation of type \(T\), written as \(\mathcal{V}[\llbracket T\rrbracket^{\hat{H}}\), is a set of tuples of form \((\mathrm{W},\mathit{v}_{1},\mathit{v}_{2})\), where \(\mathit{v}_{1}\) and \(\mathit{v}_{2}\) are values, and \(\mathrm{W}\) is a world. We say \(\mathit{v}_{1}\) and \(\mathit{v}_{2}\) are related at type \(T\) with respect to \(\mathrm{W}\). Ground TypesA pair of Boolean values are related if they are both true or false. A pair of locations \((\mathit{\ell}_{1},\mathit{\ell}_{2})\) are related if they are in the domain of the relational store with respect to \(\mathrm{W}\), (written as \((\mathit{\sigma}_{1},\mathit{\sigma}_{2}):\mathrm{W}\)), such that \(\mathrm{W}(\mathit{\ell}_{1},\mathit{\ell}_{2})\). It means that a pair of related locations store related values. Function TypesTwo closure records, \(\langle H_{1},(\lambda x.t_{1})^{\mathit{q}_{1}}\rangle\) and \(\langle H_{2},(\lambda x.t_{2})^{\mathit{q}_{2}}\rangle\), are related at type \(T^{\mathit{p}}\rightarrow^{\epsilon}U^{r}\) with respect to world \(\mathrm{W}\), meaning that it satisfies the following conditions: * The set of locations reachable from the two closure records are well-formed with respect to the world, _i.e._, \(\mathrm{locs}(\langle H_{1},(\lambda x.t_{1})^{\mathit{q}_{1}}\rangle)\subseteq \mathrm{dom}_{1}(\mathrm{W})\) and \(\mathrm{locs}(\langle H_{2},(\lambda x.t_{2})^{\mathit{q}_{2}}\rangle)\subseteq \mathrm{dom}_{2}(\mathrm{W})\). * If a pair of locations \((\mathit{\ell}_{1},\mathit{\ell}_{2})\) are related at world \(\mathrm{W}\), then \(\mathit{\ell}_{1}\) is reachable from its closure record (_i.e._, \(\mathrm{locs}(\langle H_{1},(\lambda x.t_{1})^{\mathit{q}_{1}}\rangle)\)) if and only if \(\mathit{\ell}_{2}\) is reachable from its closure record (_i.e._, \(\mathrm{locs}(\langle H_{2},(\lambda x.t_{2})^{\mathit{q}_{2}}\rangle)\)). * The arguments are allowed if * \(\mathrm{W}\sqsubseteq_{(\mathrm{locs}(\langle H_{1},(\lambda x.t_{1})^{ \mathit{q}_{1}}\rangle),\mathrm{locs}(\langle H_{2},(\lambda x.t_{2})^{ \mathit{q}_{2}}\rangle))}\mathrm{W}^{\prime}\); and * the arguments \(\mathit{v}_{1}\) and \(\mathit{v}_{2}\) are related at type \(T\) with respect \(\mathrm{W}^{\prime}\); and Figure 8: Binary value and term interpretation for the \(\lambda_{\varepsilon}^{*}\)-calculus. * the overlapping locations reachable from the functions and their arguments are permissible by the argument's qualifier \(p\), _i.e._, \(\operatorname{locs}(\langle H_{1},(\lambda x.t_{1})^{q_{1}}\rangle)\cap \operatorname{locs}(v_{1})\subseteq\operatorname{locs}_{\hat{H}_{1}}(p)\) and \(\operatorname{locs}(\langle H_{2},(\lambda x.t_{2})^{q_{2}}\rangle)\cap \operatorname{locs}(v_{2})\subseteq\operatorname{locs}_{\hat{H}_{2}}(p)\). * Under their extened value environments \(H_{1};(x,v_{1})\) and \(H_{2};(x,v_{2})\), \(t_{1}\) and \(t_{2}\) are reduced to some values \(v_{1}^{\prime}\) and \(v_{2}^{\prime}\) with some final stores \(\sigma_{1}^{\prime}\), \(\sigma_{2}^{\prime}\) and world \(\operatorname{W}^{\prime\prime}\), such that * the world \(\operatorname{W}^{\prime\prime}\) are extended from the world \(\operatorname{W}^{\prime}\), such that \(\operatorname{W}^{\prime}\sqsubseteq\operatorname{W}^{\prime\prime}\); and * \(\sigma_{1}^{\prime}\) and \(\sigma_{2}^{\prime}\) are related with respect to world \(\operatorname{W}^{\prime\prime}\), _i.e._, \((\sigma_{1}^{\prime},\sigma_{2}^{\prime}):\operatorname{W}^{\prime\prime}\). 
* \(v_{1}^{\prime}\) and \(v_{2}^{\prime}\) are related at type \(U\) with respect to world \(\operatorname{W}^{\prime\prime}\); and * If the return value's qualifier \(r\) depends on the argument (_i.e._, \(x\in r\)), then the locations reachable from \(v_{1}^{\prime}\) and \(v_{2}^{\prime}\) are subsets of those reachable both from the function and \(r\), plus those reachable from the arguments; otherwise (_i.e._, \(x\notin r\)), they are just subset of those reachable both from the function and \(r\); and * If a bound variable \(x\) appears in the effect \(\varepsilon\), meaning the function body may modify the argument, then the effect will include the qualifier that may reach the value of function argument \(p\); otherwise it is just \(\varepsilon\). The Binary Term InterpretationTwo related terms, \(t_{1}\) and \(t_{2}\), are defined based on the relation of their computational behaviors, _i.e._, returned values, reachability qualifiers and effects, which is defined by \(\mathcal{E}\llbracket T\ \varepsilon\rrbracket_{\varphi}^{\hat{H}}\). It means for all related stores with respect to world, \((\sigma_{1},\sigma_{2}):\operatorname{W}\), if * \(t_{1}\) is evaluated to some value \(v_{1}\) with some final store \(\sigma_{1}^{\prime}\); and * \(t_{2}\) is evaluated to some value \(v_{2}\) with some final store \(\sigma_{2}^{\prime}\); and * there exists a world \(\operatorname{W}^{\prime}\), such that \(\operatorname{W}\sqsubseteq\operatorname{W}^{\prime}\); and * \(v_{1}\) and \(v_{2}\) are related at type \(T\) with respect to world \(\operatorname{W}^{\prime}\); and * \(\sigma_{1}^{\prime}\) and \(\sigma_{2}^{\prime}\) are related with respect to \(\operatorname{W}^{\prime}\); and * The locations reachable from the values in the domain of pre-stores are subset of those reachable from \(\operatorname{locs}_{\hat{H}_{1}}((\varphi\cap q))\) and \(\operatorname{locs}_{\hat{H}_{2}}((\varphi\cap q))\) for each of the term; and * The effect captures what may be read/modified in the pre-state store. Note that we interpret the function body (after substitution) and other terms separately, which allows us to provide more precise reasoning in the logical relations of function types. ### Metatheory This section discusses several key lemmas used in the proof of compatibility lemmas (Section 4.6) and soundness of the re-ordering rules (Section 4.8). #### 4.5.1. Well-formedness **Lemma 4.3** (Well-formed value interpretation).: _Let \((W,\hat{H})\in G\llbracket\Gamma^{\varphi}\rrbracket\). If \((W,v_{1},v_{2})\in\mathcal{V}\llbracket T\rrbracket^{\hat{H}}\), then \(\operatorname{locs}(v_{1})\subseteq\operatorname{dom}_{1}(W)\) and \(\operatorname{locs}(v_{2})\subseteq\operatorname{dom}_{2}(W)\)._ Proof.: By induction on type \(T\) and the constructs of value \(v_{1}\) and \(v_{2}\). **Lemma 4.4** (Well-formed Typing context interpretation).: _Let \((W,\hat{H})\in G\llbracket\Gamma^{\varphi}\rrbracket\), then for all \(q\subseteq\varphi\), \(\operatorname{locs}_{\hat{H}_{1}}(q)\subseteq\operatorname{dom}_{1}(W)\) and \(\operatorname{locs}_{\hat{H}_{2}}(q)\subseteq\operatorname{dom}_{2}(W)\)._ Proof.: By definition of the typing context interpretation and Lemma 4.3. **Lemma 4.5**.: _Let \((W,\hat{H})\in G\llbracket\Gamma^{\varphi}\rrbracket\), then \(\operatorname{dom}(\hat{H}_{1})=\operatorname{dom}(\hat{H}_{2})= \operatorname{dom}(\Gamma)\), and \(\operatorname{dom}(\Gamma)*\)._ Proof.: Immediately by the definition of typing context interpretation and the definition of saturation in Fig. 2. 
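These well-formedness lemmas are all phrased in terms of the locations reachable from values and qualifiers. As a concrete reference point, here is a small OCaml sketch of the reachability interpretation of Fig. 4. It is a standalone variant of the earlier evaluator sketch's value type in which closures keep their qualifier annotation (and drop the body), since the qualifier is what reachability consults; all names here are our own.

```ocaml
(* Sketch of locs(v) and locs_H(q) from Fig. 4; illustrative only. *)
module LocSet = Set.Make (Int)

type value =
  | VBool of bool
  | VLoc  of int
  | VClos of (string * value) list * string * string list
      (* captured environment H, parameter x, qualifier q of the closure;
         the body is irrelevant for reachability and omitted here *)

(* Values are assumed acyclic (as in the paper), so this recursion terminates. *)
let rec locs (v : value) : LocSet.t =
  match v with
  | VBool _ -> LocSet.empty           (* booleans reach no locations *)
  | VLoc l  -> LocSet.singleton l     (* a location reaches only itself *)
  | VClos (h, _, q) -> locs_env h q   (* a closure reaches locs_H(q) *)

(* locs_H(q): union of the locations reachable from the variables of q in H. *)
and locs_env (h : (string * value) list) (q : string list) : LocSet.t =
  List.fold_left
    (fun acc x ->
      match List.assoc_opt x h with
      | Some v -> LocSet.union acc (locs v)
      | None   -> acc)                (* bound variables in q act as placeholders *)
    LocSet.empty q

(* v ~>^M L holds when locs(v) is a subset of L (the store M is implicit here). *)
let reaches_within (v : value) (l : LocSet.t) : bool = LocSet.subset (locs v) l
```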
#### 4.5.2 World Extension and Relational Stores **Lemma 4.6** (Relational Store Update).: _If \((\sigma_{1},\sigma_{2}):W\), and \((W,\ell_{1},\ell_{2})\in\mathcal{V}[\![\![\,\mathrm{Ref}\,\,\mathsf{B}]\!]^{ \hat{H}}\), and \((W,v_{1},v_{2})\in\mathcal{V}[\![\![\,\mathrm{B}]\!]^{\hat{H}}\), then \((\sigma_{1}[\ell_{1}\mapsto v_{1}],\sigma_{2}[\ell_{2}\mapsto v_{2}]):W\)._ Proof.: By definition of relational stores. **Lemma 4.7** (Relational Store Extension).: _If \((\sigma_{1},\sigma_{2}):W\), and \((W,v_{1},v_{2})\in\mathcal{V}[\![\![\,\mathrm{B}]\!]^{\hat{H}}\), then \((\sigma_{1};(\ell_{1}:v_{1}),\sigma_{2};\ell_{2}:v_{2}):W;(\ell_{1},\ell_{2},( \ell_{1},\ell_{2})\in f)\), where \(\ell_{1}\notin\mathrm{dom}(\sigma_{1})\) and \(\ell_{2}\notin\mathrm{dom}(\sigma_{2})\)._ Proof.: By definition of relational stores. **Lemma 4.8** (Logical Relation Closed Under Relational Value Substitution Extension).: _If \(T\) is closed under \(\Gamma^{\varphi}\), and \((W,\hat{H})\in G[\![\Gamma^{\varphi}]\!]\), then \((W,v_{1},v_{2})\in\mathcal{V}[\![T]\!]^{\hat{H}}\) if and only if \((W,v_{1},v_{2})\in\mathcal{V}[\![\![T]\!]^{\hat{H};\hat{H}^{\prime}}\), for all \(\hat{H}^{\prime}\)._ Proof.: By induction on type \(T\) and the constructs of values \(v_{1}\) and \(v_{2}\). **Lemma 4.9** (Logical Relation Localization).: _If \((W,v_{1},v_{2})\in\mathcal{V}[\![T]\!]^{\hat{H}}\), and for all \(W^{\prime}\), such that \(W\sqsubseteq_{(\mathit{locs}(v_{1}),\mathit{locs}(v_{2}))}W^{\prime}\), then \((W^{\prime},v_{1},v_{2})\in\mathcal{V}[\![T]\!]^{\hat{H}}\)._ Proof.: By the definition of logical relation, Lemma 4.3 and Lemma 4.9. #### 4.5.3 Semantic Typing Context **Lemma 4.11** (Semantic Typing Context Tighten).: _If \((W,\hat{H},)\in G[\![\Gamma^{\varphi}]\!]\), then for all \(p\subseteq\varphi\), \((W,\hat{H})\in G[\![\Gamma^{p}]\!]\)._ Proof.: By the definition of typing context interpretation. **Lemma 4.12** (Semantic Typing Context Extension 1).: _If \((W,\hat{H})\in G[\![\Gamma^{\varphi}]\!]\), and \(q\subseteq\mathrm{dom}(\Gamma)\), and \((W,v_{1},v_{2})\in\mathcal{V}[\![T]\!]^{\hat{H}}\), and \(\mathit{locs}_{\hat{H}_{1}}(q)\cap\mathit{locs}(v_{1})\subseteq\mathit{locs}_{ \hat{H}_{1}}(q)\), and \(\mathit{locs}_{\hat{H}_{2}}(\varphi)\cap\mathit{locs}(v_{2})\subseteq\mathit{ locs}_{\hat{H}_{2}}(q)\), then \((W,\hat{H};(x\mapsto(v_{1},v_{2})))\in G[\![(\Gamma,x:T^{q})^{\varphi,x}]\!]\)_ Proof.: By typing context interpretation, Lemma 4.8, Lemma 4.10 and Lemma 4.11. **Lemma 4.13** (Semantic Typing Context Extension 2).: _If \((W,\hat{H})\in G[\![\Gamma^{\varphi}]\!]\), and \(W\sqsubseteq W^{\prime}\), and \((W^{\prime},v_{1},v_{2})\in\mathcal{V}[\![T]\!]^{\hat{H}}\), and \(\mathit{locs}_{\hat{H}_{1}}(q)\cap\mathit{locs}(v_{1})\subseteq\mathit{locs}_{ \hat{H}_{1}}(p)\), and \(\mathit{locs}_{\hat{H}_{2}}(q)\cap\mathit{locs}(v_{2})\subseteq\mathit{locs}_ {\hat{H}_{2}}(p)\), and \(\mathit{q}\subseteq\varphi\), then \((W^{\prime},\hat{H};(x\mapsto(v_{1},v_{2})))\in G[\![(\Gamma,x:T^{p})^{q,x}]\!]\)._ Proof.: By typing context interpretation, Lemma 4.8, Lemma 4.10 and Lemma 4.11. **Lemma 4.14** (Semantic Typing Context Localization).: _If \((W,\hat{H})\in G[\![\Gamma^{\varphi}]\!]\), and \(W\sqsubseteq_{(\mathit{locs}(\hat{H}_{1}(\varphi)),\mathit{locs}(\hat{H}_{2}( \varphi)))}W^{\prime}\), then \((W^{\prime},\hat{H})\in G[\![\Gamma^{\varphi}]\!]\)._ Proof.: By definition of typing context interpretation and Lemma 4.9. 
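The lemmas of Sections 4.5.2 and 4.5.3 all quantify over worlds (Definition 4.2) and the extension order \(\sqsubseteq_{(L,L^{\prime})}\). Purely as an illustration of those definitions, a world and the extension check can be transcribed as follows; this is an OCaml sketch with our own representation choices (the partial bijection is an association list of location pairs), not the paper's mechanization.

```ocaml
(* Sketch of worlds (Definition 4.2) and the extension relation W [=_{(L,L')} W'. *)
module LocSet = Set.Make (Int)

type world = {
  dom1 : LocSet.t;            (* L1: the first store's locations *)
  dom2 : LocSet.t;            (* L2: the second store's locations *)
  bij  : (int * int) list;    (* partial bijection f, a subset of L1 x L2 *)
}

let world_le (l : LocSet.t) (l' : LocSet.t) (w : world) (w' : world) : bool =
  (* the restricted location sets are covered by both worlds *)
  LocSet.subset l w.dom1 && LocSet.subset l w'.dom1
  && LocSet.subset l' w.dom2 && LocSet.subset l' w'.dom2
  (* every pair related over w's domains stays related in w' *)
  && List.for_all
       (fun (l1, l2) ->
          (not (LocSet.mem l1 w.dom1 && LocSet.mem l2 w.dom2))
          || List.mem (l1, l2) w'.bij)
       w.bij
  (* w' introduces no new pairs that touch w's domains *)
  && List.for_all
       (fun (l1, l2) ->
          (not (LocSet.mem l1 w.dom1 || LocSet.mem l2 w.dom2))
          || List.mem (l1, l2) w.bij)
       w'.bij

(* W [= W' is the instance restricted to W's own domains. *)
let world_le_full (w : world) (w' : world) : bool = world_le w.dom1 w.dom2 w w'
```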
#### 4.5.4 Reachability Qualifiers **Lemma 4.15**: _For all \(\sigma\), \(b\), \(p\) and \(q\), \(b\leadsto^{\sigma}\)\(\mathit{locs}(p\cap q)\), where \(b\) is true or false._ Immediate by the definition in Fig. 4. **Lemma 4.16**: _For all \(\sigma\), \(\ell\), \(p\) and \(q\), \(\ell\leadsto^{\sigma}\)\(\mathit{locs}(p\cap q)\), where \(\ell\notin\mathrm{dom}(\sigma)\)._ Immediate by the definition in Fig. 4. **Lemma 4.17**: \(\langle H_{1},(\lambda x.t_{1})^{p_{1}*\cap q_{1}*}\rangle\leadsto^{\mathrm{ dom}_{1}(W)}\)\(\mathit{locs}_{\hat{H}_{1}}(p_{1}*\,\cap\,q_{1}*)\) _and_ \(\langle H_{2},(\lambda x.t_{2})^{p_{2}*\cap q_{2}*}\rangle\leadsto^{\mathrm{ dom}_{2}(W)}\)\(\mathit{locs}_{\hat{H}_{2}}(p_{2}*\,\cap\,q_{2}*)\)_._ Immediate by the definition in Fig. 4. #### 4.5.5 Effects To streamline the presentation, we introduce the following notation. We write \((\sigma\!\downarrow\!\!L)\) to mean retroving a partial store with respect to \(L\), meaning \(\mathrm{dom}((\sigma\!\downarrow\!\!L))=\mathrm{dom}(\sigma)\cap L\ \wedge\ \forall\ \ell\in \mathrm{dom}((\sigma\!\downarrow\!\!L)).(\sigma\!\downarrow\!\!L)(\ell)= \sigma(\ell)\). **Lemma 4.18** (Read/Write Effects): _If \(\ell\in\mathrm{dom}(\sigma)\), and \(\ell\leadsto^{\sigma}\)\(\mathit{locs}(p\cap q)\), then \(\sigma\leadsto^{\mathit{locs}(q)}\sigma[\ell\mapsto v]\)._ By Lemma 4.16 and interpretation of effects. **Lemma 4.19** (No Effects): \(\sigma\leadsto^{\emptyset}\sigma\)_._ Immediate by the definition of effects. **Lemma 4.20** (SubEffects): _If \(\mathit{locs}(\varepsilon_{1})\subseteq\mathit{locs}(\varepsilon_{2})\), and \(\sigma\leadsto^{\mathit{locs}(\varepsilon_{1})}\sigma^{\prime}\), then \(\sigma\leadsto^{\mathit{locs}(\varepsilon_{2})}\sigma^{\prime}\)._ By the interpretation of effects. By the interpretation of observable effects: the set of locations that may be written in the reduction of \(t\) must be in \(\varepsilon\). Thus, the values stored in the locations \(\sigma\), but are separate from \(\varepsilon*\) must be preserved. #### 4.5.6 Other auxiliary lemmas **Lemma 4.23** (Qualifier intersection distributes over locations): _Let \((W,\hat{H})\in G[[\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![ \![\![\![\![\![\![\![\![\![\![\![\![\![[\![ \![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![ \![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![ \![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![ \![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![ \![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![ \![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![ \![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\![\[\![\![\![\![\![\![\![\[\![\![\![\![\![\[\![\![\![\![\![\![\[\![\![\![\![\![\[\![\![\![\![\![\![\![\![\[\![\[\![\![\![\![\[\![\[\![\![\[\![\[\![\![\![\[\![\[\![\[\![\![\[\[\![\[\[\![\[\[\![\[\[\![\![\[\![\[\[\[\[\[\[\[\[\![\[ 1. \(W^{\prime}\sqsubseteq W^{\prime\prime}\) 2. \(t_{1}\), \(H_{1}\); \((x,v_{1})\), \(\sigma_{1}\;\Downarrow\;v_{1}\), \(\sigma^{\prime}_{1}\) 3. \(t_{1}\), \(H_{2}\); \((x,v_{2})\), \(\sigma_{2}\;\Downarrow\;v_{2}\), \(\sigma^{\prime}_{2}\) 4. \((\sigma^{\prime}_{1},\sigma^{\prime}_{2})\) : \(W^{\prime\prime}\) 5. \((W^{\prime\prime}\), \(v_{3},v_{4})\in\mathcal{V}[[\![\cup]\!]^{\hat{H}}\) 6. 
\((x\in r\Rightarrow v^{\prime}_{1}\leadsto^{\sigma^{\prime}_{1}}(\mathit{ locs}_{\hat{H}_{1}}(r)\cap\mathit{locs}(\langle H_{1},(\lambda x.t_{1})^{q_{1}})\rangle\; \cup\;\mathit{locs}(v_{1}))\wedge\) \(v^{\prime}_{2}\leadsto^{\sigma^{\prime}_{2}}(\mathit{locs}_{\hat{H}_{2}}(r) \cap\mathit{locs}(\langle H_{1},(\lambda x.t_{2})^{q_{1}})\rangle\;\cup\; \mathit{locs}(v_{2})))\) 7. \((x\notin r\Rightarrow v^{\prime}_{1}\leadsto^{\sigma^{\prime}_{1}}(\mathit{ locs}_{\hat{H}_{1}}(r)\cap\mathit{locs}(\langle H_{1},(\lambda x.t_{1})^{q_{1}})\rangle)\wedge\) \(v^{\prime}_{2}\leadsto^{\sigma^{\prime}_{2}}(\mathit{locs}_{\hat{H}_{2}}(r) \cap\mathit{locs}(\langle H_{2},(\lambda x.t_{2})^{q_{2}})\rangle))\) Proof.: By Lemma 4.13, \((W^{\prime},(H_{1},H_{2});(x\mapsto(v_{1},v_{2})))\in G[[\![(\Gamma,x:T^{P})^{ q,x}]\!]]\). Thus, there exists \(W^{\prime\prime}\), such that \((W^{\prime\prime}\), \(t_{1},t_{2})\in\mathcal{E}[\![[U^{r}\,\epsilon]\!]^{(H_{1},H_{2});(x,(v_{1},v _{2}))}_{q,x}\!]\), which can be used to prove (2) - (4). (6) and (7) can be proved by inspecting \(x\in r\), Lemma 4.13 and Lemma 4.14. Lemma 4.25 (Semantic Application).: _Let \((W,\hat{H})\in G[\![\![\,^{\varphi}]\!]\). If \(W\sqsubseteq(\{\mathit{locs}(\langle H_{1},(\lambda x.t_{1})^{q_{1}}\rangle), \mathit{locs}(\langle H_{1},(\lambda x.t_{1})^{q_{1}}\rangle)\}\;\mathit{W}^{\prime}\) and \((W^{\prime}\), \(\langle H_{1},(\lambda x.t_{1})^{q_{1}}\rangle,\langle H_{2},(\lambda x.t_{2 })^{q_{2}}\rangle)\in\mathcal{V}[\![\![\,^{P^{\varphi}\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Proof.: By the typing context interpretation, value interpretation in Fig. 8 and Lemma 4.15. Lemma 4.28 (Compatibility: Variables).: _If \(x:T^{\,q}\in\Gamma\) and \(x\subseteq\varphi\), then \(\Gamma^{\,\varphi}\models x\approx_{\mathit{log}}x:T^{\,x}\,\varnothing\)_ Proof.: Immediate by the typing context interpretation in Fig. 8. 
Lemma 4.29 (Compatibility: \(\lambda\)).: _If \((\Gamma\,,\ x:T^{\,p})^{q,x}\models t_{1}\approx_{\mathit{log}}t_{2}:U^{\,r}\ \varepsilon,\,q\subseteq\varphi\), then \(\Gamma^{\,\varphi}\models(\lambda x.t_{1})^{q_{1}}\approx_{\mathit{log}}( \lambda x.t_{2})^{q_{2}}:(x:T^{\,p}\to^{\,U^{\,r}})^{\,q}\ \varnothing\)._ Proof.: Let \((\mathrm{W},\hat{H})\in G[[\Gamma]]\) and \((\sigma_{1},\sigma_{2}):\mathrm{W}\), and \((\forall\ell_{1},\ell_{2},\mathrm{W}(\ell_{1},\ell_{2})\Rightarrow\ell_{1} \in\mathrm{losc}(\langle H_{1},(\lambda x.t_{1})^{q}\rangle)\iff\ell_{2}\in \mathrm{losc}(\langle H_{2},(\lambda x.t_{2})^{q}\rangle))\). By definition of term interpretation, we need to show there exists \(\mathrm{W}^{\prime}\), \(\sigma^{\prime}\), \(v_{1}\) and \(v_{2}\) such that: 1. \(\mathrm{W}\sqsubseteq(\mathrm{losc}(\langle H_{1},(\lambda x.t_{1})^{q_{1}} \rangle),\mathrm{losc}(\langle H_{2},(\lambda x.t_{2})^{q_{2}}\rangle))\) W\({}^{\prime}\) 2. \((\lambda x.t_{1})^{q_{1}}\), \(\tilde{H}_{1}\), \(\sigma_{1}\ \not\models\ \langle\tilde{H}_{1},(\lambda x.t_{1})^{q_{1}}\rangle\), \(\sigma_{1}^{\prime}\) 3. \((\lambda x.t_{1})^{q_{2}}\), \(\tilde{H}_{2}\), \(\sigma_{2}\ \not\models\ \langle\tilde{H}_{2},(\lambda x.t_{2})^{q_{2}}\rangle\), \(\sigma_{2}^{\prime}\) 4. \((\sigma_{1}^{\prime},\sigma_{2}^{\prime}):\mathrm{W}^{\prime}\) 5. \((\mathrm{W}^{\prime},\,v_{1},v_{2})\in\mathcal{V}[[(x:T^{\,p}\to^{\,\varepsilon }U^{\,r})]]^{\hat{H}}\) 6. \(v_{1}\leadsto^{\sigma_{1}}\mathrm{losc}_{\tilde{H}_{1}}(\varphi\cap q)\) 7. \(v_{2}\leadsto^{\sigma_{2}}\mathrm{losc}_{\tilde{H}_{2}}(\varphi\cap q)\) 8. \(\sigma_{1}\longleftrightarrow^{\,\emptyset}\sigma_{1}^{\prime}\) 9. \(\sigma_{2}\longleftrightarrow^{\,\emptyset}\sigma_{2}^{\prime}\) By reduction semantics, we pick \(\mathrm{W}^{\prime}=\mathrm{W}\), \(v_{1}=\langle VE_{1},(\lambda x.t_{1})^{q_{1}}\rangle\), \(v_{2}=\langle\tilde{H_{2}},(\lambda x.t_{2})^{q_{2}}\rangle\), \(\sigma_{1}^{\prime}=\sigma_{1}\) and \(\sigma_{2}^{\prime}=\sigma_{2}\). Thus, (1)- (4) are discharged. (5) can be proved by Lemma 4.5 and Lemma 4.24. (6) and (7) can be proved by Lemma 4.17. (8) and (9) can be proved by Lemma 4.19. Lemma 4.30 (Compatibility: Allocation).: _If \(\Gamma^{\,\varphi}\models t_{1}\approx_{\mathit{log}}t_{2}:\mathrm{B}^{\,q}\ \varepsilon\), then \(\Gamma^{\,\varphi}\models\textbf{ref}\ t_{1}\approx_{\mathit{log}}\textbf{ref} \ t_{2}\ :\ (\mathrm{Ref}\ B)^{\,q}\ \varepsilon\)._ Proof.: Let \((\mathrm{W},\hat{H})\in G[[\Gamma]]\) and \((\sigma_{1},\sigma_{2}):\mathrm{W}\). 
By the assumption, we know that there exists \(\sigma_{1}^{\prime}\), \(\sigma_{2}^{\prime}\), \(\mathrm{W}^{\prime}\), \(v_{1}\) and \(v_{2}\), such that * \(\mathrm{W}\sqsubseteq\mathrm{W}^{\prime}\) * \(t_{1}\), \(\tilde{H}_{1}\), \(\sigma_{1}\ \not\models\ \ v_{1}\), \(\sigma_{1}^{\prime}\) * \(t_{2}\), \(\tilde{H}_{2}\), \(\sigma_{2}\ \not\models\ \ v_{2}\), \(\sigma_{2}^{\prime}\) * \((\sigma_{1}^{\prime},\sigma_{2}^{\prime}):\mathrm{W}^{\prime}\) * \((\mathrm{W}^{\prime},v_{1},v_{2})\in\mathcal{V}[[\mathrm{B}]]^{\hat{H}}\) * \(v_{1}\leadsto^{\sigma_{1}}\mathrm{losc}_{\hat{H}_{1}}(\varphi\cap q)\) * \(v_{2}\leadsto^{\sigma_{2}}\mathrm{losc}_{\tilde{H}_{2}}(\varphi\cap q)\) * \(\sigma_{1}\longleftrightarrow^{\mathrm{losc}_{\tilde{H}_{1}}(\varepsilon)} \sigma_{1}^{\prime}\) * \(\sigma_{2}\longleftrightarrow^{\mathrm{losc}_{\tilde{H}_{2}}(\varepsilon_{1})} \sigma_{2}^{\prime}\) By reduction semantics, we know * **ref**\(t_{1}\), \(\hat{H}_{1}\), \(\sigma_{1}\ \not\models\ \ell_{1}\), \(\sigma_{1}^{\prime}\); \((\ell_{1},v_{1})\), where \(\ell_{1}\notin\mathrm{dom}(\sigma_{1}^{\prime})\) * **ref**\(t_{2}\), \(\hat{H}_{2}\), \(\sigma_{2}\ \not\models\ \ell_{2}\), \(\sigma_{2}^{\prime}\); \((\ell_{2},v_{1})\), where \(\ell_{2}\notin\mathrm{dom}(\sigma_{2}^{\prime})\) By Lemma 4.7, we know \((\sigma_{1}^{\prime};(\ell_{1}\mapsto v_{1}),\sigma_{2}^{\prime};(\ell_{2} \mapsto v_{2}))\ :\mathrm{W}^{\prime};((\ell_{1}\mapsto v_{1}),(\ell_{2}\mapsto v_{2}), \{(\ell_{1},\ell_{2})\})\). The rest of the proof can be done by the definition of value interpretation, Lemma 4.21 and Lemma 4.16. Lemma 4.31 (Compatibility: Dereference (!)).: _If \(\Gamma^{\,\varphi}\models t_{1}\approx_{\mathit{log}}t_{2}:(\mathrm{Ref}\ B)^{\,q}\ \varepsilon\), then \(\Gamma^{\,\varphi}\models\mathrm{!}t_{1}\approx_{\mathit{log}}\!t_{2}:B^{\, \emptyset}\ \varepsilon\vDash q\)._ Proof.: Let \((\mathrm{W},\hat{H})\in G\big{[}\![\Gamma^{\varphi}]\!]\) and \((\sigma_{1},\sigma_{2}):\mathrm{W}.\) By the assumption, \((\mathrm{W},t_{1},t_{2})\in\mathcal{E}\big{[}\![\mathrm{Ref}\ B^{q}\ \varepsilon]\!] \(\!]_{\varphi}^{\hat{H}},\) and reduction semantics, we know there exists \(\sigma_{1}^{\prime}\), \(\sigma_{2}^{\prime}\), \(\ell_{1}\) and \(\ell_{2}\) such that * \(\mathrm{W}\sqsubseteq\mathrm{W}^{\prime}\) * \(t_{1}\), \(\hat{H_{1}}\), \(\sigma_{1}\Downarrow\ell_{1}\), \(\sigma_{1}^{\prime}\), where \(\sigma_{1}^{\prime}(\ell_{1})=v_{1}\) * \(t_{2}\), \(\hat{H_{2}}\), \(\sigma_{2}\Downarrow\ell_{2}\), \(\sigma_{2}^{\prime}\), where \(\sigma_{2}^{\prime}(\ell_{2})=v_{2}\) * \((\sigma_{1}^{\prime},\sigma_{2}^{\prime}):\mathrm{W}^{\prime}\) * \((\mathrm{W}^{\prime},\ell_{1},\ell_{2})\in\mathcal{V}\big{[}\![\mathrm{Ref}\ B] \!]^{\hat{H}}\) * \(\ell_{1}\leadsto^{\sigma_{1}}\mathrm{losc}_{\hat{H_{1}}}(\varphi\cap q)\) * \(\ell_{2}\leadsto^{\sigma_{2}}\mathrm{losc}_{\hat{H_{2}}}(\varphi\cap q)\) * \(\sigma_{1}\leadsto^{\mathrm{losc}_{\hat{H_{1}}}(\varepsilon)}\sigma_{1}^{\prime}\) * \(\sigma_{2}\leadsto^{\mathrm{losc}_{\hat{H_{2}}}(\varepsilon)}\sigma_{2}^{\prime}\) We can finish the proof by reduction semantics, value interpretation, Lemma 4.15, Lemma 4.20, where we pick \(\sigma_{1}^{\prime\prime}\) to be \(\sigma_{1}^{\prime}\), \(\sigma_{2}^{\prime\prime}\) to be \(\sigma^{\prime}\), and \(\mathrm{W}^{\prime\prime}\) to be \(\mathrm{W}^{\prime}\). 
**Lemma 4.32** (Compatibility: Assignments (:=)).: _If \(\Gamma^{\varphi}\models t_{1}\approx_{\mathit{log}}t_{2}:(\mathrm{Ref}\ B)^{q} \varepsilon_{1}\), \(\Gamma^{\varphi}\models t_{3}\approx_{\mathit{log}}t_{4}:\mathrm{B}^{\varnothing}\varepsilon_ {2}\), then \(\Gamma^{\varphi}\models t_{1}:=t_{3}\approx_{\mathit{log}}t_{2}:=t_{4}:B^{ \varnothing}\varepsilon_{1}\triangleright\varepsilon_{2}\triangleright q\)._ Proof.: Let \((\mathrm{W},\hat{H})\in G\big{[}\![\Gamma^{\varphi}]\!]\) and \((\sigma_{1},\sigma_{2}):\mathrm{W}.\) By the first assumption, we know that there exists \(\sigma_{1}^{\prime}\), \(\sigma_{2}^{\prime}\), \(\mathrm{W}^{\prime}\), \(\ell_{1}\) and \(\ell_{2}\) such that * \(\mathrm{W}\sqsubseteq\mathrm{W}^{\prime}\) * \(t_{1}\), \(\hat{H_{1}}\), \(\sigma_{1}\Downarrow\ell_{1}\), \(\sigma_{1}^{\prime}\) * \(t_{2}\), \(\hat{H_{2}}\), \(\sigma_{2}\Downarrow\ell_{2}\), \(\sigma_{2}^{\prime}\) * \((\sigma_{1}^{\prime},\sigma_{2}^{\prime}):\mathrm{W}^{\prime}\) * \((\mathrm{W}^{\prime},\ell_{1},\ell_{2})\in\mathcal{V}\big{[}\![\mathrm{Ref}\ B] \!]^{\hat{H}}\) * \(f_{1}\leadsto^{\sigma_{1}}\mathrm{losc}_{\hat{H_{1}}}(\varphi\cap q)\) * \(\ell_{2}\leadsto^{\sigma_{2}}\mathrm{losc}_{\hat{H_{2}}}(\varphi\cap q)\) * \(\sigma_{1}\leadsto^{\mathrm{losc}_{\hat{H_{1}}}(\varepsilon_{1})}\sigma_{1}^{\prime}\) * \(\sigma_{2}\leadsto^{\mathrm{losc}_{\hat{H_{2}}}(\varepsilon_{1})}\sigma_{2}^{\prime}\) By the second assumption, we know that there exists \(\sigma_{1}^{\prime\prime}\), \(\sigma_{2}^{\prime\prime}\), \(\mathrm{W}^{\prime\prime}\), \(v_{1}\) and \(v_{2}\), such that * \(W^{\prime}\sqsubseteq\mathrm{W}^{\prime\prime}\) * \(t_{3}\), \(\hat{H_{1}}\), \(\sigma_{1}^{\prime}\Downarrow\ell_{1}\), \(\sigma_{1}^{\prime\prime}\) * \(t_{4}\), \(\hat{H_{2}}\), \(\sigma_{2}^{\prime}\Downarrow\ell_{2}\), \(\sigma_{2}^{\prime\prime}\) * \((\sigma_{1}^{\prime\prime},\sigma_{2}^{\prime\prime}):\mathrm{W}^{\prime\prime}\) * \((\mathrm{W}^{\prime\prime},v_{1},v_{2})\in\mathcal{V}\big{[}\![\mathrm{B}]\!]^{ \hat{H}}\) * \(v_{1}\leadsto^{\sigma_{1}^{\prime}}\mathrm{losc}_{\hat{H_{1}}}(\varphi\cap\varnothing)\) * \(v_{2}\leadsto^{\sigma_{2}^{\prime}}\mathrm{losc}_{\hat{H_{2}}}(\varphi\cap\varnothing)\) * \(\sigma_{1}^{\prime}\leadsto^{\mathrm{losc}_{\hat{H_{1}}}(\varepsilon_{2})}\sigma_{1}^{\prime\prime}\) * \(\sigma_{2}^{\prime}\leadsto^{\mathrm{losc}_{\hat{H_{2}}}(\varepsilon_{2})}\sigma_{2}^{\prime\prime}\) Then the proof can be done by the reduction semantics, Lemma 4.6, value interpretation, Lemma 4.15, Lemma 4.18 and Lemma 4.21. Lemma 4.33 (Compatibility: Applications \((\beta)\)).: _If \(\Gamma^{\varphi}\models t_{1}\approx_{log}t_{2}:\mathsf{B}^{q}\ \varepsilon_{1}\), and \(\Gamma^{\varphi_{2}}\models t_{3}\approx_{log}t_{4}:\mathsf{B}^{p}\ \varepsilon_{2}\), and \(\varphi_{1}\subseteq\varphi\) and \(\varphi_{2}\subseteq\varphi\), then \(\Gamma^{\varphi}\models t_{1}\ {t_{3}\approx_{log}t_{2}\ {t_{4}}:(U^{r}\ \varepsilon_{1} \triangleright\varepsilon_{2}\triangleright\varepsilon_{3})\theta}\)._ Proof.: The proof is done by the definition of term interpretation, Lemma 4.25 and Lemma 4.21. 
Lemma 4.34 (Compatibility: Seq).: _If \(\Gamma^{\varphi_{1}}\models t_{1}\approx_{log}t_{2}:\mathsf{B}^{q}\ \varepsilon_{1}\), and \(\Gamma^{\varphi_{2}}\models t_{3}\approx_{log}t_{4}:\mathsf{B}^{p}\ \varepsilon_{2}\), and \(\varphi_{1}\subseteq\varphi\) and \(\varphi_{2}\subseteq\varphi\), then \(\Gamma^{\varphi}\models t_{1};t_{3}\approx_{log}t_{2};t_{4}:\mathsf{B}^{ \varnothing}\ \varepsilon_{1}\triangleright\varepsilon_{2}\triangleright q\)_ Proof.: Let \((\mathrm{W},\hat{H})\in G\llbracket\Gamma^{\varphi}\rrbracket\) and \((\sigma_{1},\sigma_{2}):\mathrm{W}\). By the first assumption, we know that there exists \(\sigma_{1}^{\prime}\), \(\sigma_{2}^{\prime}\), \(\mathrm{W}^{\prime}\), \(b_{1}\) and \(b_{2}\) such that * \(\mathrm{W}\sqsubseteq\mathrm{W}^{\prime}\) * \(t_{1}\), \(\hat{H}_{1}\), \(\sigma_{1}\) \(\Downarrow\)\(b_{1}\), \(\sigma_{1}^{\prime}\) * \(t_{2}\), \(\hat{H}_{2}\), \(\sigma_{2}\) \(\Downarrow\)\(b_{2}\), \(\sigma_{2}^{\prime}\) * \((\sigma_{1}^{\prime},\sigma_{2}^{\prime}):\mathrm{W}^{\prime}\) * \((\mathrm{W}^{\prime},b_{1},b_{2})\in\mathcal{V}\llbracket\llbracket B\rrbracket \hat{H}\) * \(b_{1}\leadsto^{\sigma_{1}}\ \mathrm{losc}_{\hat{H}_{1}}(\varphi\cap q)\) * \(b_{2}\leadsto^{\sigma_{2}}\ \mathrm{losc}_{\hat{H}_{2}}(\varphi\cap q)\) * \(\sigma_{1}\longleftrightarrow^{\mathrm{losc}_{\hat{H}_{1}}(\varepsilon_{1})} \ \sigma_{1}^{\prime}\) * \(\sigma_{2}\longleftrightarrow^{\mathrm{losc}_{\hat{H}_{2}}(\varepsilon_{1})} \ \sigma_{2}^{\prime}\) By the second assumption, we know that there exists \(\sigma_{1}^{\prime\prime}\), \(\sigma_{2}^{\prime\prime}\), \(\mathrm{W}^{\prime\prime}\), \(b_{3}\) and \(b_{4}\), such that * \(\mathrm{W}^{\prime}\sqsubseteq\mathrm{W}^{\prime\prime}\) * \(t_{3}\), \(\hat{H}_{1}\), \(\sigma_{1}^{\prime}\) \(b_{3}\), \(\sigma_{1}^{\prime\prime}\) * \(t_{3}\), \(\hat{H}_{2}\), \(\sigma_{2}^{\prime}\) \(b_{4}\), \(\sigma_{2}^{\prime\prime}\) * \((\sigma_{1}^{\prime\prime},\sigma_{2}^{\prime\prime}):\mathrm{W}^{\prime\prime}\) * \((\mathrm{W}^{\prime\prime},b_{3},b_{4})\in\mathcal{V}\llbracket\llbracket B \rrbracket\hat{H}\) * \(b_{3}\leadsto^{\sigma_{1}^{\prime}}\ \mathrm{losc}_{\hat{H}_{1}}(\varphi\cap\varnothing)\) * \(b_{4}\leadsto^{\sigma_{2}^{\prime}}\ \mathrm{losc}_{\hat{H}_{2}}(\varphi\cap\varnothing)\) * \(\sigma_{1}^{\prime}\longleftrightarrow^{\mathrm{losc}_{\hat{H}_{1}}( \varepsilon_{2})}\ \sigma_{1}^{\prime\prime}\) * \(\sigma_{2}^{\prime}\longleftrightarrow^{\mathrm{losc}_{\hat{H}_{2}}( \varepsilon_{2})}\ \sigma_{2}^{\prime\prime}\) Then the proof can be done by the reduction semantics, value interpretation, Lemma 4.15, and Lemma 4.21. Lemma 4.35 (Compatibility: Subtyping).: _If \(\Gamma^{\varphi}\models t_{1}\approx_{log}t_{2}:S^{p}\ \varepsilon_{1}\) and \(\Gamma\ \vdash\ S^{p}\ \varepsilon_{1}<:T^{q}\ \varepsilon_{2}\) and \(q,\varepsilon_{2}\subseteq\varphi\), then \(\Gamma^{\varphi}\models t_{1}\approx_{log}t_{2}:T^{q}\ \varepsilon_{2}\)._ Proof.: By induction on the subtyping derivation. ### The Fundamental Theorem and Soundness Theorem 4.36 (Fundamental Property).: _If \(\Gamma^{\varphi}\ \vdash\ t:T^{q}\ \varepsilon\), then \(\Gamma^{\varphi}\models t\approx_{log}t:T^{q}\ \varepsilon\)._ Proof.: By induction on the derivation of \(\Gamma^{\varphi}\ \vdash\ t:T^{q}\ \varepsilon\). Each case follows from the corresponding compatibility lemma. 
Lemma 4.37 (Congruency of Binary Logical Relations).: _The binary logical relation is closed under well-typed program contexts, i.e., if \(\Gamma^{\varphi}\models t_{1}\approx_{log}t_{2}:T^{p}\ \varepsilon\), and \(C:(\Gamma^{\varphi};T^{p}\ \varepsilon)\Rightarrow(\Gamma^{\prime}\, \varphi^{\prime};T^{\prime}\,\varphi^{\prime}\ \varepsilon^{\prime})\), then \(\Gamma^{\varphi^{\prime}}\models C[t_{1}]\approx_{log}C[t_{2}]:T^{\prime}\, \varphi^{\prime}\ \varepsilon^{\prime}\)._ Proof.: By induction on the derivation of context \(C\). Each case follows from the corresponding compatibility lemma and may use the fundamental theorem (Theorem 4.36) if necessary. Lemma 4.38 (Adequacy of the binary logical relations).: _The binary logical relation preserves termination, i.e., if \(\emptyset\models t_{1}\approx_{\log}t_{2}:T^{\emptyset}\), then \(\exists\ \sigma,\sigma^{\prime},v.\ t_{1},\ \varnothing,\ \sigma\ \Downarrow v_{1},\ \sigma_{1}^{\prime\prime}\wedge t_{2}, \varnothing,\ \sigma_{2}\ \Downarrow v,\ \sigma_{2}^{\prime}.\)_ Proof.: We know \((\varnothing,\varnothing)\in G[\![\varnothing]\!]\) by the interpretation of typing context. Then we can prove the result by the binary term interpretation (Fig. 8). Theorem 4.39 (Soundness of Binary Logical Relations).: _The binary logical relation is sound w.r.t. contextually equivalence, i.e., if \(\Gamma^{\varphi}\ \vdash\ t_{1}:T^{P}\ \varepsilon\) and \(\Gamma^{\varphi}\ \vdash\ t_{2}:T^{P}\ \varepsilon\), then \(\Gamma^{\varphi}\models t_{1}\approx_{\log}t_{2}:T^{P}\ \varepsilon\) implies \(\Gamma^{\varphi}\models t_{1}\approx_{\mathit{ctx}}t_{2}:T^{P}\ \varepsilon\)._ Proof.: By the refined definition of contextual equivalence, to prove the result, we are given a well-typed context \(C:(\Gamma^{\varphi};T^{P}\ \varepsilon)\ \Rightarrow\ (\varnothing;B^{\varnothing}\ \varnothing)\), and we need to show \(\exists\ \sigma,\sigma^{\prime},v.\ \varnothing\ |\ C[t_{1}]\longrightarrow_{v}^{*}\sigma\ |\ v \wedge\varnothing\ |\ C[t_{2}]\longrightarrow_{v}^{*}\sigma^{\prime}\ |\ v.\) By the assumption, and the congruency lemma (Lemma 4.37), we have \(\varnothing\models C[t_{1}]\approx_{\log}C[t_{2}]:B^{\varnothing}\ \varnothing\), which leads to \(\exists\ \sigma,\sigma^{\prime},v.\ \varnothing\ |\ C[t_{1}]\longrightarrow_{v}^{*}\sigma\ |\ v\wedge\varnothing\ |\ C[t_{2}] \longrightarrow_{v}^{*}\sigma^{\prime}\ |\ v\) by the adequacy lemma (Lemma 4.38). ### Re-ordering Fig. 9 shows the re-ordering rule for \(\lambda_{\varepsilon}^{*}\)-calculus. It permits re-ordering of two terms if they observe disjoint set of store locations specified by reachability qualifiers. This section shows the proof of the re-ordering rule by using our logical relations. To streamline the presentation, we introduce the following notations. Let \(\mathrm{W}=(\sigma_{1},\sigma_{2},f)\) be a world, we write \(\mathrm{W}_{f}\) to mean the partial bijection defined in \(\mathrm{M}\), _i.e._, \(f\). We identify important store invariants entailed by our logical relations. Lemma 4.40 (Store Invariance 1).: _If \(\Gamma^{\varphi}\ \vdash\ t:T^{q}\ \varepsilon\), and \((W,\hat{H})\in G[\![\Gamma^{\varphi}]\!]\), and \((\sigma_{1},\sigma_{2}):W\), and \(\forall t_{1},\ell_{2}\). 
\(W(\ell_{1},\ell_{2}).\ell_{1}\notin L\) and \(\sigma_{1}\longleftrightarrow^{L}\sigma_{1}^{\prime}\), and \(\mathrm{dom}(\sigma_{1}^{\prime})\subseteq\mathrm{dom}(\sigma_{1})\), then we can construct a world \(W^{\prime}\), such that \(W^{\prime}=(\mathrm{dom}(\sigma_{1}^{\prime}),\mathrm{dom}(\sigma_{2}),\,W_{f})\) and \((W^{\prime},t,t)\in\mathcal{E}[\![T^{q}\ \varepsilon]\!]_{\varphi}^{\hat{H}}\)._ Proof.: The first proof obligation can be discharged by the definition of relational store and Lemma 4.4. The second proof obligation can be discharged by Theorem 4.36 and the definition of logical relations on terms. Lemma 4.41 (Store Invariance 2).: _If \(\Gamma^{\varphi}\ \vdash\ t:T^{q}\ \varepsilon\), and \((W,\hat{H})\in G[\![\Gamma^{\varphi}]\!]\), and \((\sigma_{1},\sigma_{2}):W\), and \(\sigma_{1}\longleftrightarrow^{L}\sigma_{1}^{\prime}\), and \(\mathrm{dom}(\sigma_{1}^{\prime})\subseteq\mathrm{dom}(\sigma_{1})\), and \(\mathrm{loc}_{\hat{H}_{1}}(\varphi)\cap L=\varnothing\), then there exists \(W^{\prime}\), such that \(W\sqsubseteq_{(\mathit{loc}_{\hat{H}_{1}}(\varphi),\,\mathit{loc}_{\hat{H}_{2}}(\varphi))}W^{\prime}\) and \((W^{\prime},t,t)\in\mathcal{E}[\![T^{q}\ \varepsilon]\!]_{\varphi}^{\hat{H}}\)._ Proof.: The proof uses Lemma 4.4, Lemma 4.40, Theorem 4.36, and the definition of logical relations on terms. There are two other store invariances regarding the second store, which are similar to the above two and thus are omitted. Figure 9. The re-ordering rule for the \(\lambda_{\varepsilon}^{*}\)-calculus. **Lemma 4.42** (Re-ordering).: _If \(\Gamma^{\varphi_{1}}\models t_{1}:B^{q}\ \varepsilon_{1}\), and \(\Gamma^{\varphi_{2}}\models t_{2}:B^{p}\ \varepsilon_{2}\), and \(\varphi_{1}\subseteq\varphi\), and \(\varphi_{2}\subseteq\varphi\), and \(\varphi_{1}^{*}\cap\varphi_{2}^{*}=\varnothing\), then \(\Gamma^{\varphi}\models t_{1};t_{2}\approx_{\log}t_{2};t_{1}:B^{\varnothing}\ \varepsilon_{1}\rhd\varepsilon_{2}\rhd q\)._ Proof.: The proof uses Theorem 4.36, Lemma 4.40, Lemma 4.41, and the definition of logical relations on terms.
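To make the operational content of the re-ordering rule concrete, the following toy sketch (ours, not the \(\lambda_{\varepsilon}^{*}\)-calculus itself) models effectful terms as store-passing functions tagged with the locations they may touch; when the footprints are disjoint (the side condition the rule imposes via reachability qualifiers), the two evaluation orders end in the same store.

```python
# Toy model only (not the lambda_eps^*-calculus): effectful "terms" are
# store -> (value, store) functions tagged with the locations they may touch.

def make_write(loc, value):
    footprint = {loc}
    def term(store):
        new_store = dict(store)
        new_store[loc] = value              # the only location this term touches
        return value, new_store
    return footprint, term

def seq(t_first, t_second, store):
    _, store = t_first(store)
    result, store = t_second(store)
    return result, store

fp1, t1 = make_write("l1", 10)              # observes only location l1
fp2, t2 = make_write("l2", 20)              # observes only location l2
assert fp1.isdisjoint(fp2)                  # the rule's disjointness side condition

store0 = {"l1": 0, "l2": 0}
_, s12 = seq(t1, t2, store0)
_, s21 = seq(t2, t1, store0)
print(s12 == s21)                           # True: the two orders agree
```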
2301.00107
Another irreducibility criterion
Let $f=a_0+ a_{1}x+\cdots+a_m x^m\in \Bbb{Z}[x]$ be a primitive polynomial. Suppose that there exists a positive real number $\alpha$ such that $|a_m| \alpha^m>|a_0|+|a_1|\alpha+\cdots+|a_{m-1}|\alpha^{m-1}$. We prove that if there exist natural numbers $n$ and $d$ satisfying $n\geq \alpha+ d$ for which either $|f(n)|/d$ is a prime, or $|f(n)|/d$ is a prime-power coprime to $|f'(n)|$, then $f$ is irreducible in $\mathbb{Z}[x]$.
Jitender Singh, Sanjeev Kumar
2022-12-31T03:19:03Z
http://arxiv.org/abs/2301.00107v1
# Another irreducibility criterion ###### Abstract. Let \(f=a_{0}+a_{1}x+\cdots+a_{m}x^{m}\in\mathbb{Z}[x]\) be a primitive polynomial. Suppose that there exists a positive real number \(\alpha\) such that \(|a_{m}|\alpha^{m}>|a_{0}|+|a_{1}|\alpha+\cdots+|a_{m-1}|\alpha^{m-1}\). We prove that if there exist natural numbers \(n\) and \(d\) satisfying \(n\geq\alpha+d\) for which either \(|f(n)|/d\) is a prime, or \(|f(n)|/d\) is a prime-power coprime to \(|f^{\prime}(n)|\), then \(f\) is irreducible in \(\mathbb{Z}[x]\). ## 1. Introduction. The classical irreducibility criteria due to Schonemann (1846), Eisenstein (1850), Dumas (1906), and Perron (1907) have become paradigm for testing irreducibility of polynomials having rational coefficients. The demesne revealing riveting facts about irreducibility of polynomials over prescribed domains has always been the cradle of such baroque classical results which for decades have witnessed cogent extensions and generalizations. Such irreducibility criteria have exhibited a close affinity to prime numbers and primality as is evident from the illustrious Buniakowski's conjecture of 1854 which asserts that if \(f\) is an irreducible polynomial having integer coefficients such that the elements in the set \(f(\mathbb{N})\) have no common factors other than \(\pm 1\), then the set \(f(\mathbb{N})\) contains infinitely many prime numbers. The converse of Buniakowski's conjecture holds affirmatively via primality. Another classical irreducibility result due to A. Cohn [1, p. 133] states that if a prime number can be expressed in base 10 as \(\sum_{i=0}^{m}a_{i}10^{i}\) for some positive integer \(m\), then the polynomial \(\sum_{i=0}^{m}a_{i}x^{i}\) is irreducible in \(\mathbb{Z}[x]\). Cohn's result was then generalized to arbitrary base by Brillhart et al. [2] and further in Bonciocat et al. [3]. In [4], Murty provided elementary proof of Cohn's irreducibility criterion. Interestingly, one of the main results of Murty [4], generalized by Girstmair [5] apprised of a strong converse of Buniakowski's conjecture which was further generalized in [6] and [7] for polynomials having integer coefficients. **Theorem A** ([6]).: _Let \(f=a_{0}+a_{1}x+\cdots+a_{m}x^{m}\in\mathbb{Z}[x]\) be a primitive polynomial. Suppose there exists a positive real number \(\alpha\) such that_ \[|a_{m}|\alpha^{m}>|a_{0}|+|a_{1}|\alpha+\cdots+|a_{m-1}|\alpha^{m-1}.\] _If there exist natural numbers \(n\) and \(d\) satisfying \(n\geq\alpha+d\) for which \(f(n)=\pm pd\) for a prime \(p\), then \(f\) is irreducible in \(\mathbb{Z}[x]\)._ **Theorem B** ([7]).: _Let \(f=a_{0}+a_{1}x+\cdots+a_{m}x^{m}\in\mathbb{Z}[x]\) be primitive, and let_ \[H=\max_{0\leq i\leq m-1}\{|a_{i}/a_{m}|\}.\] _Let \(f^{\prime}(x)\) denote the formal derivative of \(f(x)\) with respect to \(x\). If there exist natural numbers \(n\), \(d\), \(k\), and a prime \(p\nmid d\) such that \(n\geq 1+H+d\), \(f(n)=\pm p^{k}d\), and for \(k>1\), also \(p\nmid f^{\prime}(n)\), then \(f\) is irreducible in \(\mathbb{Z}[x]\)._ In the present note, we generalize Theorem A to the case when \(|f(n)|/d\) is a prime-power with the mild condition of coprimality of \(|f(n)|/d\) with \(|f^{\prime}(n)|\). More precisely, we have the following result. **Theorem 1**.: _Let \(f=a_{0}+a_{1}x+\cdots+a_{m}x^{m}\in\mathbb{Z}[x]\) be a primitive polynomial. 
Suppose that there exists a positive real number \(\alpha\) such that_ \[|a_{m}|\alpha^{m}>|a_{0}|+|a_{1}|\alpha+\cdots+|a_{m-1}|\alpha^{m-1}.\] _If there exist natural numbers \(n\) and \(d\) satisfying \(n\geq\alpha+d\) for which \(|f(n)|/d\) is prime, or \(|f(n)|/d\) is a prime-power coprime to \(|f^{\prime}(n)|\), then \(f\) is irreducible in \(\mathbb{Z}[x]\)._ **Example 1**.: For \(k\geq m+2\geq 4\) and \(p\geq 1+d\), the polynomial \[X=-p+x\pm(p^{k-m}d)x^{m}\] satisfies the hypothesis of Theorem 1 with \(\alpha=1\), \(a_{0}=-p\), \(a_{1}=1\); \(a_{i}=0\) for \(i=2,3,\ldots,m-1\); \(a_{m}=\pm p^{k-m}d\); and \(n=p\geq 1+d=\alpha+d\), since we have \(X(p)=\pm p^{k}d\); \(X^{\prime}(p)\equiv 1\mod p\) so that \(\gcd(|X(p)|/d,|X^{\prime}(p)|)=1\), and \[|a_{m}|\alpha^{m}=p^{k-m}d\geq p^{2}>p+1=\sum_{i=0}^{m-1}|a_{i}|\alpha^{i}.\] By Theorem 1, the polynomial \(X\) is irreducible in \(\mathbb{Z}[x]\). **Example 2**.: Now consider the polynomial \[Y=(x-p)+(x-p)^{2}+\cdots+(x-p)^{m-1}\pm(p^{2k-1}d)x^{m}\] for \(k\geq m\geq 2\) and \(p\geq 1+d\). Here, \(a_{i}=\sum_{j=i}^{m-1}\binom{j}{i}(-p)^{j-i}\) for \(i=0,1,\ldots,m-1\); \(a_{m}=\pm p^{2k-1}d\), \(\alpha=1\), and \(n=p\geq 1+d\). We find that \(Y(p)=\pm p^{2k+m-1}d\), \(Y^{\prime}(p)\equiv 1\mod p\). These along with the fact that \(p^{2}>1+p\) yield the following: \[|a_{m}|\alpha^{m}=\frac{p^{2k}d}{p}\geq\frac{(p^{2})^{m}d}{p}>\frac{(1+p)^{m} }{p}>(1+p)\frac{(1+p)^{m-1}-1}{1+p-1}=\sum_{i=0}^{m-1}|a_{i}|\alpha^{i}.\] Since \(a_{m-1}=1\), it follows that \(Y\) is a primitive polynomial. By Theorem 1, the polynomial \(Y\) is irreducible in \(\mathbb{Z}[x]\). ## 2. Proof of Theorem 1. Let \(|f(n)|/d=p^{k}\) for some prime \(p\) and positive integer \(k\). If \(|x|\geq\alpha\), then in view of the hypothesis, we have \(|a_{m}|\alpha^{m}>\sum_{j=0}^{m-1}|a_{j}|\alpha^{j}\). Consequently, we have \[|f(x)|\geq|x|^{m}\Big{(}|a_{m}|-\sum_{i=0}^{m-1}|a_{i}||x|^{-(m-i)}\Big{)}\geq \alpha^{m}\Big{(}|a_{m}|-\sum_{i=0}^{m-1}|a_{i}|\alpha^{-(m-i)}\Big{)}>0,\] which shows that each zero \(\theta\) of \(f\) satisfies \(|\theta|<\alpha\). Now assume on the contrary that \(f(x)=f_{1}(x)f_{2}(x)\) for nonconstant polynomials \(f_{1}\) and \(f_{2}\in\mathbb{Z}[x]\). Since we have \[\pm p^{k}d=f(n)=f_{1}(n)f_{2}(n),\] at least one of \(|f_{1}(n)|\) and \(|f_{2}(n)|\) is divisible by \(p\). Assume that \(p\) divides \(|f_{2}(n)|\). Firstly, let us suppose that \(p\) does not divide \(|f_{1}(n)|\). Then \(p^{k}\) divides \(|f_{2}(n)|\), and so, \(|f_{1}(n)|\) must divide \(d\) so that we have \(|f_{1}(n)|\leq d\). If \(\beta\) (\(\neq 0\)) is the leading coefficient of \(f_{1}\), then \[f_{1}(n)=\beta\prod_{\theta}(n-\theta),\] where the product runs over all zeros \(\theta\) of \(f_{1}\). Observe that each such \(\theta\) satisfies \(|\theta|<\alpha\). Since \[|n-\theta|\geq n-|\theta|>n-\alpha\geq d,\] we arrive at the following: \[d\geq|f_{1}(n)|=|\beta|\prod_{\theta}|n-\theta| > |\beta|d^{\deg f_{1}}\geq|\beta|d\geq d,\] leading to a contradiction. Now assume that \(p\) divides \(|f_{1}(n)|\). Since \(p\) divides \(|f_{2}(n)|\), we must have \(k\geq 2\). Consequently, \(p\) divides \(|{f_{1}}^{\prime}(n)f_{2}(n)+f_{1}(n){f_{2}}^{\prime}(n)|\), which in view of the fact that \[{f_{1}}^{\prime}(n)f_{2}(n)+f_{1}(n){f_{2}}^{\prime}(n)=f^{\prime}(n),\] shows that \(p\) divides \(|f^{\prime}(n)|\). This contradicts the hypothesis. So, \(f\) must be irreducible in \(\mathbb{Z}[x]\). 
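For readers who want to experiment with the criterion, the following sketch (ours, not part of the paper; it assumes SymPy and does not check that \(f\) is primitive) searches for a valid \(\alpha\) by bisection and then tests the hypothesis of Theorem 1 at a candidate pair \((n,d)\). The demo instance is Example 1 with \(p=5\), \(m=3\), \(k=5\), \(d=2\).

```python
from sympy import Poly, factorint, gcd, symbols

x = symbols('x')

def find_alpha(coeffs):
    """Return some alpha > 0 with |a_m| alpha^m > |a_0| + ... + |a_{m-1}| alpha^(m-1),
    located by bisection (the dominance region is an interval (r, oo))."""
    a = [abs(int(c)) for c in coeffs]                  # a_0, ..., a_m
    m = len(a) - 1
    dominates = lambda t: a[m] * t**m > sum(a[i] * t**i for i in range(m))
    hi = 1.0
    while not dominates(hi):
        hi *= 2.0
    lo = 0.0
    for _ in range(200):                               # shrink towards the root
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if dominates(mid) else (mid, hi)
    return hi                                          # dominance holds at hi

def certifies_irreducibility(f_expr, n, d):
    """Check the hypothesis of Theorem 1 at the pair (n, d).
    Primitivity of f is NOT checked here and must be verified separately."""
    p = Poly(f_expr, x)
    coeffs = p.all_coeffs()[::-1]                      # a_0, ..., a_m
    alpha = find_alpha(coeffs)
    if n < alpha + d:
        return False
    val = abs(p.eval(n))
    if val % d != 0:
        return False
    q = val // d
    factors = factorint(q)
    if len(factors) != 1:                              # |f(n)|/d must be a prime power
        return False
    _, k = next(iter(factors.items()))
    if k == 1:                                         # a prime: nothing more to check
        return True
    return gcd(q, abs(p.diff(x).eval(n))) == 1         # prime power coprime to f'(n)

# Example 1 with p = 5, m = 3, k = 5, d = 2:  X = -5 + x + 50 x^3, n = p = 5.
print(certifies_irreducibility(-5 + x + 50*x**3, 5, 2))   # True
```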
The following remark and examples illustrate the present idea and its advantage over the results already known in the domain. **Remark**.: Note that Theorem A is the special case of Theorem 1 with \(k=1\). The significance of Theorem 1 lies in the fact that whenever each of Theorems A, B, and 1 is applicable, Theorems A and B may require a tedious factorization of integers. This is demonstrated in the following explicit examples. **Example 3**.: Consider the polynomial \[Z=9-x+72x^{18}.\] The smallest value of \(n\) for which Theorem 1 is applicable for \(Z\) is \(n=9\) with \(\alpha=1\), \(d=8\), and \(Z(9)/8=3^{38}\), whereas the smallest value of \(n\) for which Theorems A and B are applicable is \(n=28\) with \(d=13\) and \[Z(28)/13=619774506599223645785433953,\] which is a 27-digit prime number. **Example 4**.: Consider the following polynomials \(Z_{d}\) as mentioned in [7]: \[Z_{d}=p^{k}-x\pm(p^{k}d)x^{m},\ 2\leq d\leq p^{k}-1,\ k\geq 2,\] where \(k,m,d\) are positive integers and \(p\) is a prime number. Here, \(a_{0}=p^{k}\), \(a_{1}=-1\), \(a_{i}=0\) for \(i=2,\ldots,m-1\), and \(a_{m}=\pm p^{k}d\). Taking \(\alpha=1\) and \(n=p^{k}\), we have \[|a_{m}|\alpha^{m}=p^{k}d>p^{k}+1 = \sum_{i=0}^{m-1}|a_{i}|\alpha^{i};\ n=p^{k}\geq 1+d=\alpha+d,\] \[|Z_{d}(p^{k})|/d = p^{k(1+m)};\ Z_{d}{}^{\prime}(p^{k})\equiv-1\mod p,\] so that \(|Z_{d}(p^{k})|/d\) is coprime to \(|Z_{d}{}^{\prime}(p^{k})|\). Thus by Theorem 1, the polynomial \(Z_{d}\) is irreducible in \(\mathbb{Z}[x]\). In particular, for the aforementioned values of \(n\) and \(\alpha\), \(Z_{p^{k}-1}\) is irreducible by Theorem 1, whereas its irreducibility cannot be easily concluded from Theorem A or Theorem B.
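The numbers quoted in Example 3 are easy to re-check directly (our own quick script, assuming SymPy for the primality test):

```python
from sympy import isprime

Z = lambda t: 9 - t + 72 * t**18

print(Z(9) // 8 == 3**38)                         # True: Z(9)/8 is the prime power 3^38
q = Z(28) // 13
print(Z(28) % 13 == 0, len(str(q)), isprime(q))   # True 27 True
```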
2309.08379
PatFig: Generating Short and Long Captions for Patent Figures
This paper introduces Qatent PatFig, a novel large-scale patent figure dataset comprising 30,000+ patent figures from over 11,000 European patent applications. For each figure, this dataset provides short and long captions, reference numerals, their corresponding terms, and the minimal claim set that describes the interactions between the components of the image. To assess the usability of the dataset, we finetune an LVLM model on Qatent PatFig to generate short and long descriptions, and we investigate the effects of incorporating various text-based cues at the prediction stage of the patent figure captioning process.
Dana Aubakirova, Kim Gerdes, Lufei Liu
2023-09-15T13:10:36Z
http://arxiv.org/abs/2309.08379v1
# PatFig: Generating Short and Long Captions for Patent Figures ###### Abstract This paper introduces Quent PatFig, a novel large-scale patent figure dataset comprising 30,000+ patent figures from over 11,000 European patent applications. For each figure, this dataset provides short and long captions, reference numerals, their corresponding terms, and the minimal claim set that describes the interactions between the components of the image. To assess the usability of the dataset, we finetune an LVLM model on Quent PatFig to generate short and long descriptions, and we investigate the effects of incorporating various text-based cues at the prediction stage of the patent figure captioning process. ## 1 Introduction Patents are at the economically strategic crossroads of Artificial Intelligence and Intellectual Property, serving as a cornerstone of technical innovation[5]. A pivotal yet largely untapped aspect at the confluence of visual and linguistic analysis is the study of patent figures. These figures are central to the comprehension and elucidation of patent applications, often providing a more efficient medium for conveying complex scientific or technical information than text alone [6, 12]. They comprise technical drawings, block diagrams, flow charts, plots, and grayscale photographs [32]. While prior research has delved into captioning scientific figures, the specific domain of patent figure captioning remains largely unexplored. We introduce Quent PatFig, a comprehensive patent figure dataset with long and short descriptions, bolstering research in areas like image-to-text, figure-based patent retrieval, figure classification, segmentation, and text-to-image generation. Using PatFig, we train image captioning models to aid patent attorneys in improving figure captions. By fine-tuning the Large Vision Language Model MiniGPT-4 [37] on PatFig and adding textual cues during predictions, we strive to boost caption accuracy and patent-specific relevance. The ultimate goal is to connect vision and language, by comprehending visual information designed to facilitate human cognition of abstract, technical, and scientific concepts. This endeavor is particularly relevant in the context of patent figures, which often encapsulate complex and abstract concepts in a visual format [14]. Figure 1: Given an image and a prompt, our figure captioning models generate long and short descriptions. Note that the models are separate for the two types of descriptions. ## 2 Related work ### Patent figure datasets Limited datasets exist for patent figure analysis, primarily targeting image-based patent retrieval. CLEF-IP 2011 [25] provides two such datasets, but with a mere 211 patents and broad image classification across nine categories, it is limited in granularity. The concept dataset [35] has 1000 patent drawings for shoe classification and an additional 2000 mechanical drawings by relevance. Kucer et al.'s DeepPatent [17] offers over 350,000 design patent images1. Such patents naturally lack detailed object names, viewpoints, and captions. Footnote 1: In Europe, “design patents” are termed “Registered Community Design” (RCD). They focus on aesthetic design rather than utility. We introduce Qatent PatFig, a comprehensive dataset with 30,000 patent figures from 11,000+ patents enriched with long and short descriptions, figure types, reference numerals with terms, and patent claims. While similar datasets like SciCap [13] associate scientific figures with captions, patent figures present unique challenges. 
They frequently feature reference numerals, term lists, short as well as long descriptions, and the relation between the terms is detailed in the patent claims. However, extracting these descriptions can be daunting due to varied caption structures and the interspersed nature of reference numerals throughout the patent. ### Patent figure captioning Most patent figure research targets figure-based patent querying [17, 31, 25] and classification [14, 36, 19]. For scientific figure captioning, Chen et al. [10, 9, 8] presented FigCAP, using LSTM models with attention. Qian et al.'s FigJAM [26] produces "caption units", a concept explored earlier with DVQA [15] and FigureQA [16]. SciCap [13] leverages its dataset to train a CNN+LSTM image-captioning model [34]. In this paper, we leverage the recent advancements in Large Vision-Language Models (LVLMs) to address the task of generating short and long captions for patent figures. LLMs, such as LLaMA [29], GPT-3 [22], and Vicuna [11] have demonstrated disruptive progress that can be further extended to large vision-language models [3, 37, 18], thus effectively aligning visual features with the textual space. Yet, their application to the domain of patent figure captioning remains unexplored. We propose to finetune an LVLM to evaluate our dataset and investigate the effectiveness of LVLMs in generating informative and detailed captions for patent figures. ## 3 Building the PatFig dataset In this section, we describe the process of acquiring and pre-processing the data to construct our dataset. ### Data acquisition and pre-processing Qatent's internal Solr [2] database contains complete textual patent data from the European Patent Office (EPO) including publication number, title, abstract, claim, IPC (patent classification), inventors, patent family, applicants, id, and complete description. Based on this database, we initiated the image acquisition process by retrieving the publication numbers within the time range from January 1, 2020, to December 31, 2020. Subsequently, using Espacenet [1], the EPO's patent search website, we scraped a total of 62,513 patent images corresponding to 15,645 unique patents based on the patent publication numbers, enabling for accurate linking of the images to their respective textual patent data. ### Short and long figure caption extraction Short descriptions of patent figures usually follow a standard format, separated by new lines, enabling a rule-based extraction method. These descriptions often appear in a section titled "Brief Description of Drawings". Typical short descriptions start with a figure number and a brief explanation, e.g., _"Fig. 1 depicts a bottle power per an embodiment."_ This uniformity aids in automated extraction of such captions. Our results yielded structured sentences with figure numbers, objects, and viewpoints when available. Long caption extraction poses more challenges due to the varied structure of patent application descriptions. Addressing this, our method involved text normalization, searching for repeated figure number references, and extracting relevant sections until the start of another paragraph or a different figure mention. We also trimmed overly verbose captions.2 Footnote 2: During caption filtering, statistical analysis determined token count ranges for descriptions. For short captions, the range was 10 to 40 tokens, based on the Interquartile Range (IQR) rule. For long ones, it was 40 to 500 tokens. Descriptions outside these bounds were treated as outliers and excluded. 
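The rule-based extraction and the outlier filtering just described can be sketched as follows; this is our own approximation of the procedure, not the paper's code, and the regular expression and token bounds are assumptions.

```python
import re
import statistics

SHORT_CAPTION = re.compile(
    r"fig(?:ure)?\.?\s*(\d+[a-z]?)\s+(?:is|depicts|shows|illustrates)\b.*",
    re.IGNORECASE,
)

def extract_short_captions(brief_description):
    """Map figure numbers to one-line captions found in a
    'Brief Description of Drawings' section."""
    captions = {}
    for line in brief_description.splitlines():
        match = SHORT_CAPTION.search(line.strip())
        if match:
            captions[match.group(1)] = match.group(0).strip()
    return captions

def iqr_filter(captions, mult=1.5):
    """Drop captions whose token count falls outside the IQR-based bounds."""
    lengths = [len(c.split()) for c in captions]
    q1, _, q3 = statistics.quantiles(lengths, n=4)
    lo, hi = q1 - mult * (q3 - q1), q3 + mult * (q3 - q1)
    return [c for c in captions if lo <= len(c.split()) <= hi]

sample = """Fig. 1 depicts a bottle according to an embodiment.
Fig. 2 is a block diagram of the control unit."""
print(extract_short_captions(sample))
```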
### Figure-type extraction We leverage the common structure of short captions and apply rule-based methods to extract key phrases appearing after "are/is a/an", "shows", "illustrates", "depicts", etc. As a result, we retrieved 1506 different classes, which were later reduced to 412 after manual revision and text normalization. We grouped the most frequently appearing categories, ranging from the least abstract (top) to the most abstract (bottom) categories present in the dataset, as illustrated in Figure 2 in the Annex. ### Figure-caption matching with OCR ## 4 Experiments ### Vision-only **Task**: Given an input image and a simple prompt, generate the description. The input image is fed into the model for processing, and the model generates a caption of the image based solely on its visual content. The goal of this experiment is to evaluate the model's ability to understand and describe images without any additional text-based context. ### Vision+Text **Task**: Given an image and a prompt including the patent title, generate the description. **Title**: "Activation of energy devices" **Terms**: 137602: "sensor", 137604: "wired connection", 137650: "surgical site opening", 120: "patient side cart", 137606: "surgical instrument", 137600: "retractor". **Prompt 2**: _Please provide the detailed description of the figure associated with **title**._ **Prompt 3**: _Please provide the detailed description of the figure associated with **title**. The image contains the following reference numerals and **terms**._ This experiment aims to evaluate the impact of added text-based cues on generating contextually accurate captions. The model uses the image as visual input and incorporates the title and terms as additional text-based context for more detailed and relevant captioning. The terms are retrieved from the patent application's complete description, and a subset of 500 samples from the test data is selected to assess the terms' effect. ## 5 Discussion Long descriptions usually mention the different parts that the numbers in the figure refer to; short captions generally do not. As expected, long caption generation generally benefits from adding the terms to the input, in particular for the CIDEr score, conceived as a caption metric (except for the BLEU2 score). Interestingly, short caption generation seems to be disturbed by the term list in the input. This is in line with results by [13] on scientific image captioning. Their BLEU scores are similarly low, e.g., at 0.0231 for vision-only models, and decrease even further when adding textual cues. This might be explained by the fact that the model may overlook crucial visual features, resulting in less accurate captions. Additionally, conflicts between textual and visual cues may also confuse the model. It is well known that neural vision models do not implicitly learn OCR, and with the goal of generating good captions, it is not actually necessary that our model generate the reference numerals: we could simply add the extracted reference numerals to the matching terms in the generated caption. Accordingly, the model's performance is generally better when evaluated without taking the numerals into account. The significant difference between the gold corpus and the generated captions can be attributed to limitations of MiniGPT-4: 1) The use of a frozen Q-former in the visual encoder may result in the loss of key features like visual-spatial grounding. 2) Training only a single projection layer may limit the model's ability to learn comprehensive visual-text alignment effectively. 
## 6 Conclusion and future work This paper introduces the first extensive dataset for gauging the efficiency of Large Vision Language Models (LVLMs) on patent figures. Distinguished by reference numerals, a formal template-like caption style, and a wealth of text data linked to each figure, this dataset provides a unique challenge, differentiating it from conventional captioning tasks. Moreover, PatFig covers a wider range of patent image types than existing datasets, spanning technical drawings, block diagrams, flow charts, plots, and grayscale photographs. It offers multiple data points that can be harnessed for image captioning and potentially for other tasks such as image search and image generation, as well as addressing a broader scope of patent figure analysis tasks. We delved into a key function of LVLMs, exploring the dynamic interplay between language and image during the generation of two distinct caption types: short and long. These variants necessitate different input information, and our findings affirm that our LVLM models can effectively assimilate this information, thereby enhancing the results. Yet, several variations of our experiments remain to be studied, such as changing the training data size, improving the prompts, including the textual cues during the finetuning stage, training directly on texts from which reference numerals have been removed, and, more difficult, removing the numbers from the images. Additionally, the identified patent figure types can be used to categorize the results based on each figure type. An intriguing aspect warranting further study is identifying the threshold where the image itself becomes redundant in the text generation process. In other words, discerning when a text-only large language model can accurately predict the figure's content without directly analyzing it. Future research will expand to investigate the generation of figures from patent text. This could not only streamline the work of patent attorneys significantly but also shed light on the information necessary for drawing a figure and how this data is amalgamated to create a figure. ## 7 Acknowledgments The authors would like to thank the rest of the Qatent team, including all researchers, engineers, developers, and law experts, for their insights and collaboration throughout the project.
2309.10536
The Sign of non-Gaussianity and the Primordial Black Holes Abundance
The abundance of primordial black holes changes in the presence of local non-Gaussianity. A positive non-linear parameter $f_{NL}$ increases the abundance while a negative one reduces it. We show that in non-attractor single-field models of inflation which enhance the curvature power spectrum and may give rise to primordial black holes, $f_{NL}$ is always positive, when computed in correspondence of the peak of the curvature power spectrum where the primordial black hole abundance has its maximum. This implies that the interpretation of the recent pulsar timing arrays data from scalar-induced gravitational waves generated at primordial black hole formation may not be supported by invoking non-Gaussianity within non-attractor single-field models.
Hassan Firouzjahi, Antonio Riotto
2023-09-19T11:33:20Z
http://arxiv.org/abs/2309.10536v1
# The Sign of non-Gaussianity and the Primordial Black Holes Abundance ###### Abstract The abundance of primordial black holes changes in the presence of local non-Gaussianity. A positive non-linear parameter \(f_{NL}\) increases the abundance while a negative one reduces it. We show that in non-attractor single-field models of inflation which enhance the curvature power spectrum and may give rise to primordial black holes, \(f_{NL}\) is always positive, when computed in correspondence of the peak of the curvature power spectrum where the primordial black hole abundance has its maximum. This implies that the interpretation of the recent pulsar timing arrays data from scalar-induced gravitational waves generated at primordial black hole formation may not be supported by invoking non-Gaussianity within non-attractor single-field models. _Introduction_. Very recently the NANOGrav [1; 2], EPTA [4; 5; 6], PPTA [7; 8; 9] and CPTA [10] collaborations have provided evidence for a stochastic background of Gravitational Waves (GWs) detected through the pulsar timing arrays. One immediate question is under which circumstances such GWs can be associated to the formation of Primordial Black Holes (PBHs) during which GWs are inevitably generated at second-order [11]. Their amount is proportional to the the square of the amplitude of the dimensionless curvature perturbation power spectrum \(\mathcal{P}_{\rm{\cal R}}\), \(\Omega_{\rm GW}\sim\mathcal{P}_{\rm{\cal R}}^{2}\). The abundance of PBHs is exponentially sensitive to the same amplitude, \(f_{\rm PBH}\sim\exp(-1/\mathcal{P}_{\rm{\cal R}})\), where \(f_{\rm PBH}\) is the PBH abundance with respect to the total dark matter. The problem is that the observed stochastic GW background is explained by a relatively large values of \(\mathcal{P}_{\rm{\cal R}}\), which has been claimed to lead to a too large PBH abundance [12; 13; 14; 15; 16]. While this negative conclusion may be invalidated by the recent observation that corrections from the non-linear radiation transfer function and the determination of the true physical horizon crossing decrease the PBH abundance [17], one can also rely on the introduction of some local Non-Gaussianity (NG) in the curvature perturbation \[\mathcal{R}=\mathcal{R}_{\rm g}+\frac{3}{5}f_{NL}\left(\mathcal{R}_{\rm g}^{2 }-\langle\mathcal{R}_{\rm g}^{2}\rangle\right), \tag{1}\] where \(\mathcal{R}_{\rm g}\) is the Gaussian component1. The short-scale power spectrum \(\mathcal{P}_{S}\) responsible for the PBH formation is modulated by the presence of a long mode \(\mathcal{R}_{L}\). The threshold \(\mathcal{R}_{c}\) for the formation of the PBHs is shifted approximately by [18] Footnote 1: We are adopting this quadratic expansion to be model independent, even though in general the exact relation between \(\mathcal{R}\) and \(\mathcal{R}_{\rm g}\) can be worked out model by model. However, since typically \(f_{NL}\mathcal{R}_{\rm g}\lesssim 1\), the quadratic expansion is justified. \[\mathcal{R}_{\rm c}\simeq\mathcal{R}_{c}^{\rm g}\left(1-\frac{3}{5}f_{NL} \mathcal{R}_{c}^{\rm g}\right), \tag{2}\] compared to the threshold \(\mathcal{R}_{c}^{\rm g}\) in the Gaussian theory. Therefore, around peaks of the power spectrum of the curvature perturbation, a positive \(f_{NL}\) increases the abundance of the PBHs, while a negative \(f_{NL}\) has the opposite effect, thus helping the agreement with the recent pulsar timing array observations. 
This remains true even when calculating the abundance through a more correct variable, the averaged density contrast [19; 20]. Under general assumptions, in this paper we will show that the sign of \(f_{NL}\) at the peak scale of the power spectrum, where PBHs are mostly formed, is always positive in non-attractor single-field models. This no-go result is intimately related to the fact that \(f_{NL}\) measures the response of the short-scale power spectrum \(\mathcal{P}_{S}\) to the presence of a long mode and the sign of the NG is determined by the rate of growth of \(\mathcal{P}_{S}\). The latter is positive if PBHs need to be produced and this sets the sign of \(f_{NL}\). Our findings automatically imply that NG may not help non-attractor single-field models to relax the tension between the observed stochastic GW background in pulsar timing arrays and the overproduction of PBHs. _Non-attractor single-field models and the sign of NG._ In attractor single-field models the curvature perturbation is constant on superhorizon scales and is equivalent in the spatially flat gauge to a field fluctuation \(\mathcal{R}=-\delta\phi/\phi^{\prime}\), where primes denote derivatives with respect to the number of e-folds. The phase-space trajectory of the long mode perturbation follows that of the background itself. Short-scale modes evolving in a long mode perturbation then follow the phase-space trajectory of the background, with the only difference being the local e-folds which determine the relation between the comoving and the physical wavenumbers. The NG is therefore proportional to the variation of the short-scale power spectrum due to the long-wavelength mode \[\mathcal{P}_{S}(x)=\mathcal{P}_{S}\left[1-\frac{d\ln\mathcal{P}_{S}}{d\ln k_{S}}\mathcal{R}_{L}(x)\right]=\mathcal{P}_{S}\left[1+\frac{12}{5}f_{NL}\mathcal{R}_{L}(x)\right]. \tag{3}\] This modulation is zero at the peak of the short-scale power spectrum and corresponds to a dilation of scales rather than an amplitude enhancement. In non-attractor single-field models, the attractor condition \(\delta\phi^{\prime}=(\phi^{\prime\prime}/\phi^{\prime})\delta\phi\) is violated. In fact, during an Ultra-Slow-Roll (USR) phase, the curvature perturbation grows like \(\mathcal{R}\sim a^{3}\), with \(a\) the scale factor, and therefore in the spatially flat gauge \(\delta\phi=-\phi^{\prime}\mathcal{R}=\mathrm{constant}\), implying that \(\delta\phi^{\prime}=0\). Because of the dependence of the background evolution on the initial kinetic energy, the perturbation may not be mapped into a change in the background clock along the same phase-space trajectory. The long mode perturbations carry no corresponding \(\delta\phi^{\prime}\) and so they shift the USR trajectory to one with a different relationship between \(\phi\) and \(\phi^{\prime}\). In other words, a local measurement is sensitive to \(\phi^{\prime}\) as different observers provide different measurements of the short-scale power spectrum depending on their relative position in the long-wavelength mode. This implies that in USR models the corresponding value of \(f_{NL}\) can be large, even at the peak of the short-scale power spectrum. 
We assume that inflation has multiple stages, containing at least three distinct phases. The first stage is a conventional slow-roll (SR) phase in which the observed large scales, such as the CMB scales, leave the horizon. The power spectrum of these perturbations are fixed by the CMB observations [21] to be \(\mathcal{P}_{\mathcal{R}}\simeq 2\times 10^{-9}\) with \(\mathcal{R}\) being the curvature perturbations. The second phase is when the power spectrum experiences a rapid growth with a prime peak in power spectrum to generate PBHs [22; 23; 24; 25; 26]. A common mechanism for the enhancement of the power spectrum may be the USR setup where the potential is flat [27; 28]. However, we consider a general case and for this purpose, we may call this intermediate non-attractor phase as a "USR-type" phase. All we require from the form of the potential to be such that the power spectrum to increase monotonically during the second phase. The final phase is an attractor SR regime which is extended towards the end of inflation. The transitions between the stages can be either sharp or mild. We present our results for a three-phase setup \(\mathrm{SR}\to\mathrm{non\text{-attractor}}\to\mathrm{SR}\), and the extension of the results to higher multiple phases is straightforward. We do not consider the stochastic random motion of the background field so the behaviour of \(\phi\) is monotonic. The non-attractor phase is extended in the region \(\phi_{e}<\phi<\phi_{s}\) during the time interval \(t_{s}<t<t_{e}\) and we are interested in the growth of power spectrum for the modes which leave the Hubble radius during the non-attractor phase. For PBH formation, we are interested in the short-scale power spectrum and in particular the PBH mass function will be dominated by the PBHs forming when the scale \(k_{\mathrm{pk}}\) corresponding to the peak of the power spectrum will re-enter the Hubble radius. Let us consider therefore the effect of the long mode \(k_{L}\lesssim k_{\mathrm{pk}}\sim k_{S}\). Notice that long mode is itself suffering a period of USR phase, but it has exited the Hubble radius earlier than the scale \(k_{S}\). The measurements of the power spectrum and the bispectrum are made at the end of inflation \(t=t_{f}\) when the modes are frozen. The effects of the long mode on the short modes can be viewed as the modulation of the background quantities at the end of non-attractor phase \(t=t_{e}\). As in separate universe approach, one can view the effects of the long mode as affecting nearby patches slightly differently. Consequently, different patches approach the final attractor phase with slightly different initial conditions modulated by the long mode at the end of non-attractor phase. With this picture in mind the bispectrum for two short modes under the modulation of a long mode can be written as \[\left\langle\mathcal{R}_{L}^{f}\mathcal{R}_{S}^{f}\mathcal{R}_{S}^{f}\right\rangle \simeq\left\langle\mathcal{R}_{L}^{f}\left\langle\mathcal{R}_{S}^{f}\mathcal{R }_{S}^{f}\right\rangle_{\mathcal{R}_{L}^{e}}\right\rangle \tag{4}\] in which \(\mathcal{R}_{S}\) and \(\mathcal{R}_{L}\) represent the short and long modes while the superscript \(f\) and \(e\) indicate the corresponding values at \(t=t_{f}\) and \(t=t_{e}\), respectively. The assumption of having a single-field setup is essential in writing the above relation. If there are extra light fields, then one has to include the modulations by them in the right-hand side of Eq. (4) as well. 
In non-attractor single-field models \(\mathcal{R}_{L}\) and \(\dot{\mathcal{R}}_{L}\) are to be treated as independent variables [29]. Expanding \(\left\langle\mathcal{R}_{S}^{f}\mathcal{R}_{S}^{f}\right\rangle_{\mathcal{R}_{L}^{e}}\) to leading order yields \[\left\langle\mathcal{R}_{L}^{f}\mathcal{R}_{S}^{f}\mathcal{R}_{S}^{f}\right\rangle\simeq\left\langle\mathcal{R}_{L}^{f}\left(\mathcal{R}_{L}^{e}\frac{\partial}{\partial\mathcal{R}_{L}^{e}}\langle\mathcal{R}_{S}^{f}\mathcal{R}_{S}^{f}\rangle+\dot{\mathcal{R}}_{L}^{e}\frac{\partial}{\partial\dot{\mathcal{R}}_{L}^{e}}\langle\mathcal{R}_{S}^{f}\mathcal{R}_{S}^{f}\rangle\right)\right\rangle. \tag{5}\] An implicit assumption in performing the above expansion is that \(\mathcal{R}\) and \(\dot{\mathcal{R}}\) are continuous across the transition. This is the usual assumption that one needs to impose for the continuity of the metric and the extrinsic curvature across the transition. Having said this, we do not impose any assumption on the potential \(V(\phi)\) and its derivatives, as long as \(\mathcal{R}\) and \(\dot{\mathcal{R}}\) are continuous across the transition. Expressing the left-hand side of Eq. (5) in terms of the usual non-Gaussianity parameter \(f_{NL}\), defining the power spectrum in Fourier space as \(\left\langle\mathcal{R}_{\mathbf{k}_{1}}\mathcal{R}_{\mathbf{k}_{2}}\right\rangle=(2\pi)^{3}\delta^{3}(\mathbf{k}_{1}+\mathbf{k}_{2})P(k_{1})\), and discarding the trivial factors of \((2\pi)^{3}\delta^{3}(\mathbf{k})\), which match automatically from momentum conservation, we obtain \[\frac{12}{5}f_{NL}P_{L}^{f}P_{S}^{f}\simeq\langle\mathcal{R}_{L}^{f}\mathcal{R}_{L}^{e}\rangle\frac{\partial P_{S}^{f}}{\partial\mathcal{R}_{L}^{e}}+\left\langle\mathcal{R}_{L}^{f}\dot{\mathcal{R}}_{L}^{e}\right\rangle\frac{\partial P_{S}^{f}}{\partial\dot{\mathcal{R}}_{L}^{e}}, \tag{6}\] in which \(P_{S}^{f}\) and \(P_{L}^{f}\) represent the power spectra at the end of inflation for the short and long modes, respectively. From the above expression we have to calculate correlations like \(\left\langle\mathcal{R}_{L}^{f}\mathcal{R}_{L}^{e}\right\rangle\) for the long mode perturbations at two different times \(t_{e}\) and \(t_{f}\). As explained before, this is because the long mode at the end of the non-attractor phase modulates the power spectrum of the short modes which are measured at the end of inflation. Since the long mode is far outside the horizon at the end of the non-attractor phase, we can treat it as classical and relate \(\left\langle\mathcal{R}_{L}^{f}\mathcal{R}_{L}^{e}\right\rangle\) to \(P_{L}^{f}\) via the ratio of the mode functions at these two times: \[\left\langle\mathcal{R}_{L}^{f}\mathcal{R}_{L}^{e}\right\rangle=\left(\frac{\mathcal{R}_{L}^{e}}{\mathcal{R}_{L}^{f}}\right)P_{L}^{f}, \tag{7}\] and similarly \[\left\langle\mathcal{R}_{L}^{f}\dot{\mathcal{R}}_{L}^{e}\right\rangle=\frac{1}{2}\left(\frac{\mathcal{R}_{L}^{f}}{\mathcal{R}_{L}^{e}}\right)\frac{dP_{L}^{e}}{dt}. \tag{8}\] Plugging the above relations into Eq. 
(6) yields \[\frac{12}{5}f_{NL}=\left(\frac{\mathcal{R}_{L}^{e}}{\mathcal{R}_{L}^{f}}\right)\frac{\partial\ln\mathcal{P}_{S}^{f}}{\partial\mathcal{R}_{L}^{e}}+\frac{1}{2}\left(\frac{\mathcal{R}_{L}^{f}}{\mathcal{R}_{L}^{e}}\right)\frac{\dot{\mathcal{P}}_{L}^{e}}{\mathcal{P}_{L}^{f}}\frac{\partial\ln\mathcal{P}_{S}^{f}}{\partial\dot{\mathcal{R}}_{L}^{e}}, \tag{9}\] in which the dimensionless power spectrum \(\mathcal{P}_{\mathcal{R}}\) is related to the power spectrum via \[\mathcal{P}_{\mathcal{R}}\equiv\frac{k^{3}}{2\pi^{2}}P_{\mathcal{R}}. \tag{10}\] We should now trade the two independent variables \((\mathcal{R}_{L},\dot{\mathcal{R}}_{L})\) for two other variables in terms of which the partial derivatives have a more transparent meaning. From the point of view of a local observer within a region of size \(\sim 1/k_{S}\), the long mode perturbation evolves with time, but with negligible spatial gradients, so the metric takes the following form \[ds^{2}=-dt^{2}+a^{2}(t)e^{2\mathcal{R}_{L}(t)}d\mathbf{x}^{2}. \tag{11}\] We can absorb the long mode into the scale factor via \(\widetilde{a}\equiv ae^{\mathcal{R}_{L}}\) and the corresponding Hubble rate will change as \(\widetilde{H}=H+\dot{\mathcal{R}}_{L}\). Consequently \[d\ln\widetilde{a}=d\mathcal{R}_{L} \tag{12}\] and \[d\widetilde{H}=d\dot{\mathcal{R}}_{L}. \tag{13}\] Eqs. (12) and (13) are two differential relations that can be used to relate \((d\mathcal{R},d\dot{\mathcal{R}})\) to \((d\ln\widetilde{a},d\widetilde{H})\). More specifically, we have \[d\ln\mathcal{P}_{S}=\frac{\partial\ln\mathcal{P}_{S}}{\partial\mathcal{R}_{L}}d\mathcal{R}_{L}+\frac{\partial\ln\mathcal{P}_{S}}{\partial\dot{\mathcal{R}}_{L}}d\dot{\mathcal{R}}_{L}=\frac{\partial\ln\mathcal{P}_{S}}{\partial\ln\widetilde{a}}d\ln\widetilde{a}+\frac{\partial\ln\mathcal{P}_{S}}{\partial\widetilde{H}}d\widetilde{H}. \tag{14}\] Using the relations between \((d\ln\widetilde{a},d\widetilde{H})\) and \((d\mathcal{R},d\dot{\mathcal{R}})\), from the second equality of the above equation we obtain \[d\ln\mathcal{P}_{S}=\frac{\partial\ln\mathcal{P}_{S}}{\partial\ln\widetilde{a}}d\mathcal{R}_{L}+\frac{\partial\ln\mathcal{P}_{S}}{\partial\widetilde{H}}d\dot{\mathcal{R}}_{L}. \tag{15}\] Comparing this differential equation with the first equality of Eq. (14) we obtain \[\frac{\partial\ln\mathcal{P}_{S}}{\partial\mathcal{R}_{L}}=\frac{\partial\ln\mathcal{P}_{S}}{\partial\ln\widetilde{a}} \tag{16}\] and \[\frac{\partial\ln\mathcal{P}_{S}}{\partial\dot{\mathcal{R}}_{L}}=\frac{\partial\ln\mathcal{P}_{S}}{\partial\widetilde{H}}. \tag{17}\] Now, plugging the above relations into formula (9) and replacing \(\widetilde{a}\) and \(\widetilde{H}\) simply by \(a\) and \(H\) yields \[\frac{12}{5}f_{NL}=\left(\frac{\mathcal{R}_{L}^{e}}{\mathcal{R}_{L}^{f}}\right)\frac{\partial\ln\mathcal{P}_{S}^{f}}{\partial\ln a_{e}}+\left(\frac{\mathcal{R}_{L}^{f}}{\mathcal{R}_{L}^{e}}\right)\frac{\dot{\mathcal{P}}_{L}^{e}}{2H_{e}\mathcal{P}_{L}^{f}}\frac{\partial\ln\mathcal{P}_{S}^{f}}{\partial\ln H_{e}}. \tag{18}\] One can think of Eq. (18) as an extension of Maldacena's consistency condition [30] to the non-attractor setups (see also [31; 32]). The importance of this consistency condition is that we can read off the value of \(f_{NL}\) from the properties of the power spectrum, without the need to calculate the bispectrum using either the \(\delta N\) or the in-in formalism at higher orders in perturbation theory. 
So far our analysis was general relying only on the assumption of a single-field inflation model undergoing non-attractor phase(s) during inflation. The working assumption is that the power spectrum experiences rapid growth until it reaching a peak associated to the narrow scale where PBHs are formed. For the modes which leave the Hubble radius during the non-attractor phase and near the peak, the power spectrum locally has the following form in momentum space \[\mathcal{P}_{S}=f(a_{e})\left(\frac{k_{S}}{a_{e}H_{e}}\right)^{n_{\mathcal{R}}- 1}, \tag{19}\] in which \(n_{\mathcal{R}}\) is the spectral index and \(f(a)\) is a function of the background which controls the rapid growth of the power spectrum. Technically speaking, the factor \(f(a)\) comes from the fact that the first slow-roll parameter \(\epsilon\equiv-\dot{H}/H^{2}\) falls off rapidly during the non-attractor phase so the the power spectrum \(\mathcal{P}\propto\epsilon^{-1}\) experiences a rapid growth during the non-attractor phase. For example, in the conventional USR phase \(\epsilon\propto a^{-6}\) and correspondingly \(f(a)=a^{6}\). In our analysis, we do not rely on the particular type of the transition and the form of \(f(a)\) and all we assume is that \(f(a)\) is a growing function of \(a\) to ensure the rapid growth of \(\mathcal{P}_{\mathcal{R}}\) during the non-attractor phase. We emphasize again that the form of power spectrum given in Eq. (19) is valid only locally near the peak which is followed by a rapid increase in power spectrum. The general form of the power spectrum in \(k\)-space is more complicated and may not be even described by a power law behaviour. For example, it can have oscillatory features after the prime peak as in conventional USR setup [22; 23; 24; 25; 26]. However, since we are interested in power spectrum slightly prior and around the peak associated to the narrow scales where the PBHs are formed, then the ansatz (19) is physically justified. From Eq. (19) we infer \[\frac{\partial\ln\mathcal{P}_{S}^{f}}{\partial\ln H_{e}}=-\frac{d\ln\mathcal{ P}_{S}^{f}}{d\ln k_{S}}=1-n_{\mathcal{R}}. \tag{20}\] Near the peak of the power spectrum by definition \((n_{\mathcal{R}}-1)\simeq 0\) and correspondingly we obtain \[f_{NL}^{\rm pk}=\frac{5}{12}\left(\frac{\mathcal{R}_{L}^{e}}{\mathcal{R}_{L}^{ f}}\right)\frac{\partial\ln\mathcal{P}_{S}^{f}}{\partial\ln a_{e}}. \tag{21}\] We note that the prefactor \((\mathcal{R}_{L}^{e}/\mathcal{R}_{L}^{f})\) appears because the mode function in general evolves after the non-attractor phase. This is because the transition from the non-attractor phase to the final attractor phase may be mild so the mode keeps evolving in time until it reaches its final attractor value [33]. The long mode is far outside the horizon after the peak, evolving from its initial value \(\mathcal{R}_{L}^{e}\) at \(t=t_{e}\) to its final value \(\mathcal{R}_{L}^{f}\) at \(t=t_{f}\). Therefore, \(\mathcal{R}_{L}^{f}\) is in phase with \(\mathcal{R}_{L}^{e}\) in \(k\)-space. However, as the background quantities such as the slow-roll parameters are evolving during a mild transition, the mode function may change sign so the ratio \((\mathcal{R}_{L}^{e}/\mathcal{R}_{L}^{f})\) may become negative. On the other hand, if the transition is mild, then the peak in power spectrum will not be significant as the power spectrum evolves in subsequent evolution so it is not a viable model for PBHs formation in the first place. 
Therefore, in what follows, we make an implicit assumption that the transition from the intermediate non-attractor phase to the final attractor phase is sharp enough such that \((\mathcal{R}_{L}^{e}/\mathcal{R}_{L}^{f})\) remains positive. Since the power spectrum is an increasing function of time during the intermediate non-attractor phase, we conclude that \[f_{NL}^{\rm pk}>0. \tag{22}\] While our conclusion about the sign of \(f_{NL}^{\rm pk}\) is general (with the implicit assumption of a sharp enough transition), let us examine it for some non-trivial examples. Let us consider a setup in which a USR phase is followed by an attractor SR phase in which the transition to the final attractor phase can be either sharp or mild. Defining the slow-roll parameter associated to the derivative of the potential at the final attractor phase by \(\sqrt{2\epsilon_{V}}\equiv V_{\phi}/V\), the sharpness of the transition from the intermediate USR phase to the final attractor phase is determined by the parameter \(h\) given by [33] \[h\equiv-6\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}\,, \tag{23}\] in which \(\epsilon_{e}\) is the value of the slow-roll parameter at the end of USR phase. Note that in this convention \(h<0\). For a very sharp transition \(|h|\gg 1\) while for a mild transition \(h\) may be comparable to slow-roll parameters. In order to have sharp enough transition such that the ratio \((\mathcal{R}_{L}^{e}/\mathcal{R}_{L}^{f})\) remains positive, we assume \(\eta_{V}\to 0\) in which \(\eta_{V}\) is the second slow-roll parameter given by \(\eta_{V}=V_{\phi\phi}/V\). The mode function for the modes which leave the horizon during the USR phase is given by [33] \[\mathcal{R}_{k}^{f}=\left(1+\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}\right) \frac{H}{\sqrt{4\epsilon_{V}k^{3}}}. \tag{24}\] Since during the USR phase the slow-roll parameter falls off like \(a^{-6}\), then \(\epsilon_{e}\propto a_{e}^{-6}\). Taking the derivative with respect to \(a_{e}\) we find \[\frac{d\ln\mathcal{P}^{f}}{d\ln a_{e}}=6\sqrt{\frac{\epsilon_{V}}{\epsilon_{e} }}\left(1+\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}\right)^{-1}=\frac{6h}{h-6}. \tag{25}\] On the other hand, the ratio \(\mathcal{R}_{L}^{e}/\mathcal{R}_{L}^{f}\) yields an additional factor \[\frac{\mathcal{R}_{L}^{e}}{\mathcal{R}_{L}^{f}}=\sqrt{\frac{\epsilon_{V}}{ \epsilon_{e}}}\left(1+\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}\right)^{-1}= \frac{h}{h-6}. \tag{26}\] We see that the ratio \((\mathcal{R}_{L}^{e}/\mathcal{R}_{L}^{f})\) is positive as expected. Using Eqs. (25) and (26) in our formula (21) yields \[f_{NL}^{\rm pk}=\frac{5h^{2}}{2(6-h)^{2}}>0. \tag{27}\] For an infinitely sharp transition with \(h\to-\infty\) in which the mode function is frozen immediately after the transition with \(\mathcal{R}_{L}^{e}=\mathcal{R}_{L}^{f}\), from Eq. (27) we obtain the expected result \(f_{NL}^{\rm pk}=5/2\). The expression Eq. (27) agrees with the result for \(f_{NL}\) obtained in [33] where the power spectrum is scale-invariant as well. As a second example, now suppose we extend the above setup such that there is an upward shift \(\Delta V\) in the potential at the end of non-attractor phase, followed by the final SR phase. As in Ref. [34], suppose the upward step in the potential is instantaneous, yielding to a sudden change in inflaton's velocity. 
Imposing the conservation of energy, the inflaton velocity at the end of upward transition \(\pi_{d}\) is related to the velocity at the end of no-attractor phase \(\pi_{e}\) via \[\pi_{d}=-\sqrt{\pi_{e}^{2}-6\frac{\Delta V}{V}}\,, \tag{28}\] in which \(\pi\equiv\phi^{\prime}\) with a prime denoting the derivative with respect to the number of e-folds. The linear mode function is given by [34] \[\mathcal{R}_{k}^{f}=\left(\frac{1}{g}+\sqrt{\frac{\epsilon_{V}}{\epsilon_{e}} }\right)\frac{H}{\sqrt{4\epsilon_{V}k^{3}}}, \tag{29}\] in which \(g\equiv\pi_{d}/\pi_{e}\) with \(0<g<1\). Correspondingly, this yields \[\frac{d\ln\mathcal{P}^{f}}{d\ln a_{e}}=\frac{6hg^{4}+36g^{2}-36}{g^{2}(g^{2}h- 6)}, \tag{30}\] in which the sharpness parameter \(h\) is now defined as \(h\equiv-(6/g)\sqrt{\epsilon_{V}/\epsilon_{e}}\). In addition, the ratio of the mode functions is given by \[\frac{\mathcal{R}_{L}^{e}}{\mathcal{R}_{L}^{f}}=\frac{hg^{2}}{hg^{2}-6}>0. \tag{31}\] Note that if we set \(g=1\) so \(\Delta V=0\), Eqs. (31) and (30) reduce to Eqs. (26) and (25) respectively. Now plugging Eqs. (31) and (30) into our master formula Eq. (21) yields \[f_{NL}^{\rm pk}=\frac{5h(hg^{4}+6g^{2}-6)}{2(g^{2}h-6)^{2}}, \tag{32}\] in exact agreement with [34] for a scale-invariant power spectrum. If we set \(g=1\), corresponding to no bump in potential, then Eq. (32) reduces to Eq. (27). Noting that \(h<0\) and \(0<g<1\), one can check that \(f_{NL}^{\rm pk}>0\) for all allowed values of \((h,g)\) as our theorem predicts. Note that the above value of \(f_{NL}\) was calculated in [34] using the \(\delta N\) formalism to second order in perturbation theory. However, in our approach based on consistency condition, we only need to calculate the linear mode function without the need to go to higher orders in perturbation theory. As a corollary, our theorem implies that in the setups where the power spectrum experiences a suppression going through a minimum, then \(f_{NL}<0\) at the minimum as was observed in a specific setup in [35]. _Conclusions._ In this note we have shown that the nonlinear parameter \(f_{NL}\) in single-field non-attractor models is always positive if calculated for the peak of the enhanced power spectrum. This result implies the NG always increases the PBH abundance. The sign of the NG is fixed by the response of the short-scale power spectrum to the presence of a long mode. If PBHs need to be form, the short-scale power spectrum needs to grow and this set the sign of \(f_{NL}^{\rm pk}\) uniquely. This logic implies that our no-go result does not hold in the case in which the NG is generated after the inflationary phase, e.g. in the presence of a spectator field. Indeed, one can generate PBHs within a spiky model where the comoving curvature power spectrum is enhanced at small scales through a spectator isocurvature field [36]. This isocurvature perturbation will then subsequently decay into radiation perturbation and become a curvature mode after inflation. In such a case the long mode cannot be reabsorbed by a redefinition of the scale factor and therefore the sign of the NG is not defined. As a consequence, \(f_{NL}\) can be negative in models with extra fields. We comment that our conclusion about the sign of \(f_{NL}^{\rm pk}\) requires an implicit assumption that the transition from the non-attractor phase to the final attractor phase be sharp enough so the mode function keeps its original sign. 
Physically, this is the relevant case for PBH formation since, if the transition is not sharp enough, then the peak is not prominent and PBHs may not form in the first place. _Acknowledgments._ H.F. thanks the Department of Theoretical Physics at the University of Geneva for the kind hospitality while part of this work was done. We thank M. Sasaki and M. H. Namjoo for insightful discussions and comments. A.R. thanks the Boninchi Foundation for support.
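As a quick cross-check of the algebra, the following script (ours, appended for the reader; it assumes SymPy) verifies symbolically that Eq. (32) reduces to Eq. (27) at \(g=1\), that Eq. (27) tends to \(5/2\) in the infinitely sharp limit \(h\to-\infty\), and numerically that \(f_{NL}^{\rm pk}>0\) on a sample of allowed \((h,g)\) values.

```python
from itertools import product
from sympy import symbols, simplify, limit, oo

h = symbols('h', negative=True)    # sharpness parameter, h < 0
g = symbols('g', positive=True)    # g = pi_d / pi_e, with 0 < g < 1

f27 = 5*h**2 / (2*(6 - h)**2)                            # Eq. (27): no step
f32 = 5*h*(h*g**4 + 6*g**2 - 6) / (2*(g**2*h - 6)**2)    # Eq. (32): upward step

print(simplify(f32.subs(g, 1) - f27))    # 0: Eq. (32) reduces to Eq. (27) at g = 1
print(limit(f27, h, -oo))                # 5/2: infinitely sharp transition
print(all(f32.subs({h: hv, g: gv}) > 0   # positivity on a sample of allowed values
          for hv, gv in product([-0.1, -1.0, -10.0, -1000.0],
                                [0.05, 0.3, 0.7, 0.99])))   # True
```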
2309.11628
Interactive Flexible Style Transfer for Vector Graphics
Vector graphics are an industry-standard way to represent and share visual designs. Designers frequently source and incorporate styles from existing designs into their own work. Unfortunately, popular design tools aren't well suited for this task. We present VST, Vector Style Transfer, a novel design tool for flexibly transferring visual styles between vector graphics. The core of VST lies in leveraging automation while respecting designers' tastes and the subjectivity inherent to style transfer. In VST, designers tune a cross-design element correspondence and customize which style attributes to change. We report results from a user study in which designers used VST to control style transfer between several designs, including designs participants created with external tools beforehand. VST shows that enabling design correspondence tuning and customization is one way to support interactive, flexible style transfer. We also find that someone using VST can significantly reduce the time and work for style transfer compared to experienced designers using industry-standard tools.
Jeremy Warner, Kyu Won Kim, Bjoern Hartmann
2023-09-20T20:40:54Z
http://arxiv.org/abs/2309.11628v1
# Interactive Flexible Style Transfer for Vector Graphics ###### Abstract. Vector graphics are an industry-standard way to represent and share visual designs. Designers frequently source and incorporate styles from existing designs into their work. Unfortunately, popular design tools are not well suited for this task. We present VST, _Vector Style Transfer_, a novel design tool for flexibly transferring visual styles between vector graphics. The core of VST lies in leveraging automation while respecting designers' tastes and the subjectivity inherent to style transfer. In VST, designers tune a cross-design element correspondence and customize which style attributes to change. We report results from a user study in which designers used VST to control style transfer between several designs, including designs participants created with external tools beforehand. VST shows that enabling design correspondence tuning and customization is one way to support interactive, flexible style transfer. vector graphics, style transfer, graphic design, creativity support tools, human-AI collaboration, computational design tools 
Designers often edit vector graphics' overall appearance or style while retaining their underlying content and structure. In this work, when we write _style_, we refer to the defining visual properties of a design's elements (e.g., color, shape, size, and font). Many alternative and valid definitions of this broad term exist. Style editing tasks arise in multiple situations, such as applying inspirations from a mood board, updating existing graphics to a new visual identity, or exploring multiple alternative style variations. For example, both a novice designer seeking to apply styles from a more polished design to their work and an experienced designer creating several variations of a similar design to present to a client for feedback face this task. This complex task requires many selection and editing operations for different groups of objects. Updating a design to conform to a new visual style can be exceptionally tedious and limits the exploration of different styles, even for experienced designers. One potential solution is to use document-level themes or rules that consistently apply visual attributes to classes of objects. This approach is standard across many design and presentation software tools. For example, web pages use CSS (Cascading Style Sheets) to enable document-level styling, but these style-content links must be manually created and maintained. A notable downside of using document themes or stylesheets is their _rigidity_. Compelling themes require element class information and pre-planning, introducing _viscosity_ (Dasas and others, 2018) into the authoring process. Despite CSS support in SVG (Krishnan and others, 2018) via the <use> tag (Krishnan and others, 2018), most vector graphics avoid it. Another promising direction is to _automatically_ transfer visual styles between graphics using information on how two given designs relate to each other. However, this approach often fails to transfer styles as each designer uniquely intends. This failure stems from two sources: 1) the accuracy limitations of the algorithm and 2) the inherent subjectivity around _good_ style and varying tastes that designers may have. A fully automated approach may transfer styles in undesired or unpredictable ways. The lack of adequate designer controls is a clear barrier to leveraging automation (Krishnan and others, 2018). A tool should enable rapid iteration on different possible style transfer results to address the shortcomings of a fully automatic style transfer approach. Our research aims to combine the benefits of automation with effective controls for customizing and exploring design variations. Our approach combines automatically generated design correspondences with interactive control of how and where to transfer styles. We leverage prior work (Sutton et al., 2019) on generating an automatic correspondence between vector graphics. This method yields a between-design element correspondence (Fig. 2) and element-wise similarity along multiple dimensions.
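As a concrete illustration, one plausible way to represent such a between-design correspondence with per-dimension similarity scores, and to push a user-selected subset of style attributes across it, is sketched below. The data shapes, attribute names, and helper functions are illustrative assumptions for the concepts in this paper, not VST's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """A primitive design element (shape, text, image) and its style attributes."""
    eid: str
    attrs: dict

@dataclass
class Match:
    """Links one Target element to its most similar Source element."""
    source: str
    target: str
    scores: dict = field(default_factory=dict)  # per-dimension similarity, e.g. {"color": 0.9, "text": 0.4}

def transfer_styles(matches, sources, targets, attributes):
    """Copy only the chosen style attributes from matched Source elements to Target elements."""
    for m in matches:
        src, tgt = sources[m.source], targets[m.target]
        for name in attributes:
            if name in src.attrs:
                tgt.attrs[name] = src.attrs[name]

# Usage: copy fill colour and font across one match, leaving every other attribute untouched.
sources = {"s1": Element("s1", {"fill": "#aa3322", "font-family": "Lato", "font-size": 21})}
targets = {"t1": Element("t1", {"fill": "#000000", "font-family": "Arial", "font-size": 14})}
transfer_styles([Match("s1", "t1", {"color": 0.2, "text": 0.8})],
                sources, targets, attributes=["fill", "font-family"])
print(targets["t1"].attrs)  # {'fill': '#aa3322', 'font-family': 'Lato', 'font-size': 14}
```

In this representation, resetting an attribute simply restores the Target element's original value, which mirrors the copy, reset, and customize controls that VST exposes to designers.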
We present a new design tool, VST, short for _Vector Style Transfer_. VST provides designers with an interface to visualize and customize how style flows across designs (Fig. 3). VST displays a dynamic list of element styles, allowing designers to easily copy, reset, and customize element style attributes (see Appendix C for all attributes). With VST, designers can map and remap example Source element styles onto contextually similar elements. VST also features fast and flexible ways to identify, select, and style Target elements. The Output canvas re-renders the stylized Target graphics in real-time with any changes, providing immediate visual feedback. Conceptually, VST expands the _eyedropper_ or element-wise _style copy-paste_ interactions to groups of elements. VST can infer many element relations directly, omitting the need for explicit element structure or class information. Our combined automation-powered interactive style transfer approach means that designers can get the best of both worlds: their style definitions can be based on ad-hoc demonstrations and still be quick to apply flexibly across designs. To evaluate VST's style transfer capability, we recruited six designers to transfer styles between nine designs. Each designer participating in the study successfully used VST to interactively transfer styles to their satisfaction and make nine new Output designs. In a follow-up design replication study, we recruited four expert designers to each manually replicate six of these Output designs in their preferred design tool. The results from this preliminary study suggest that someone using VST may reduce the time and work for this style transfer task compared to experienced designers using industry-standard tools. Our contributions include the following: 1. VST, a design tool that introduces a novel user interface for interactive, user-guided, flexible style transfer for vector graphics. Its key interaction principles are: a) enabling users to edit computed correspondences at multiple levels, and b) enabling users to customize how attributes are transferred between designs across the correspondence. 2. Two user studies that demonstrate: a) that designers can successfully transfer styles between graphics with VST, and b) that designers without VST can spend more time and effort to produce equivalent design results. ## 2. Related Work The most relevant prior work follows several themes: supporting creative processes with automation, inferring design structures, automatic transfer techniques, and other advanced vector graphics design tools. We review each of these in turn. ### Supporting Creative Processes with AI While automation is powerful, gracefully integrating it into existing creative practices demands care. Regarding working with AI as a design material, scholars have elaborated on the need for retaining control (Sutton et al., 2019; Sutton et al., 2019; Sutton et al., 2019; Sutton et al., 2019; Krishnan and others, 2018; Krishnan and others, 2018). For GUI design, Dayama et al. present a method for interactive layout transfer, where the layout of a source design is transferred automatically using a selected template layout while complying with relevant guidelines (Das and others, 2018). In photography, researchers have provided mechanisms for guiding photographers to optimize image aesthetics (Sutton et al., 2019) and to find ideal portrait lighting conditions (Das and others, 2018).
Goal-oriented transformations can also be applied to existing designs (e.g., improving accessibility) (Krishnan and others, 2018) or to produce alternative designs for different viewports (Krishnan and others, 2018). Our rationale for using element relationships between designs as a primary mechanism for transfer is that this mirrors how designers tend to work already when manually transferring styles. Highly related to our line of work are feedforward and example-driven corrections. Feedforward work refers to showing the user the output or result of their action before it happens-a preview of applying different interface actions (Das and others, 2018; Krishnan and others, 2018; Krishnan and others, 2018). For example, OctoPocus provides dynamic guidance to bolster users' ability to learn stroke-based gestures (Das and others, 2018). Example-driven corrections and interaction models like those in FlashMeta (Krishnan and others, 2018) or programming-by-demonstration disambiguation models (Krishnan and others, 2018) provide alternative techniques that address similar problems. Feedforward and inherent feedback can promote UI element functionality understanding to users, though computing this information fast enough for live, interactive contexts can be challenging. With that said, cluing in authors on their actions' impact is valuable. For example, the Lightspeed rendering pipeline enabled interactive prototyping of professional 3D graphics, enabling more design variation exploration (Krishnan et al., 2017). One approach might leverage lower-fidelity previews of variations when interacting with automation, such as design galleries. We avoid using design galleries as our early prototypes showed the varying complexity and breadth were visually overwhelming. For an analogy in text editing: VST spell-checks the entire document, while feedforward suggests autocompletion options given what is already written. Example-based corrections generate a program that satisfies all demonstrated changes, iteratively growing more complex. Example-based style retargeting for websites provides a successful analog to vector graphic style transfer in HTML/CSS (Bartos et al., 2019; Krishnan et al., 2017). Example galleries can effectively support open-ended design authoring, where styles come from potentially multiple sources (Krishnan et al., 2017). While the document-object-model hierarchy is essential to styling web pages, such grouping structures and labels are entirely optional and often absent in vector graphics. Groups may be constructed arbitrarily (e.g., for editing convenience) rather than having any consistent semantic meaning. Designers can encode hierarchical information through groups but frequently opt to style elements directly (Krishnan et al., 2017). Bringing interactive style transfer to vector graphics is a unique problem. ### Inferring Design Structures Researchers have used several approaches to infer underlying or implicit structures in visual designs. Traditionally, this work primarily operates on some structured representation (like HTML or SVG). For user interfaces, large libraries have helped to characterize and infer document structure (Krishnan et al., 2017; Krishnan et al., 2017). Linking styles via direct manipulation and element cloning provide a clear view and control of an element's style properties (Krishnan et al., 2017). There is also work to recognize higher-level design patterns through designs by inducting grammars (Krishnan et al., 2017). 
For the domain of D3 visualizations, Hoque et al. map data types onto shapes/axes to help search for relevant designs (Harper et al., 2017). Harper et al. showcase tools for deconstructing and restyling a D3 visualization by extracting the data and modifying visual attributes of marks (Harper et al., 2017). More recent work also focuses on inferring design structure from images directly. Computer vision techniques are improving on reverse engineering user interface models directly from screenshots (Harper et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). Similar work using vision-based methods has helped leverage attention towards answering questions and understanding mobile UIs (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). Reddy et al. use differentiable compositing to identify pattern instances within a design (Krishnan et al., 2017). Scene graphs have also characterized structural relationships within and between 3D environments (Krishnan et al., 2017). For vector graphics, Shin et al. demonstrate a technique using graph kernels to find relationships between elements of designs (Krishnan et al., 2017). We leverage this preexisting automatic technique to compute a correspondence between design elements (like those shown in Fig. 2). The contribution of this work centers on our novel design tool that goes beyond pure algorithmic automation by enabling flexible interactions between the capabilities of such an algorithm and the designer's high-level styling goals. ### Automatic Transfer Techniques While automatic style transfer techniques can generate impressive image transformations, they are generally functional as _theme selections_. Due to the broad range of shape primitives, graphic designs do not immediately lend themselves to this document-level style transfer approach. The selective extraction and transfer of specific styles are too precise to be encoded in a one-dimensional slider (Krishnan et al., 2017; Krishnan et al., 2017). The variations of vector designs also make mapping onto an otherwise standard template difficult (e.g., facial key points) (Krishnan et al., 2017). Additionally, text can be used to edit image content and style directly (Bartos et al., 2019). While layout is not our tool's focus, prior work highlights optimization techniques that can be used to automatically format text documents (Krishnan et al., 2017). ImagineNet restyles mobile apps with neural style transfer and updating assets in place (Krishnan et al., 2017). To be stylized with image-based techniques, vector graphics must first be rasterized, losing future object-level awareness and scaling abilities. The state of the art in automatic vector generation includes leveraging pixel-based diffusion models (Krishnan et al., 2017) by leveraging a differentiable vector graphics representation (Krishnan et al., 2017). DeepSVG uses GANs to generate and interpolate between SVG teams and shares a large-scale SVG dataset (Krishnan et al., 2017). Kotovenko et al. model a painting using discrete strokes to recreate style transfer better (Krishnan et al., 2017). Within font, some work shows the possibility of even inferring and transferring style between font glyphs (Krishnan et al., 2017; Krishnan et al., 2017). These techniques often give users little to no control of _how_ the style is transferred. 
Our work focuses on optimizing the potential value that these automatic approaches can provide by introducing meaningful high-leverage interactions to customize and control generated output while retaining the core vector graphics representation that designers are familiar with working with. ### Vector Graphics Design Tools Several techniques for authoring or adjusting vector graphics exist and inform this work. Object-Oriented Drawing introduces a new way to create and style elements directly on the canvas (Krishnan et al., 2017). Datalnk supports cloning and binding user-generated symbols to data, facilitating lightweight restyling (Krishnan et al., 2017). Sketch-n-Sketch links drawing code and vector graphics, letting users directly edit the SVG in a canvas, modifying the code which generates it (Krishnan et al., 2017). For mathematical diagramming, Penrose uses layout energy-minimization techniques coupled with a language for specifying explicit styles and content of what to render (Krishnan et al., 2017). Fak uses user demonstrations and program synthesis to create new visualizations (Krishnan et al., 2017). Existing tools can even convert web designs into a vector layout (Krishnan et al., 2017). Para supports binding procedural art generation constraints with graphics, including cases where there are many-to-many constraints (Krishnan et al., 2017). A follow-up project, Dynamic Brushes, combined procedural programming into brush behavior and design, enabling more custom expression (Krishnan et al., 2017). Other design tools have looked at supporting design layout (Krishnan et al., 2017; Krishnan et al., 2017), fashion (Krishnan et al., 2017) and design coloring (Krishnan et al., 2017; Krishnan et al., 2017). ## 3. Vector Style Transfer When transferring styles between vector graphics, designers may identify an inspirational style they want to copy from a Source design. Next, in a Target design, they may identify design elements they would like to stylize. Then, they will update the stylistic attributes of those relevant Target elements using the Source style as a reference. Alternatively, they may first focus on the Target design they wish to change and pull stylistic influences in from a range of Sources, exploring possible variations. Generally, this styling is an iterative and flexible process that involves reasoning about (a) _which elements correspond to each other across designs_ and (b) _which style attributes to transfer_. There is subjectivity regarding the most desired application of style, and higher-level considerations like the overall cohesion of the Target design after styles have transferred further complicate this task. The resulting Output design has the _style_ of one design and the _content/structure_ of another - though this distinction is still inherently subjective. Still, this task (using examples to update existing graphics with new visual styles) is expected in the graphic design process (Goyal et al., 2017; Goyal et al., 2018; Goyal et al., 2018). ### Design Goals A high-quality element correspondence is one way to enable fast and effective style transfer for vector graphics designs. To provide designers with flexible control over style transfer is to provide them with tools to control the correspondence between designs. Moreover, to be worthwhile, the resulting designs should be of satisfying quality and faster to generate than existing tools, especially when considering the cost of learning to use a new tool. 
Grounded in our literature review and personal experience editing graphics, we created these design goals for Vector Style Transfer (VST): * Let designers powerfully tune design correspondences. * Enable flexible control over which styles are transferred. * Reduce the work and time needed for transferring styles. Our vision for how the functionality of VST best fits into existing processes is as a plugin or new tool in existing vector graphics design software. Designers could select an object group and copy its style. Then, they could select any other group within their design document and apply that style - without manually selecting each element subset. Additionally, they could filter which styling attributes they would like to copy. This work could either be used as a starting point to render a design in several alternative styles or to make a set of designs adhere to a single style. ### Exemplar Scenario We will demonstrate VST's functionality with an exemplar scenario involving vector style transfer. Consider Xavier, a designer hired by a local Italian restaurant, _Leonard's_. After a recent renovation, the restaurant is set to have a grand re-opening. Xavier has created a new flyer to help them advertise, which the business manager approves. To unify the brand's style, the business manager also asks him to create new versions of several existing graphics, including menus and a special delivery advertisement. These designs should look like they all refer to the same restaurant. This style unification process Xavier faces involves many repeated manual edits and cross-references. Instead of manually ensuring exact visual consistency, he opens VST and loads in both graphics (Source: the new flyer, Target: the previous advertisement). VST computes a correspondence between elements of these two designs and automatically copies styles between matches. This correspondence technique produces a one-to-many mapping from the Source elements to the Target elements, ensuring that every Target element will be matched, while some Source elements may not be initially matched. Xavier then sees the Output canvas update with newly stylized graphics (Fig. 3). Figure 2. An overview of automated design correspondence. To relate design elements, we first construct a graph from each given design, where the _vertices_ are primitive design elements (e.g., shapes, text, images) and _edges_ are semantic relationships (e.g., same fill, containment, same font). Once the Source and Target graphs are constructed, we then compute a correspondence between the two designs' elements using the technique previously detailed in (Wang et al., 2019). This automatically generated correspondence is VST's basis for (a) how to find similar elements _within_ a design (e.g., for easier selection/styling) and (b) identifying which elements are similar to each other _across_ designs (e.g., determining which initial styles to transfer). Each Target element is linked to a single Source element. Only a subset of links between these designs' elements are shown. For each Target element, styles are copied from the most similar Source element as determined by the design correspondence algorithm. Given the Source and Target graphics, we compute a correspondence between the two designs using a comparison technique introduced by Shin et al. (Shin et al., 2018). This technique represents each design as a multigraph (rather than a typical parent-child hierarchy tree) to support matching elements across a broader range of similar attributes.
Vertices are primitive design elements (e.g., shapes, text, images), and edges represent semantic relationships between elements (e.g., alignment, containment, same fill). This correspondence contains per-element similarity scores across several dimensions (e.g., color, shape, size, and text). In our implementation, correspondences between 20 or fewer elements are generally computed in real-time (\(<\) 1s). Though slower, our study's larger design pairs are still tractable to match, with the largest pair (185 total elements) taking about 100s. Our example set's average matching time per design pair (across Style Transfer Tasks 1 and 2) is 7.78s. Once obtained, match information can be exported and saved for later use. A version of VST for styling pre-matched design pair examples is available at: [https://berkeleyhci.github.io/vst/](https://berkeleyhci.github.io/vst/). ## 4. Evaluations Style preferences are subjective, which means that making absolute statements about a style transfer tool's _performance_ is difficult. Still, we sought to evaluate three key research questions: * How would designers use VST for style transfer? * Could VST stylize realistic, open-ended designs? * Could VST reduce the time or work of styling? ### Style Transfer Evaluation **Method -** To answer RQ1 and RQ2, we ran an exploratory study with six experienced designers (D1-6). Before the study began, we asked designers to create a new design from a given prompt with their preferred design tool. The prompt requested a single menu page design for a local restaurant's (_Leonard's_) mobile phone application. The goal was to include designer-provided source graphics to create a more realistic style transfer scenario. More methodology details are available in Appendix A, and more information about the participant's background is in Appendix B. **Task 1: Basic Graphics Pairs -** After an interface demo and the opportunity to ask questions, designers used VST to transfer styles between five pairs of example designs that the authors prepared. The design pairs we chose for designers to transfer from are shown in Fig. 5 (T1.1-5). We chose these graphics to capture a breadth of different graphic design domains (e.g., art, infographics, UI mockups). We instructed designers to apply styles from the Source to the Target graphics to make the Source and Output as stylistically similar as possible. Once satisfied, they would save the Output graphics and move on to the next pair. **Task 2: Open-Ended Transfer -** To observe how VST handled styling more open-ended realistic designs (RQ2), designers transferred styles from their externally created designs onto three new related templates (T2.1-3). In these tasks, the Source was a menu page created by each designer before the study with their preferred design tool. We matched their designs to three new template pages (a loading screen, a reviews page, and a checkout cart), all for _Leonard's_ mobile app. The generated output design correspondences (Fig. 2) were not hand-tuned at all before the study. ### Style Transfer Results Our style transfer evaluation study found that designers could use VST to control style transfer across basic designs (RQ1), even generating variety in their Output designs from the same inputs. Those designers successfully used VST to flexibly transfer styles from more realistic, open-ended designs created with external tools (RQ2). We take this as an indication that VST enabled the style transfer it was designed to support. 
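Before turning to the individual study results, the following sketch illustrates how per-dimension similarity scores of the kind described above (color, shape, size, and so on) can produce the one-to-many Source-to-Target mapping that seeds VST's initial transfer. The dimensions, weights, and scoring rules here are deliberately simplified stand-ins; they are not a re-implementation of the graph-kernel technique of Shin et al.

```python
def dimension_scores(src, tgt):
    """Toy per-dimension similarities in [0, 1] for one Source/Target element pair."""
    bigger = max(src.get("area", 1), tgt.get("area", 1))
    return {
        "color": 1.0 if src.get("fill") == tgt.get("fill") else 0.0,
        "shape": 1.0 if src.get("tag") == tgt.get("tag") else 0.0,
        "size": 1.0 - abs(src.get("area", 0) - tgt.get("area", 0)) / bigger,
    }

def match_elements(sources, targets, weights=None):
    """One-to-many matching: every Target element gets exactly one best Source element."""
    weights = weights or {"color": 0.3, "shape": 0.5, "size": 0.2}
    def overall(sid, tattrs):
        return sum(weights[d] * s for d, s in dimension_scores(sources[sid], tattrs).items())
    return {tid: max(sources, key=lambda sid: overall(sid, tattrs))
            for tid, tattrs in targets.items()}

# Usage: a two-element Source matched against a three-element Target.
sources = {"s1": {"tag": "rect", "fill": "#aa3322", "area": 400},
           "s2": {"tag": "text", "fill": "#ffffff", "area": 120}}
targets = {"t1": {"tag": "rect", "fill": "#cccccc", "area": 350},
           "t2": {"tag": "text", "fill": "#222222", "area": 90},
           "t3": {"tag": "text", "fill": "#222222", "area": 100}}
print(match_elements(sources, targets))  # {'t1': 's1', 't2': 's2', 't3': 's2'}
```

In VST, tuning the correspondence amounts to overriding entries of such a mapping, after which styles flow along the updated links.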
Each designer participating in the study (D1-6) used VST to generate eight new Output designs successfully. Designers also answered Likert-scale questions regarding their experience with VST (Fig. 8). Style transfer examples from the evaluation are shown in Figures 5 and 6. Designers, despite never using a similar interface before, used VST's features to both (a) modify design correspondences (DG1) and (b) filter and edit styles per correspondence (DG2). Software instrumentation revealed that almost all designers on almost all tasks used VST to tune computed correspondence matches. On average, designers performed 6 such corrections per task. While making these corrections, designers used the functionality to _select similar_ elements to the ones they manually selected. On average, designers performed 7.3 similarity selections and spent about 4.8 minutes per task. As a reminder, designers were only instructed to match the styles to the best of their ability - not to do so as quickly or efficiently as possible. We showcase additional, more complex VST graphics made outside of this study in the Appendix (Fig. 12) and in our paper's accompanying project video. **VST let designers tune design correspondences (DG1).** Overall, designers appreciated the style transfer control that VST provided them. The designers' Likert-scale responses indicated they could produce designs they were satisfied with (Fig. 8). Most designers could see themselves using the tool again and found VST flexible enough to perform style transfer as they intended. Their verbal remarks are corroborated by the frequency with which they used the correspondence correction feature (Average: \(\mu:6.0\), Standard Deviation: \(\sigma=3.8\)) and attribute editing feature (\(\mu:24.0\), \(\sigma=17.3\)). Figure 4. The black lines show an initial correspondence between the elements of the Source and Target designs. The green lines show an alternative, more desired set of links. When users select their desired Source and Target elements and press _Transfer Source Style_, VST will update these links, redirecting the flow of visual styles across designs. **VST enabled flexible control of style transfer (DG2).** The designers created a wide variety of designs, even when given the same input graphics (Fig. 5). For their own provided graphics, designers reproduced a consistent theme across a set of provided vector graphics templates (Fig. 6). Several designers remarked on the convenience of reusing visual styles directly. D4: Very fun! Appealing to a visual thinker who values efficiency and hates repeatedly doing the same things. Magical, "it read my mind!" kind of feeling. While most found it clear how to use the different parts of the prototype to achieve their desired style transfer, there was also feedback that the transfer results were sometimes surprising. This surprise likely stemmed from having multiple ways to style elements (e.g., tuning the correspondence vs. what styles the correspondence transfers). **Designers enjoyed applying broad changes.** Designers valued the ability to apply broad style changes quickly. D3: I was impressed by how well the system generated its "best guess" when I selected the "Copy All." I also thought it was easy to learn and intuitive. It had tools that worked similarly to design software I already used (like dragging values to change the font size). D5: I liked how efficient the transferring process was in closely replicating the desired style with just a button. 
Even if it wasn't completely accurate, the toggle buttons under Copy All made fine-tuning specific aspects of design elements easy - I could definitely see how this interface could reduce the amount of time that a designer would need to update designs. Designers also appreciated directly selecting similar elements easily, which helped broader styling. D4: Being able to select multiple elements precisely is very nice. **Correspondence-based transfer presents novel controls.** No designer reported using a similar style transfer design tool before this study. D6: I have not used anything that performed this exact function before, but I've used a tool to try to analyze an image and find out what fonts were used. It was not as reliable as this tool. While most designers (4/6) indicated an interest in using the tool again, others were hesitant, citing VST's deviation from the types of tools they were familiar with. Some designers recognized the value of a style transfer tool: D4: I have manually copied styles and have had other humans manually copy my own. When successful, this tool manages to give you that feeling of empathy and creative connection ("Wow, the other designer understood my aesthetic and was able to replicate it! I feel they really understand my vision"). When it is not successful, it is easier and less stressful to correct than a human might be. Plus, it is faster than asking another designer, fewer resources, less risk, and when it is successful, high reward! ### Design Replication Evaluation **Method** - To answer RQ3, we ran a follow-up study. Our goal with this study was to compare the time and work required for style transfer in VST with that of an expert using industry-standard design software. We recruited four new expert designers as replication designers (RD1-4). More information about their background is in Appendix B. They were tasked with recreating a subset of the Output graphics from the previous study (T2.1-3) in their preferred design tool (Adobe Illustrator). Given that VST is a novel design tool, there are no users with equivalent VST expertise comparable to the RDs' Illustrator skill. To approximate the performance of an expert VST user, the authors used VST to generate the same Output designs using the same input materials provided to the RDs. This data is labeled VST in Table 1. Further methodology details are in Appendix A. We report the comparison between these three design replication methods in our results.

Figure 5. Task 1 (Style Transfer) – Basic Graphics Pairs. Here, designers D1-6 used VST to transfer styles from the Source to the Target graphics. Both Source and Target designs were provided to the designers. In simpler cases, the design transfer result is uniform across designers (T1.1-3). Still, despite each designer starting from the same pair of designs, variations arose in more complex design pairs (T1.4-5).

Figure 6. Task 2 (Style Transfer) – Open-Ended Transfer. Before the study, we gave designers (D1-6) a prompt for a menu design with specific elements without any style instructions. The column header shows designs that they brought into the study (Sources), and the row header shows design templates (Targets). The inner table shows new designs created by applying styles from their externally created Source design onto previously unseen Target templates. Inspecting each column shows a unified visual style inherited from the Source document, while rows show the Target structure.
**Task: Design Replication** - We selected six Output design examples from Fig. 6 for this designer to replicate in Illustrator (_Goal_ in Fig. 7). We selected designs to include both graphics from every task (T2.1-3) that we gave the original designers and to include one example per designer (D1-6). We provided the RDs with the Source and Target vector graphics files and an image of the generated Output (created initially by D1-6). The RDs were then tasked with transforming the Target graphics to resemble the provided Output. To measure what human adjustment is needed when working with the automatically stylized designs, we also asked the RDs to replicate the Output starting with the initial automatically stylized Output graphics from VST. These graphics (_Auto_) are created by copying all styles using the initial automatic Source and Target correspondence. We asked the RDs to transform the now-partially stylized Target graphics to resemble the Output image. Any difference between these two sets (_Basic_ and _Auto_) would highlight the algorithm's impact on the task time and work. To compare the potential of VST and existing tools, the authors also replicated the same Output designs from the previous study using VST (RT1-6). The same input materials were used as in the Illustrator replication: the Source and Target vector graphics files and an Output image. ### Design Replication Results In our study, using VST to transfer styles was faster than expert replication designers (RD1-4) transferring styles within their preferred design tool (RQ3). The RDs also performed more edit and selection operations using Illustrator than the authors using VST. We report total work as a combination of selection and edit operations. On average, the RDs spent 534 seconds replicating from scratch (_Basic_) and 774 seconds replicating from the output of the correspondence algorithm (_Auto_). In comparison, the authors required, on average, 129 seconds to match styles using VST. A plot of the duration for each task is shown in Fig. 9. Stats averaged over all tasks (RT1-6) are shown in Table 1. Each replication designer also reported the style replication task as difficult and tedious. **Transferring styles with existing tools is tedious.** After replicating the designs in Fig. 7 (RT1-6), the RDs reported on their experience by answering Likert-scale (ranging from 1-7) and open-ended survey questions. They reported that using Illustrator for this style matching task is tedious for both starting points, with _Auto_ slightly more tedious than _Basic_ (Average (\(\mu\)): \(6.8\to 5.8\), Standard Deviation: \(\sigma_{Basic}=1.3\), \(\sigma_{Auto}=0.5\)). The associated scale labels were: 1-_Not tedious at all_ and 7-_Extremely tedious_. They also reported starting from _Auto_ was less fun than _Basic_ (\(\mu\): \(2.0\to 3.8\)), with 1-_Not fun at all_ and 7-_Extremely fun_ (\(\sigma_{Basic}=1.0\), \(\sigma_{Auto}=0.8\)). **Editing from Auto was not faster than Basic.** Combining automated style transfer with existing design software tools may even hinder designer performance. The RDs reached roughly the same Likert-scale level of satisfaction with their final designs' quality from both the _Basic_ and _Auto_ starting points (\(\mu_{Basic}=4.3\), \(\mu_{Auto}\) = 4.5), with 1-_Complexly dissatisfied_ and 7-_Completely satisfied_ (\(\sigma_{Basic}=1.0\), \(\sigma_{Auto}=1.0\)). 
However, they reported that generating the desired Output was harder with _Auto_ than _Basic_ (\(\mu\): \(6.3\to 5.0\)), with 1-_Not difficult at all_ and 7-_Extremely difficult_ (\(\sigma_{Basic}=1.0\), \(\sigma_{Auto}=0.8\)). These stats match their written feedback: RD1: Editing the auto files is harder - there's more variance in the output, and sometimes unnecessary properties were added from the automatic transfer. RD2: In the standard [Basic] file, editing elements is more straightforward, while for the modified [Auto] one, I spent some extra Figure 7. Design Replication Task – A new set of expert designers (replication designers RD1-4) replicated six reference designs (RT1-6) from the previous style transfer evaluation tasks (Fig. 6) using two different starting points: _Basic_ and _Auto_. The first approach involved using Illustrator to transform the _Basic_ input design to the replication goal. The second approach again used Illustrator, but instead has the algorithmic output (_Auto_) as the starting point. We provided the RDs with source styles and target structures from the previous study in vector form and a reference image of the replication goal for both approaches. time cleaning. RD4: I largely had a similar approach to both design files, though the original [Basic] one tended to be easier. **Replication designers wanted transfer tools like VST.** After briefly interacting with VST at the end of the study, all RDs were genuinely interested in trying out an Adobe Illustrator plugin with similar functionality (\(\mu=6.25,\sigma=1.0\)), with 1-_Not at all interested_ and 7-_Extremely interested_. RD4: The prototype looks very interesting! RD1: I would definitely try it when I want to apply vector-based styles to my design. When asked about if and where they would find VST useful: RD1: I can see how this tool would be beneficial for tasks like redesigning an existing UI or early-stage exploration. When asked about other similar tools they have used: RD2: In Figma, we save the font/color as a library preset, then when we change the setting, it automatically updates the components. RD3: The style transfer prototype is more adaptive than design components because files that I need to change may not have a component system. ## 5. Discussion The success of VST demonstrates the value of two key design goals that are relevant as recommendations for other automation-powered correspondence-based transfer tools: include the ability to flexibly _tune generated design correspondences_ (DG1) and include the ability to flexibly _customize what correspondences do_ (DG2). **Tuning Generated Design Correspondences.** Providing powerful and convenient ways to tune correspondences avoids requiring users to make each mapping manually (DG1). In VST, this functionality is represented by our _Selecting Similar_ feature, the ability to view and select elements sharing any of the same values in the customization pane, and the _Similarity Threshold_ feature (which lets users quickly preview selections). **Customizing Correspondence Functions.** Customizing a correspondence retains the flexibility of a manual approach, ensuring that designers still have control (DG2). The domain will ultimately specify what is reasonable to transfer per correspondence. Generally, the designer should be able to control what happens when two objects are linked. In VST, we achieve this through our customization panel, where designers can copy, reset, and customize attribute values. 
We also provide flexible ways to filter this list (e.g., by active selection and showing modified/all attributes). **The Cost of Automation -** One notable point in our results is that starting with the algorithm's output (_Auto_) did not make replication easier. In fact, the RDs reported that starting with the automatically generated algorithm output was more difficult and less fun. Simply throwing automation into existing tools and processes may backfire. This is backed by our quantitative results: the _Auto_ designs, on average, required more work to style than the corresponding _Basic_ starting point (_Basic_: 265 operations, _Auto_: 383 operations). This is jarring, as applying the style transfer algorithm should have the opposite effect -- otherwise, why apply it at all? First, applying a semi-correct transformation reduces cohesion in the design. The lack of cohesion commonly found in _Auto_ designs reduces the efficiency of applying gestalt principles. This makes selecting similar elements to style them together harder. Second, the vast scope of the copied attributes may introduce new work. Incorrectly changing an attribute does not create new work if it already needs to be changed. However, if part of a Source style is not desired in the Output graphics, those attributes must be manually reset to their original Target value. Current design software fails to support this type of style transfer interaction. In contrast, VST features convenient ways to quickly select and explore element styles (double-clicking an element/selection, precision selection controls, visually selecting via the same attribute value). Current correspondence algorithms do not seem to reduce the total work in style transfer otherwise. This is especially true for more complex examples where correspondence accuracy is often lower.

| | | Basic | Auto | VST |
| --- | --- | ---: | ---: | ---: |
| Task Duration (s) | Mean | 532 | 774 | **129** |
| | S.D. | 341 | 347 | 80 |
| Work Operations | Mean | 265.7 | 383.5 | **30.3** |
| | S.D. | 167.8 | 159.2 | 18.9 |
| Attribute Edits | Mean | 80.0 | 113.1 | **13.0** |
| | S.D. | 59.9 | 77.8 | 8.7 |
| Selection Updates | Mean | 185.7 | 270.4 | **17.3** |
| | S.D. | 122.8 | 185.7 | 12.1 |

Table 1. Replication work data – usage statistics averaged over replication tasks RT1-6 (see Fig. 7). The _Basic_ and _Auto_ columns show aggregate data collected from the four expert replication designers (RD1-4), while the _VST_ column shows data from the paper authors using VST to replicate designs.

Figure 8. Summary of Likert survey data from designers D1-6.

## 6. Limitations and Future Work ### Limitations VST is not a general-use vector graphics editing platform. The SVG standard is complex; even industry-standard platforms like Inkscape and Adobe Illustrator may render the same graphics differently. Still, some missing features limited how useful VST was for designers in its current state. Users wanted more advanced layering/z-reordering for sub-selections in complex design areas. Additionally, the current correspondence structure usage limits elements to inheriting styles from one Source element unless manually mixed with other styles. We also did not measure the impact of algorithm matching performance on this task. Informally, study participants D1-6 updated the correspondence an average of six times per task, though our study instrumentation did not record the number of adjusted elements per update. In Shin's prior work (Shin, 2018), the average match accuracy was 95%. However, their evaluation (Shin, 2018) was performed with the Source as an element group within a Target design, rather than a separate design.
Explicitly varying the match quality and leveraging different matching techniques are opportunities for future work. Another limitation of this work is the smaller scale of the surveyed designer population (10 unique designers across both studies). For our design replication study, we worked with four expert designers. While this smaller study size allowed us to deepen the level of feedback and data we gathered, future studies could evaluate a larger expert population to get additional feedback. Future work could conduct a larger-scale study with more designers to potentially collect insights into a broader set of behaviors that designers exhibit. Also, when comparing VST to other tools, the authors have more awareness of the replication goal and task, which likely improves their relative performance. Another evaluation could train experienced designers with VST and have them replicate graphics from the original study. ### Future Work Images can naturally add vibrancy to a design, though VST's style transfer only applies to vector graphics. One future direction is sourcing vector styles directly from images. RD1: It would be great to apply bitmap styling to my vector design. This use case is more common in my workflow. This requires converting the image to vector graphics or a novel style extraction technique. Some features (e.g., colors) are simple to extract, while other features like paths, gradients, shapes, and fonts are potentially much harder to source from an image correctly. For image-to-vector graphics conversion, some research methods (Wang et al., 2018) and commercial tools (Bahdan et al., 2018) exist. However, these methods tend to optimize pixel-based similarity to the source image over a consistent output structure or element resolution. The internal document complexity makes determining correspondences much more challenging. Rasterizing vector graphics is a lossy process with no perfect inverse. Still, given the ubiquity of image-based inspiration, a vector styling tool that uses images as a styling source is an exciting future direction. Better correspondence algorithms may reduce the need for a corrective interface like VST. Consider automatic speech transcription as an analogy: under a certain accuracy threshold, manually transcribing speech is easier than correcting a low-quality generated transcript. The work required to fix the algorithm's output exceeds that of simply creating that same output manually. There is room for improvement in design correspondence accuracy for vector graphics. However, even with the best algorithm, some cases will still need manual tuning. This ambiguity stems from the inherent subjectivity around _good_ style and varying designer tastes. Primarily, our style transfer with this prototype addresses element size, font, stroke, and fill. While designers can modify other features, this feature subset visually dominates the result. A complete list of transferable properties is in Appendix C. Future work could extend this approach into a larger-scale multi-design style linter or unification technique where many designs are edited simultaneously.
The design layout and structure are held constant throughout our style transfer process. Applying the layout from source to target is an exciting and relevant next direction. ## 7. Conclusion We presented a novel design tool called VST (Vector Style Transfer) for flexibly transferring styles across vector graphics designs. We conducted two studies to investigate (1) how designers may use correspondence-based transfer tools like VST and (2) the potential of these tools in relation to traditional industry-standard design tools (e.g., Adobe Illustrator). The first study, an open-ended style transfer evaluation, revealed that despite not previously using any similar tools, experienced designers could effectively transfer styles even across graphics independently created using other design tools. The second study, a preliminary design replication evaluation, suggests that tools like VST may reduce the time and work required to transfer styles across designs compared to traditional design tools. These expert designers also found directly editing automatically stylized graphics more difficult and tedious than the original baseline design templates. This work provides two design recommendations for future design tools to support flexible user control: enable tuning generated design correspondences and customizing how these correspondences transform designs. Figure 9. Plots of the duration, edits, and selections data from the design replication (RT1-6). Along each recorded measure (duration, edits, and selections), the authors using VST outperformed all four expert designers using Adobe Illustrator in replicating the stylized designs. The _Basic_ and _Auto_ plots also include ticks showing the _standard error_ for each task computed over RD1-4. VST was only used once per task to obtain a baseline, so there are no comparable ticks to show.
2309.09073
Enhancing personalised thermal comfort models with Active Learning for improved HVAC controls
Developing personalised thermal comfort models to inform occupant-centric controls (OCC) in buildings requires collecting large amounts of real-time occupant preference data. This process can be highly intrusive and labour-intensive for large-scale implementations, limiting the practicality of real-world OCC implementations. To address this issue, this study proposes a thermal preference-based HVAC control framework enhanced with Active Learning (AL) to address the data challenges related to real-world implementations of such OCC systems. The proposed AL approach proactively identifies the most informative thermal conditions for human annotation and iteratively updates a supervised thermal comfort model. The resulting model is subsequently used to predict the occupants' thermal preferences under different thermal conditions, which are integrated into the building's HVAC controls. The feasibility of our proposed AL-enabled OCC was demonstrated in an EnergyPlus simulation of a real-world testbed supplemented with the thermal preference data of 58 study occupants. The preliminary results indicated a significant reduction in overall labelling effort (i.e., 31.0%) between our AL-enabled OCC and conventional OCC while still achieving a slight increase in energy savings (i.e., 1.3%) and thermal satisfaction levels above 98%. This result demonstrates the potential for deploying such systems in future real-world implementations, enabling personalised comfort and energy-efficient building operations.
Zeynep Duygu Tekler, Yue Lei, Xilei Dai, Adrian Chong
2023-09-16T18:42:58Z
http://arxiv.org/abs/2309.09073v1
# Enhancing personalised thermal comfort models with Active Learning for improved HVAC controls ###### Abstract Developing personalised thermal comfort models to inform occupant-centric controls (OCC) in buildings requires collecting large amounts of real-time occupant preference data. This process can be highly intrusive and labour-intensive for large-scale implementations, limiting the practicality of real-world OCC implementations. To address this issue, this study proposes a thermal preference-based HVAC control framework enhanced with Active Learning (AL) to address the data challenges related to real-world implementations of such OCC systems. The proposed AL approach proactively identifies the most informative thermal conditions for human annotation and iteratively updates a supervised thermal comfort model. The resulting model is subsequently used to predict the occupants' thermal preferences under different thermal conditions, which are integrated into the building's HVAC controls. The feasibility of our proposed AL-enabled OCC was demonstrated in an EnergyPlus simulation of a real-world testbed supplemented with the thermal preference data of 58 study occupants. The preliminary results indicated a significant reduction in overall labelling effort (i.e., 31.0%) between our AL-enabled OCC and conventional OCC while still achieving a slight increase in energy savings (i.e., 1.3%) and thermal satisfaction levels above 98%. This result demonstrates the potential for deploying such systems in future real-world implementations, enabling personalised comfort and energy-efficient building operations. ## 1 Introduction Personal thermal comfort models are used to predict individual-level thermal comfort responses and can capture the distinct preferences between individuals compared to aggregated comfort models [1]. Integrating these models into HVAC controls is essential for occupant-centric controls (OCC) and has been shown in past studies to significantly reduce HVAC energy consumption while enhancing occupants' thermal satisfaction through various modelling approaches and control strategies. For example, Jazizadeh et al. [2] developed a fuzzy-rule-based model to determine the preferred temperature of each occupant and adjusted the centralised HVAC temperature setpoint via BMS to minimize the sum differences between local temperatures and preferred temperatures in each thermal zone. As a result, the daily airflow rates were reduced by 39%, compared to conventional HVAC controls with predefined temperature setpoints, while significantly improving thermal satisfaction. A study conducted by Aryal et al. [3] proposed the use of Na\({}^{\ast}\)ive Bayes-based probabilistic thermal comfort models to inform the HVAC temperature setpoint, resulting in a 25% average increase in occupant satisfaction and a 2.1% increase in energy savings compared to a fixed temperature setpoint of 22.5degC. Despite the benefits of integrating personal thermal comfort models with HVAC systems to achieve OCC, the real-world practicality of such implementations remains limited as these personalised comfort models require large amounts of occupants' preference data to achieve accurate model performances. This preference data is usually collected using various surveying tools, such as online surveys and wearables, which are often highly intrusive and labour-intensive, especially when conducting large-scale data collection studies [1]. 
The challenge related to data collection cost has been an active research topic, with Active Learning (AL) gaining popularity due to its performance and generalisability across various fields. AL is a branch in machine learning that uses an algorithmic approach to identify the most informative instances for human annotation to achieve the desired model performance while minimising annotation costs. A recent study shown that AL can reduce the user labelling effort by up to 46% for personal thermal comfort models [4], demonstrating the potential benefits of AL in this domain. In this study, we developed a thermal preference-based HVAC control framework enhanced with AL to address the data challenges related to real-world implementations of such OCC systems. The feasibility of our proposed AL-enabled OCC was demonstrated in an EnergyPlus simulation of a real-world testbed, supplemented with the thermal preference data of 58 study occupants. The implications of this work can increase the feasibility of future real-world implementations, enabling OCC and energy-efficient building operations. ## 2 Methodology ### Data Collection Setup and Processing The dataset used in this study was collected from a \(50m^{2}\) testbed constructed in the Building and Construction Authority Academy building in Singapore. The testbed, which represents a single thermal zone, experiences a tropical climate and is conditioned through a variable air volume (VAV) system and two ceiling fans with four levels of speed control. The data was collected over ten consecutive working days involving 58 study participants (29 males, 29 females) between the ages of 21 and 60, and had lived in tropical climates for at least the past three years. Each user is randomly assigned into groups of 5 to 6, where the thermal preference data for each group is collected under different indoor conditions over one working day. Throughout the experiment, indoor air temperature (24C to 28C) and air speed (0.1 m/s to 0.8 m/s) were changed once every 30 minutes in a randomised order and indoor conditions were continuously monitored through various sensors. These include indoor air temperature, globe temperature, relative humidity, air speed, total volatile organic compounds, carbon dioxide, and fine particulate matter. The outdoor weather conditions were also measured through a nearby weather station, consisting of outdoor air temperature, outdoor relative humidity, atmospheric pressure, and fine particulate matter. Every 30 minutes, participants were asked to provide feedback about their thermal preferences through a combination of wearables and online surveys. More specifically, a short comfort survey was disseminated to the participants' Apple watches at the 5\({}^{th}\) and 15\({}^{th}\)-minute mark, while an online survey was conducted at the 25\({}^{th}\)-minute mark to capture the participants' thermal preferences (i.e., Cooler, No Change, Warmer). This resulted in 1,563 valid instances collected. A detailed description of the experiment design is provided in [4]. Based on the thermal comfort dataset collected from the testbed, an Extreme Gradient Boosting (XGB)-based feature selection step was performed to identify the most important features for thermal comfort modelling. This step allows us to reduce the model's likelihood of overfitting and decrease the model's training time. 
The feature selection approach adopted in this work is based on a previous study [5], which objectively evaluates each feature's usefulness based on its feature ranking and importance score for thermal comfort modelling. Each feature ranking is evaluated using the recursive feature elimination (RFECV) approach, which is a top-down elimination method that recursively eliminates the least relevant features that do not significantly contribute to the model's predictive performance. The approach includes a cross-validation step that splits the dataset into k-folds before obtaining the final rankings for each feature by averaging their rankings across each fold to increase the approach's robustness. The second step in the framework involved generating the feature importance scores using a tree-based ensemble model. The importance score for each feature was calculated by weighting the decrease in impurity achieved at each attribute split point against the number of samples affected by the split. Finally, by sorting the features based on their feature rankings followed by their feature importance scores, the top five most important features for thermal comfort modelling are indoor temperature, air speed, outdoor temperature, outdoor humidity, and the unique identifier assigned to each participant. ### Overview of the Proposed Framework This study examines two control strategies: 1) AL-enabled OCC with partial preference information and 2) Conventional OCC with complete preference information. The distinction between both strategies lies in the labelling process, where the former strategy uses AL to label the most informative instances while the latter strategy requires all sampled instances to be labelled to generate the individual thermal comfort profiles. Figure 1 presents an overview of the proposed AL-enabled OCC (left) and conventional OCC (right), consisting of six key steps: 1. At the beginning of every control time step, sample the thermal preference data of six unique occupants from the thermal comfort dataset based on the current indoor conditions. 2. Apply AL on the sampled thermal preference data and determine which instances should be labelled based on their informativeness towards predicting the occupants' thermal comfort. This step only applies to the AL-enabled OCC strategy. 3. Update personal comfort model based on labelled data collected till this control timestep. 4. Predict the occupants' thermal preferences across different indoor temperatures and real-time outdoor weather data to generate their thermal comfort profiles. 5. Update the zone's optimal setpoint based on the group thermal comfort profile obtained by aggregating the occupants' thermal comfort profiles. 6. Run the selected control action in the simulation environment to generate the indoor conditions for the following control timestep. Figure 1: Graphical overview of the proposed OCC strategies. ### Personal Comfort Model Development with Active Learning At each control timestep, AL is applied to the sampled instances from the thermal comfort dataset to identify the most informative instances and query the respective occupants' thermal preferences. The instances are sampled based on their closeness to the current indoor condition to accurately represent the comfort survey responses of the occupants. The AL algorithm selected for this study is based on the Query-by-Committee Sampling (QBC) algorithm, which maintains a committee of model classifiers trained on different subsets of the labelled instances. 
By voting on the predicted labels of the sampled instances, the instances that resulted in the greatest disagreement among the committee are deemed more informative and selected for human annotation. After the occupants have labelled the informative instances, the newly labelled instances are subsequently used to update the personalised thermal comfort model using a supervised learning approach. The personalised thermal comfort model was developed using an ensemble tree-based model based on the XGB algorithm [6]. The algorithm uses an iterative functional gradient descent method to minimise the loss function in the direction of steepest descent by iteratively introducing a weak classifier in a forward stage-wise fashion to overcome the errors made by previous models [7].
### Thermal Comfort Profiles
The resulting model is used to generate the occupants' personalised thermal comfort profiles, which are subsequently aggregated across all occupants to identify the optimal temperature setpoint for the current control timestep. This is achieved by passing into the model information about the occupant's unique ID, the current air speed, outdoor temperature, relative humidity, and a range of indoor temperatures (i.e., 24.5 °C to 28 °C) to generate the probabilistic distribution of each occupant's preferences (i.e., Cooler, No Change, Warmer) over the range of indoor temperatures. Based on each occupant's thermal comfort profile, their comfort temperatures are determined by the temperature ranges where the probability of feeling comfortable (i.e., No Change) is greater than the probability of preferring a cooler or warmer condition (i.e., Warmer or Cooler). Finally, the optimal setpoint is identified by aggregating the personalised comfort temperatures of all occupants and selecting the highest comfort temperature with the greatest agreement.
### Virtual Environment Setup
An EnergyPlus model of the testbed with evidence-based calibration serves as the virtual environment, which has a 7.135% Coefficient of Variation of the Root Mean Square Error (CV(RMSE)) and complies with ASHRAE Guideline 14 [8]. Singapore's International Weather for Energy Calculation (IWEC) file is also used for annual building energy simulation. At each control timestep, the temperature setpoint of the zone is set to the optimal setpoint determined by the proposed OCC strategies. Subsequently, the indoor conditions are updated to initiate the next control loop. Additionally, the temperature setpoint at the first control timestep was set at 24 °C, reflecting Singapore's typical office operating conditions.
## 3 Results and Discussion
### Comparison of Control Strategies
Figure 2 presents a comparison of the control actions derived from both AL-enabled OCC and conventional OCC strategies. During the initial week of implementation, notable fluctuations in the optimal temperature setpoints were observed, ranging from 24.5 °C to 28 °C. This behaviour can be attributed to the limited training data collected in a mild transient condition when training the thermal comfort model at the onset of control implementation. From the third week onwards, the fluctuations in the temperature setpoints decreased substantially, as the thermal comfort profiles reached stability over time, representing steady-state thermal comfort. Nevertheless, the conventional OCC strategy generally opted for a lower temperature setpoint compared to the AL-enabled OCC strategy.
This discrepancy is primarily due to the different amounts of thermal preference instances used from the database, despite employing identical methods for developing the thermal comfort profiles. Lastly, around the 37th to 38th day (02-06 to 02-07) of control implementation, the temperature setpoints from both strategies converged to the same value of 27.9 °C. This outcome demonstrates that both strategies had, by then, acquired all the necessary thermal comfort information for optimal control. Interestingly, both control strategies stabilised almost on the same date, suggesting that additional thermal comfort information may not necessarily result in faster convergence in control decisions.
### Control Performance Evaluation
The control performance of the proposed AL-enabled OCC is evaluated against conventional OCC by taking into account both building cooling energy reduction and thermal comfort improvement. Moreover, given the availability of elevated air movement provided by ceiling fans, we also included a baseline with a fixed temperature setpoint of 27 °C and an air speed of 0.94 m/s. While an air speed of 0.8 m/s is often used as a baseline in accordance with ASHRAE 55 [9], the chosen baseline in this study represents the highest thermal comfort attainable using a rule-based control strategy, as demonstrated in a previous experimental study conducted in the same study area in Singapore [10].
#### 3.2.1 Cooling Energy Reduction
Figure 3 illustrates the weekly cooling energy consumption for the first eight weeks of control implementation, which consists of district cooling energy, air handling unit (AHU) fan energy, and chilled water pump energy. Given the strong correlation between energy consumption and outdoor weather conditions, the corresponding weekly outdoor temperature and relative humidity are also presented in Figure 3. Both OCC strategies converged around 02-06 to 02-07, resulting in identical cooling energy consumption from that point onwards. According to the annual energy simulation results, the AL-enabled OCC and conventional OCC achieved 4.6% and 3.5% energy reductions over the baseline, respectively. Furthermore, it can be observed from Figure 3 that the AL-enabled OCC consistently reported lower weekly energy consumption compared to conventional OCC during the initial weeks of control implementation prior to convergence due to the selection of higher temperature setpoints.
#### 3.2.2 Thermal Comfort Acceptability
The thermal comfort acceptability score is calculated at each control timestep for both OCC strategies, representing the percentage of occupants who are more likely to be comfortable at the selected temperature setpoint. By averaging the acceptability scores before model convergence, the results for both OCC strategies are similar, with the AL-enabled OCC and conventional OCC reporting an average acceptability score of 98.3% and 99.5%, respectively.

Figure 2: Illustration of the selected control actions for both OCC strategies during the first week (left), third week (middle), and fifth week (right) of the control implementation.

### Reduction in Labelling Effort
Lastly, by tracking the model performance of the personal thermal comfort model as more instances are identified and labelled through the AL algorithm, the model was able to match the performance of a fully supervised model with a labelling effort of 69.0%, thereby achieving a 31% reduction in labelling effort.
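To make the control loop of Figure 1 concrete, the sketch below is a rough, assumed implementation of steps 2-5: a vote-entropy committee query step (QBC) followed by the profile-aggregation rule described under Thermal Comfort Profiles. The committee models, thresholds, feature layout, and candidate temperature grid are illustrative placeholders rather than the study's exact implementation.

```python
import numpy as np

TEMPS = np.arange(24.5, 28.01, 0.5)            # candidate setpoints (assumed grid, °C)
N_CLASSES = 3                                   # 0: Cooler, 1: No Change, 2: Warmer

def query_by_committee(committee, X_pool, n_queries=2):
    """Step 2: pick the pool instances the committee disagrees about most (vote entropy).
    `committee` is any list of fitted classifiers exposing .predict()."""
    votes = np.stack([m.predict(X_pool) for m in committee])   # (n_models, n_pool)
    disagreement = []
    for col in votes.T:
        counts = np.bincount(col, minlength=N_CLASSES) / len(committee)
        nz = counts[counts > 0]
        disagreement.append(-(nz * np.log2(nz)).sum())
    return np.argsort(disagreement)[-n_queries:]               # most informative rows

def comfort_profile(model, occupant_row):
    """Step 4: predicted preference probabilities across the candidate temperatures.
    Assumes feature 0 of `occupant_row` is the indoor temperature."""
    grid = np.tile(occupant_row, (len(TEMPS), 1)).astype(float)
    grid[:, 0] = TEMPS
    return model.predict_proba(grid)                           # (n_temps, 3)

def optimal_setpoint(profiles):
    """Step 5: highest temperature at which the most occupants are predicted comfortable."""
    comfortable = [(p[:, 1] >= p.max(axis=1)) for p in profiles]   # "No Change" is the mode
    agreement = np.sum(comfortable, axis=0)
    best = np.flatnonzero(agreement == agreement.max())
    return TEMPS[best[-1]]                                      # break ties towards warmer
```

In use, `query_by_committee` would be skipped under the conventional OCC strategy (all sampled instances labelled), while the AL-enabled strategy only labels the returned indices before refitting the personal comfort model (step 3) and calling `comfort_profile` / `optimal_setpoint`.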
## 4 Conclusion In this study, we have demonstrated the feasibility of enhancing thermal preference-based HVAC control with AL to address the data challenges related to real-world implementations of such systems. The results of this study have shown that our AL-enabled OCC strategy significantly reduced the overall labelling effort by 31.0% while achieving a 1.3% increase in energy savings and a high thermal satisfaction score of 98.3% when compared to conventional OCC. Future directions of this work will include the integration of more advanced HVAC control algorithms [11] and personalised air movement preference models to support mixed-mode OCC implementations. ## Acknowledgements This study is supported by the National Research Foundation, Singapore, and the Ministry of National Development, Singapore, under its Cities of Tomorrow R&D Programme (CoT Award COT-V4-2020-5) and Johnson Controls (grant number A-8001200-00-00).
2310.04434
Impact of global monopole on heavy mesons in hot-dense medium
This research study is primarily focused on investigating how the topological effects influence the eigenvalue solutions in the presence of a hot-dense medium. To accomplish this, we employ the non-relativistic Schr\"odinger wave equation, taking into consideration both the quantum flux field and an interaction potential. Through this approach, we determine the energy eigenvalues and their corresponding wave functions using the Nikiforov-Uvarov method. Our findings indicate that when we consider both the topological effects and the magnetic flux, $\Phi$, there is a noticeable reduction in the binding energy within the hot-dense medium. Additionally, we analyze the role of the baryonic potential in shaping the binding energy within the $(T, u_b)$ plane. Interestingly, it is evident that the influence of the baryonic potential becomes more pronounced as its values decrease.
M. Abu-Shady, Faizuddin Ahmed
2023-09-29T11:16:00Z
http://arxiv.org/abs/2310.04434v2
###### Abstract
In this research study, the focus is on exploring the influence of topological effects in the presence of a hot-dense medium. To achieve this, we solve the non-relativistic Schrodinger wave equation while considering the quantum flux field and its interaction potential. By doing so, we are able to obtain the energy eigenvalues and corresponding wave functions by using the Nikiforov-Uvarov method. The findings reveal that when taking into account both topological effects and the magnetic flux \(\Phi\), there is a reduction in the binding energy in the hot-dense medium. Furthermore, we examine the role of the baryonic potential on the binding energy in the \((T,u_{b})\) plane. It is observed that the effect of the baryonic potential is more pronounced when its values are smaller.

**Impact of Global Monopoles on Heavy Mesons in a Hot-Dense Medium**

**M. Abu-Shady\({}^{\copyright}\)1** Footnote 1: **[email protected]** Department of Mathematics and Computer Science, Faculty of Science, Menoufia University, Shbien El-Kom, Egypt

**Faizuddin Ahmed\({}^{\copyright}\)2** Footnote 2: **[email protected]; [email protected]** Department of Physics, University of Science & Technology Meghalaya, Ri-Bhoi, 793101, India

**Keywords**: Topological effects; Schrodinger equation; Nikiforov-Uvarov method; Finite temperature; Baryonic chemical potential

## 1 Introduction
The investigation of strongly interacting matter in extreme conditions has become a subject of great interest due to its relevance to particle physics and astrophysics. One specific area of importance is studying how the properties of hadrons, such as their masses, magnetic moments, and decay constants, can be altered when they propagate through a hot medium. Understanding the behavior of quarks and gluons in this hot medium, known as quark-gluon plasma (QGP), requires a thorough examination of hadron properties at finite temperature and density. The exploration of this phase, the QGP, is being conducted in experiments at RHIC (Brookhaven National Laboratory) and CERN, where there is substantial evidence supporting its existence [1]. Numerous studies have been conducted on this topic, employing both relativistic and non-relativistic quark models as detailed in references [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. In Ref. [2], the analytical solution of the N-radial Schrodinger equation is obtained by extending the Cornell potential to finite temperature. This study aims to investigate the behavior of charmonium and bottomonium masses at finite temperature. In Ref. [3], the dissociation of quarkonia (bound states of quarks and antiquarks) in a thermal quantum chromodynamics medium is studied by using the conformable fractional Nikiforov-Uvarov (CF-NU) method. This study aims to understand how the thermal environment affects the stability of quarkonia states. Ref. [4] focuses on the thermodynamic properties of heavy mesons. These properties are calculated using the N-dimensional radial Schrodinger equation. Ref. [5] presents an analytical study of the N-radial Schrodinger equation using the supersymmetric quantum mechanics method. In this study, the heavy-quarkonia potential is introduced at finite temperature and baryon chemical potential to investigate its effects on the system. Moreover, the dissociation of quarkonia is investigated in an anisotropic plasma within hot and dense media in Refs. [6, 7, 8].
These studies aim to understand how the anisotropic nature of the plasma influences the behavior of quarkonia. The quark sigma model, a relativistic quark model, has emerged as a valuable tool in comprehending strong nuclear interactions [9, 10]. Within this model, the phenomenon of spontaneous chiral symmetry breaking and its restoration at higher temperatures are demonstrated. Numerous researchers have explored the Hartree approximation of the linear sigma model employing two or four quark flavors, investigating its behavior at different temperature regimes [1, 11, 12, 13, 14, 15]. Furthermore, several studies have successfully applied the quark sigma model to characterize both static and dynamic baryons at various temperatures and densities, as documented in references [16, 17, 18]. This demonstrates the model's versatility in describing the properties of baryons under diverse thermodynamic conditions. Topological defects are fascinating and exotic objects that are believed to have formed during phase transitions in the early universe. They are intriguing phenomena that arise in various physical systems and have been extensively studied in the context of quantum mechanics, atomic and molecular physics, and condensed matter physics. The different types of topological defects known in the literature include cosmic strings, domain walls, global monopoles, and textures, among others. Each of these defects exhibits unique properties and characteristics that make them significant in understanding fundamental aspects of physics. Cosmic strings are long, thin, and stable one-dimensional defects that are hypothesized to have formed when the universe underwent a phase transition. These strings are thought to play a crucial role in the large-scale structure of the cosmos, influencing the distribution of matter and galaxies. Global monopoles are three-dimensional spherical objects that are the result of spontaneous symmetry breaking in certain grand unified theories. They have interesting properties related to their mass and interaction, and they can influence the evolution of the universe on cosmological scales. The study of topological defects has led to deep insights into various aspects of physics and has provided valuable connections between different fields. Their presence and behavior have implications for the early universe, as well as condensed matter systems and atomic and molecular interactions. In this study, our objective is to investigate the influence of topological effects and magnetic flux in a hot-dense medium on heavy mesons. To achieve this, we solve the radial Schrodinger equation using the Nikiforov-Uvarov method [19], obtaining the energy eigenvalues and corresponding wave functions. To the best of our knowledge, the impact of topological effects on heavy mesons in a hot-dense medium has not been adequately considered in previous research, making this study a significant contribution to the field. The paper is organized as follows: In the introduction, the context and motivation of the study are presented. Section 2 elaborates on the Nikiforov-Uvarov method, providing readers with a clear understanding of the mathematical approach used in the research. The subsequent Section 3 outlines the details of the energy eigenvalues and wave functions calculation. Section 4 engages in a comprehensive discussion of the obtained results, interpreting and analyzing the implications of the topological effects and magnetic flux in a hot-dense medium on heavy mesons. 
Finally, Section 5 summarizes the key findings and presents concluding remarks, underscoring the significance of the study and potential avenues for further research.
## 2 Theoretical Description of the Nikiforov-Uvarov (NU) Method
In this section, we present a concise overview of the NU method [19], which serves as a valuable tool for solving second-order differential equations of the form \[\Psi^{\prime\prime}(s)+\frac{\bar{\tau}(s)}{\sigma(s)}\Psi^{\prime}(s)+\frac{\tilde{\sigma}(s)}{\sigma^{2}(s)}\Psi(s)=0, \tag{1}\] where \(\sigma(s)\) and \(\tilde{\sigma}(s)\) are polynomials of at most second degree and \(\bar{\tau}(s)\) is a polynomial of at most first degree, with an appropriate \(s=s(r)\) coordinate transformation. To find a particular solution of Eq. (1) by separation of variables, one uses the transformation \[\Psi(s)=\Phi(s)\chi(s), \tag{2}\] which reduces it to an equation of hypergeometric type as follows \[\sigma(s)\chi^{\prime\prime}(s)+\tau(s)\chi^{\prime}(s)+\lambda\chi(s)=0, \tag{3}\] where \[\sigma(s)=\pi(s)\frac{\Phi(s)}{\Phi^{\prime}(s)}, \tag{4}\] \[\tau(s)=\bar{\tau}(s)+2\pi(s);\quad\tau^{\prime}(s)<0, \tag{5}\] and \[\lambda=\lambda_{n}=-n\tau^{\prime}(s)-\frac{n(n-1)}{2}\sigma^{\prime\prime}(s),\quad n=0,1,2,... \tag{6}\] Here \(\chi(s)=\chi_{n}(s)\) is a polynomial of degree \(n\) which satisfies the hypergeometric equation and takes the Rodrigues form \[\chi_{n}(s)=\frac{B_{n}}{\rho(s)}\frac{d^{n}}{ds^{n}}\left(\sigma^{n}(s)\rho(s)\right), \tag{7}\] where \(B_{n}\) is a normalization constant and \(\rho(s)\) is a weight function which satisfies the following equation \[\frac{d}{ds}\omega(s)=\frac{\tau(s)}{\sigma(s)}\omega(s);\quad\omega(s)=\sigma(s)\rho(s), \tag{8}\] \[\pi(s)=\frac{\sigma^{\prime}(s)-\bar{\tau}(s)}{2}\pm\sqrt{(\frac{\sigma^{\prime}(s)-\bar{\tau}(s)}{2})^{2}-\tilde{\sigma}(s)+K\sigma(s)}, \tag{9}\] \[\lambda=K+\pi^{\prime}(s), \tag{10}\] where \(\pi(s)\) is a polynomial of first degree. The value of \(K\) in the square root of Eq. (9) can be calculated by requiring the expression under the square root to be the square of a polynomial, which is possible if its discriminant is zero (for details, see Ref. [19]).
## 3 The Schrodinger Equation in a Point-like Global Monopole Background with Potential Interaction
In this section, we delve into the analysis of the eigenvalues for non-relativistic particles under the influence of a quantum flux field, taking into account the presence of a point-like global monopole with a potential. We employ the NU method to solve the radial equation, enabling us to obtain the necessary solutions. Furthermore, our investigation involves a comprehensive examination of the influence of several critical factors, such as the topological defect and the magnetic flux, particularly in the context of a hot and/or dense medium. By considering these factors, we aim to gain valuable insights into the behavior and properties of the system, shedding light on the intricate interplay between quantum flux, topological defects, and thermodynamic conditions. For a detailed explanation of the two-particle system interacting through an electromagnetic spherically symmetric potential \(V(r)\), see Ref.
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29] \[\left[\frac{d^{2}}{dr^{2}}+\frac{1}{\alpha^{2}}\Big{\{}2\,\mu\left(E-V(r)\right)-\frac{\ell^{\prime}(\ell^{\prime}+1)}{r^{2}}\Big{\}}\right]\Psi(r)=0, \tag{11}\] where \(\ell^{\prime}=\left(\left|m-\Phi\right|+\kappa\right)\) with \(\kappa=0,1,2,3,...\), \(m\) is the magnetic quantum number, \(\ell\) is the angular momentum quantum number, \(\mu\) is the reduced mass of the quarkonium system (for charmonium \(\mu=\frac{m_{c}}{2}\) and for bottomonium \(\mu=\frac{m_{b}}{2}\)), \(0<\alpha<1\) characterises the topological defect parameter of the point-like global monopole, and \(\Phi\) is the amount of magnetic flux, which is a positive real number. At finite temperature, the potential interaction can be given as follows [30] \[V(r)=a\left(T,r\right)\,r-\frac{b\left(T,r\right)}{r}, \tag{12}\] where \(a\left(T,r\right)=\frac{a}{m_{D}\left(T\right)r}\left(1-e^{-m_{D}\,r}\right)\) and \(b\left(T,r\right)=b\,e^{-m_{D}\,r}\), where \(m_{D}\left(T,u_{q}\right)\) is the Debye mass that vanishes at \(T\to 0\), and \(a\) and \(b\) are arbitrary constants that will be determined later (for details, see Ref. [30]). By substituting Eq. (12) into Eq. (11) and using the approximation \(e^{-m_{D}\left(T\right)r}=\sum\limits_{j=0}^{\infty}\frac{\left(-m_{D}\left(T\right)r\right)^{j}}{j!}\) up to second order, which gives good accuracy when \(m_{D}r\ll 1\), we obtain \[\left[\frac{d^{2}}{dr^{2}}+2\mu^{\prime}(E-A+\frac{b}{r}-Cr+Dr^{2}-\frac{\ell^{\prime}(\ell^{\prime}+1)}{2\,\mu\,r^{2}})\right]R(r)=0, \tag{13}\] where \(A=b\,m_{D}\left(T\right)\), \(C=a-\frac{1}{2}bm_{D}^{2}\left(T\right)\), \(D=\frac{1}{2}a\,m_{D}\left(T\right)\), and \(\mu^{\prime}=\frac{\mu}{\alpha}\). By taking \(r=\frac{1}{x}\), Eq. (13) takes the following form \[\left[\frac{d^{2}}{dx^{2}}+\frac{2}{x}\frac{d}{dx}+\frac{2\,\mu^{\prime}}{x^{4}}\Big{(}E-A+bx-\frac{C}{x}+\frac{D}{x^{2}}-\frac{\ell^{\prime}(\ell^{\prime}+1)}{2\mu}x^{2}\Big{)}\right]R(x)=0. \tag{14}\] The scheme is based on the expansion of \(\frac{C}{x}\) and \(\frac{D}{x^{2}}\) in a power series around the characteristic radius \(r_{0}\) of the meson up to second order. Setting \(y=x-\delta\), where \(\delta=\frac{1}{r_{0}}\), we thus expand \(\frac{C}{x}\) and \(\frac{D}{x^{2}}\) into a series of powers around \(y=0\): \[\frac{C}{x} = \frac{C}{y+\delta}=\frac{C}{\delta}(1+\frac{y}{\delta})^{-1} \tag{15}\] \[= \frac{C}{\delta}(1-\frac{y}{\delta}+\frac{y^{2}}{\delta^{2}}),\] \[= C(\frac{3}{\delta}-\frac{3x}{\delta^{2}}+\frac{x^{2}}{\delta^{3}}).\] Similarly, \[\frac{D}{x^{2}}=D(\frac{6}{\delta^{2}}-\frac{8x}{\delta^{3}}+\frac{3x^{2}}{\delta^{4}}). \tag{16}\] By substituting Eqs. (15) and (16) into Eq. (14), it takes the following form \[\left[\frac{d^{2}}{dx^{2}}+\frac{2}{x}\frac{d}{dx}+\frac{2\mu^{\prime}}{x^{4}}(-A_{1}+A_{2}x-A_{3}x^{2})\right]R(x)=0, \tag{17}\] where \(A_{1}=-(E-A-\frac{3C}{\delta}+\frac{6D}{\delta^{2}})\), \(A_{2}=(\frac{3C}{\delta^{2}}-\frac{8D}{\delta^{3}}+b)\), and \(A_{3}=(\frac{C}{\delta^{3}}-\frac{3D}{\delta^{4}}+\frac{\ell^{\prime}\,(\ell^{\prime}+1)}{2\mu})\). The \(\frac{1}{x}\) expansion gives good accuracy when \(x\) is close to \(\delta\). By comparing Eq. (17) and Eq. (1), we find \(\bar{\tau}(s)=2x\), \(\sigma(s)=x^{2}\), and \(\tilde{\sigma}(s)=2\mu^{\prime}(-A_{1}+A_{2}x-A_{3}x^{2})\). Hence, Eq. (17) satisfies the conditions in Eq. (1). By following the NU method described in Sec. 2, we therefore obtain \[\pi=\pm\sqrt{\left(K+2A_{3}\right)x^{2}-2A_{2}x+2A_{1}}.
\tag{18}\] The constant \(K\) is chosen such that the function under the square root has a double zero, i.e. its discriminant \(\Delta=4A_{2}^{2}-8A_{1}\left(K+2A_{3}\right)=0\). Hence, \[\pi=\pm\frac{1}{\sqrt{2A_{1}}}\left(2A_{1}-A_{2}x\right). \tag{19}\] Thus, \[\tau=2x\pm\frac{1}{\sqrt{2A_{1}}}\left(2A_{1}-A_{2}x\right). \tag{20}\] For bound state solutions, we choose the positive sign in the above equation so that the derivative \[\tau^{\prime}=2-\frac{2A_{2}}{\sqrt{2A_{1}}}. \tag{21}\] By using Eq. (10), we obtain \[\lambda=\frac{A_{2}^{2}}{2A_{1}}-2A_{3}-\frac{A_{2}}{\sqrt{2A_{1}}}, \tag{22}\] and from Eq. (6), we obtain \[\lambda_{n}=-n\left(2-\frac{2A_{2}}{\sqrt{2A_{1}}}\right)-n(n-1). \tag{23}\] Setting \(\lambda=\lambda_{n}\), the energy eigenvalues of Eq. (13) in the hot-dense medium are given by \[E_{n\,\ell}^{N}=A+\frac{3C}{\delta}-\frac{6D}{\delta^{2}}-\frac{2\mu^{\prime}(\frac{3C}{\delta^{2}}+b-\frac{8D}{\delta^{3}})^{2}}{\left[(2n+1)\pm\sqrt{1+\frac{8\mu^{\prime}C}{\delta^{3}}+\frac{4}{\alpha}\,\ell^{\prime}(\ell^{\prime}+1)-\frac{24\mu^{\prime}D}{\delta^{4}}}\right]^{2}}. \tag{24}\] The radial wave function of Eq. (13) takes the following form \[R_{n\,\ell}\,(r)=C_{n\,\ell}\ r^{-\frac{A_{2}}{\sqrt{2A_{1}}}-1}e^{\sqrt{2A_{1}}\,r}(-r^{2}\frac{d}{d\,r})^{n}(r^{-2n+\frac{A_{2}}{\sqrt{2A_{1}}}}e^{-2\sqrt{2A_{1}}r}). \tag{25}\] Here \(C_{n\ell}\) is the normalization constant that is determined by \(\int\left|R_{n\ell}\left(r\right)\right|^{2}dr=1\).
## 4 Discussion of Results
In this section, we calculate the spectra of heavy quarkonium systems such as bottomonium mesons in the hot and dense medium. The mass of quarkonium is calculated in the 3-dimensional space. We apply the following relation as in Ref. [2] \[M=2\,m+E_{n\ell}, \tag{26}\] where \(m\) is the quarkonium bare mass for the charmonium or bottomonium mesons. By using Eq. (24), we write Eq. (26) as follows: \[M=2m+A+\frac{3C}{\delta}-\frac{6D}{\delta^{2}}-\frac{2\mu^{\prime}(\frac{3C}{\delta^{2}}+b-\frac{8D}{\delta^{3}})^{2}}{\left[(2n+1)\pm\sqrt{1+\frac{8\mu^{\prime}C}{\delta^{3}}+\frac{4}{\alpha}\ell^{\prime}(\ell^{\prime}+1)-\frac{24\mu^{\prime}D}{\delta^{4}}}\right]^{2}} \tag{27}\] Eq. (27) represents the quarkonium masses in a hot and dense medium with topological effects and magnetic flux. By taking \(\alpha=1\) and \(\Phi=0\), we obtain \[M=2m+A+\frac{3C}{\delta}-\frac{6D}{\delta^{2}}-\frac{2\mu(\frac{3C}{\delta^{2}}+b-\frac{8D}{\delta^{3}})^{2}}{\left[(2n+1)\pm\sqrt{1+\frac{8\mu C}{\delta^{3}}+4\ell(\ell+1)-\frac{24\mu D}{\delta^{4}}}\right]^{2}} \tag{28}\] Eq. (28) coincides with the result obtained in Ref. [2]. We can obtain the quarkonium masses in the classical case by taking \(T=0\), which leads to \(A=D=0\) and \(C=a\), together with \(\alpha=1\) and \(\Phi=0\). Therefore, Eq. (27) takes the following form \[M=2m+\frac{3a}{\delta}-\frac{2\mu(\frac{3a}{\delta^{2}}+b)^{2}}{\left[(2n+1)\pm\sqrt{1+\frac{8\mu a}{\delta^{3}}+4\ell(\ell+1)}\right]^{2}}. \tag{29}\] Eq. (29) coincides with the result obtained in Ref. [31]. In the present work, the Debye mass \(m_{D}(T,\mu_{b})\) is given in Refs. [32, 33] \[m_{D}(T,\mu_{b})=gT\sqrt{\frac{N_{c}}{3}+\frac{N_{f}}{6}+\frac{N_{f}}{2\pi^{2}}\left(\frac{\mu_{q}}{T}\right)^{2}}, \tag{30}\] where \(g\) is the coupling constant as defined in Ref. [34], \(\mu_{q}\) is the quark chemical potential \(\left(\mu_{q}=\frac{\mu_{b}}{3}\right)\), \(N_{f}\) is the number of flavours, and \(N_{c}\) is the number of colors.
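For readers who wish to reproduce the qualitative trends discussed below, the following is a minimal numerical sketch that simply evaluates Eq. (27) (with \(A_{2}=3C/\delta^{2}-8D/\delta^{3}+b\)) together with the Debye mass expression above. All parameter values (the constants \(a\), \(b\), \(\delta\), the bare mass, \(g\), \(N_{c}\), \(N_{f}\)) are placeholders to be replaced by the fitted values of the cited works, and the "+" branch of the \(\pm\) sign is assumed.

```python
import numpy as np

def debye_mass(T, mu_b, g=0.5, Nc=3, Nf=3):
    """Debye mass at temperature T and baryonic chemical potential mu_b (mu_q = mu_b / 3)."""
    mu_q = mu_b / 3.0
    return g * T * np.sqrt(Nc / 3 + Nf / 6 + (Nf / (2 * np.pi**2)) * (mu_q / T) ** 2)

def quarkonium_mass(n, ell_prime, T, mu_b, m=4.823, a=0.34, b=0.615,
                    delta=1.0, alpha=1.0):
    """Evaluate Eq. (27) with the '+' branch; every numerical default is an assumption."""
    mu = m / 2.0               # reduced mass of the quark-antiquark pair
    mu_p = mu / alpha          # mu' as defined after Eq. (13)
    mD = debye_mass(T, mu_b)
    A = b * mD
    C = a - 0.5 * b * mD**2
    D = 0.5 * a * mD
    numerator = 2 * mu_p * (3 * C / delta**2 + b - 8 * D / delta**3) ** 2
    root = np.sqrt(1 + 8 * mu_p * C / delta**3
                   + (4 / alpha) * ell_prime * (ell_prime + 1)
                   - 24 * mu_p * D / delta**4)
    return 2 * m + A + 3 * C / delta - 6 * D / delta**2 - numerator / ((2 * n + 1) + root) ** 2

# The corresponding energy eigenvalue of Eq. (24) is recovered as E = M - 2*m (Eq. (26)).
```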
For the bottomonium meson, the binding energy is plotted as a function of the temperature ratio \((\frac{T}{T_{c}})\), where \(T_{c}\) is the critical temperature, in Fig. (1). This plot is for the case in which the baryonic chemical potential is not considered. It is observed that the binding energy decreases as the temperature increases. This behavior is similar for different values of the magnetic flux (\(\Phi\)). When the magnetic flux (\(\Phi\)) is decreased, the curves on the plot shift to higher values. Furthermore, the effect of temperature is more significant at the critical temperature (\(T_{c}=0.17\) GeV). As the temperature increases beyond this critical temperature, the effect of magnetic flux (\(\Phi\)) becomes less pronounced, as shown in Fig. 2. The authors of Ref. [2] considered the topological effects without considering the hot-dense medium. They found that the potential energy is shifted to higher values with increasing values of magnetic flux (\(\Phi\)) or the parameter \(\alpha\), which is consistent with our findings. Fig. 3 shows the binding energy plotted for different values of \(\alpha\) without considering the baryonic chemical potential (\(u_{b}=0\)). It is noted that the binding energy reaches its maximum when the topological effects are ignored at \(\alpha=1\). Additionally, the effect of topological effects on the binding energy is observed to be dependent on the value of \(\alpha\). The curves in Fig. 3 are shifted from each other when topological effects are considered at \(\alpha=0.25\). Overall, these findings suggest that the topological effects, magnetic flux (\(\Phi\)), and parameter \(\alpha\) have significant impacts on the binding energy of the bottomonium meson in different temperature regimes, as illustrated above. In Fig. 4, we plotted the binding energy as a function of the temperature ratio and the baryonic chemical potential, in which we study the effect of the dense medium on the binding energy in the hot medium. When the topological effects are ignored at \(\alpha=1\), with magnetic flux \(\Phi=0.25\), we note that the binding energy decreases with increasing temperature at any value of the baryonic chemical potential. Additionally, the binding energy decreases slowly with increasing baryonic chemical potential. Therefore, we deduce that the hot medium has a stronger effect on the binding energy of bottomonium. In Fig. 5, where we consider the topological effects at \(\alpha=0.25\), we obtain a similar behavior to that in Fig. 4, but we note that the binding energy decreases when the topological effects are considered. In Fig. 6, by fixing \(\alpha=1\) and increasing \(\Phi\) to 0.75, we note that the binding energy is slightly smaller in comparison with Fig. 4. The contours in Fig. 7 show that the binding energy of bottomonium at vanishing baryonic chemical potential is greater than the binding energy at higher values of the baryonic chemical potential, where the topological parameter \(\alpha=0.4\) and magnetic flux \(\Phi=0.25\) are taken. By comparing with the contours in Fig. 8, we note that the binding energy increases when the topological effects are ignored (\(\alpha=1.0\)). Additionally, we note a regular change in the binding energy in the \((T,u_{b})\) plane above the critical temperature \(T_{c}\).
## 5 Summary and Conclusion
Our primary objective was to analyze the effects of topological phenomena induced by a point-like global monopole in the presence of a hot-dense medium.
To achieve this, we investigated the Schrodinger wave equation within the context of quantum flux fields, incorporating an interaction potential. Utilizing the parametric Nikiforov-Uvarov method, we successfully obtained the energy eigenvalues and corresponding wave functions. The results revealed that both the topological defect parameter, denoted as \(\alpha\), and the magnetic flux, represented by \(\Phi\), had a significant influence on the eigenvalues when subjected to a hot-dense medium. This observation underscores the importance of considering these factors when studying such systems. Furthermore, we explored the impact of the baryonic potential on the binding energy in the \((T,u_{b})\) plane. Interestingly, we observed that the effect of the baryonic potential was more pronounced when its values were smaller. This finding highlights the sensitivity of the system to changes in the baryonic potential and its potential implications for understanding the system's behavior. Overall, our study provides valuable insights into the intricate interplay of topological effects and magnetic flux within a hot-dense medium. By shedding light on these complex interactions, our findings contribute to a deeper understanding of the system's behavior and pave the way for further research in this fascinating area of study.
2310.20582
The serotonergic psychedelic N,N-dipropyltryptamine alters information-processing dynamics in cortical neural circuits
Most of the recent work in psychedelic neuroscience has been done using non-invasive neuroimaging, with data recorded from the brains of adult volunteers under the influence of a variety of drugs. While this data provides holistic insights into the effects of psychedelics on whole-brain dynamics, the effects of psychedelics on the meso-scale dynamics of cortical circuits remain much less explored. Here, we report the effects of the serotonergic psychedelic N,N-dipropyltryptamine (DPT) on information-processing dynamics in a sample of in vitro organotypic cultures made from rat cortical tissue. Three hours of spontaneous activity were recorded: an hour of pre-drug control, an hour of exposure to a 10$\mu$M DPT solution, and a final hour of washout, once again under control conditions. We found that DPT reversibly alters information dynamics in multiple ways: first, the DPT condition was associated with higher entropy of spontaneous firing activity and reduced the amount of time information was stored in individual neurons. Second, DPT also reduced the reversibility of neural activity, increasing the entropy produced and suggesting a drive away from equilibrium. Third, DPT altered the structure of neuronal circuits, decreasing the overall information flow coming into each neuron, but increasing the number of weak connections, creating a dynamic that combines elements of integration and disintegration. Finally, DPT decreased the higher-order statistical synergy present in sets of three neurons. Collectively, these results paint a complex picture of how psychedelics regulate information processing in meso-scale cortical tissue. Implications for existing hypotheses of psychedelic action, such as the Entropic Brain Hypothesis, are discussed.
Thomas F. Varley, Daniel Havert, Leandro Fosque, Abolfazl Alipour, Naruepon Weerawongphrom, Hiroki Naganobori, Lily O'Shea, Maria Pope, John Beggs
2023-10-31T16:16:03Z
http://arxiv.org/abs/2310.20582v1
The serotonergic psychedelic N,N-dipropyltryptamine alters information-processing dynamics in cortical neural circuits.
###### Abstract
Most of the recent work in psychedelic neuroscience has been done using non-invasive neuroimaging, with data recorded from the brains of adult volunteers under the influence of a variety of drugs. While this data provides holistic insights into the effects of psychedelics on whole-brain dynamics, the effects of psychedelics on the meso-scale dynamics of cortical circuits remain much less explored. Here, we report the effects of the serotonergic psychedelic N,N-dipropyltryptamine (DPT) on information-processing dynamics in a sample of _in vitro_ organotypic cultures made from rat cortical tissue. Three hours of spontaneous activity were recorded: an hour of pre-drug control, an hour of exposure to a \(10\mu\)M DPT solution, and a final hour of washout, once again under control conditions. We found that DPT reversibly alters information dynamics in multiple ways: first, the DPT condition was associated with higher entropy of spontaneous firing activity and reduced the amount of time information was stored in individual neurons. Second, DPT also reduced the reversibility of neural activity, increasing the entropy produced and suggesting a drive away from equilibrium. Third, DPT altered the structure of neuronal circuits, decreasing the overall information flow coming into each neuron, but increasing the number of weak connections, creating a dynamic that combines elements of integration and disintegration. Finally, DPT decreased the higher-order statistical synergy present in sets of three neurons. Collectively, these results paint a complex picture of how psychedelics regulate information processing in meso-scale cortical tissue. Implications for existing hypotheses of psychedelic action, such as the Entropic Brain Hypothesis, are discussed.
## 1 Introduction
Serotonergic psychedelics such as LSD, psilocybin, and mescaline, are known to induce intense, exotic states of consciousness that depart markedly from normal day-to-day patterns of cognition and perception [1]. Since the turn of the century, there has been a resurgence of interest in the scientific exploration of psychedelic states, with a particular focus on using whole-brain neuroimaging technologies to understand the neural correlates of the psychedelic experience. In typical recent studies, adult human volunteers are given a psychedelic, and then brain activity is recorded for analysis, which can then be compared to self-reported phenomenological experiences (such as the experience of ego dissolution [2]), or clinical presentations (such as depression [3]). Human neuroimaging studies have been done using almost every available modality, including fMRI (for a review of existing fMRI datasets, see [4]), EEG (for a partial review of EEG studies, see [5]) and MEG [6]. Collectively, these studies have painted a complex picture of the effects of different psychedelics on whole-brain, macro-scale activity, with one of the most-discussed effects being a general increase in the entropy (or "complexity") of macro-scale brain activity (for review, see [7], although for a recent study into which specific measures of entropy replicate, see [8]).
This apparent link prompted Carhart-Harris and colleagues to propose the so-called "entropic brain hypothesis" (EBH), which posits a link between the information density of spontaneous brain activity and the perceptual richness or lability of conscious experience [9, 10]. There have been far fewer attempts to understand the micro-scale, circuit-level effects of psychedelics. This creates something of a schism in the field of psychedelic science: at the level of individual neurons, ligands, and receptors, the pharmacological properties of psychedelics are well understood [1], and at the level of the entire brain, the effects of psychedelics on brain dynamics are beginning to crystallize as well (increased complexity of spontaneous activity, etc [10, 7]). However, the intermediary circuit-level dynamics induced by psychedelics at the "meso-scale", which presumably form the causal substrate of the high-level dynamical changes, remain largely unexplored. The few studies that have been done in this space have largely focused on single measures, such as firing rate [11, 12], or coherence [13]. Our goal with this study was a more comprehensive analysis of how a serotonergic psychedelic alters the information-processing dynamics of neural circuits. Information dynamics [14] is a branch of information theory concerned with understanding how distributed systems "compute" their trajectories through configuration space over time. Prior work has shown that the information dynamics framework applied to spiking neural activity is powerful enough to reveal meaningful differences in cognitive state and behaviour in awake, behaving animals [15], and has been used to explore the structure and dynamics of organotypic cultures [16, 17, 18, 19, 20, 21]. Here, following [15], we applied the information dynamics framework to spontaneous spiking activity collected from organotypic cultures before, during, and after exposure to the serotonergic psychedelic N,N-dipropyltryptamine (DPT), with the aim of creating a comprehensive portrait of the way that the psychedelic drug alters information dynamics at the circuit level. DPT is a serotonergic psychedelic of the tryptamine class and a close analogue of the more well-known psychedelic, N,N-dimethyltryptamine (DMT, one of the active ingredients in Ayahuasca). DPT has been known to science since the early days of psychedelic research: as early as 1962 it was being explored as a tryptamine analogue of psilocybin [22, 23]. By the 1970s, it had become an object of clinical research, being tested as a treatment for alcoholism [24], and later to test if its mystical experience-producing properties might be of use for terminal cancer patients facing the end of their lives [25]. In the years following the passing of the Controlled Substances Act, scientific and clinical interest in DPT waned; however, it was never criminalized in the United States and it remains unscheduled at the Federal level. Despite its legality, DPT remains much less well-known among the general public than its more famous siblings such as psilocybin, DMT, mescaline, and LSD. A notable exception to this is its use by a religious organization based in New York City, The Temple of the True Inner Light, which uses DPT as a religious sacrament [26].
Despite its somewhat unusual history and status, pharmacological research has shown it to be a standard serotonergic psychedelic of the tryptamine class, with activity mediated by both the 5-HT\({}_{2\text{A}}\) and 5-HT\({}_{1\text{A}}\) receptors, which is typical of the class of drugs in question [27, 28]. Its legal status, and close relationship to more well-known, scheduled drugs, made it an excellent compound for this study.
## 2 Results
### Summary of Methods
Here, we will briefly outline the methods and analyses presented in this paper. For more details, see the Materials & Methods section. To investigate how DPT affects neuronal activity at the meso-scale, we chose to use organotypic cultures of rat somatosensory cortex. Organotypic cultures preserve some of the layered structure that is typical of cortex, yet are compact and easily accessible for fluid changes, as needed in this study.

Figure 1: **Visual explanation of methods.** **A.** The slices are prepared from cortical tissue of Sprague-Dawley rats, sectioned, and cultured _in vitro_ for a period of two weeks. **B.** Following incubation, cultures were recorded for three hours: one hour before drug administration in a control medium (pink), one hour while being exposed to a 10\(\mu\)M solution of N,N-DPT (pink), and finally for one hour under control conditions after washout (blue). Example raster plots showing spikes for each condition are shown.
We considered the Shannon entropy of the spike train (a measure of activity intensity), the active information storage [38, 39] (a measure of temporal autocorrelation), and the entropy production [40, 41] (a measure of how time-reversible the dynamics of the elements are). The second set of measures were second order; describing the interactions between pairs of elements. We considered the multivariate transfer entropy [42, 43, 44], a measure of information flow from a "source" neuron to a "target" neuron, and for each culture, inferred a multivariate transfer entropy network, after [15]. In addition to the amount of information flow between neurons in bits, we also characterized the local topology of the directed networks with the local clustering coefficient [45]. The final set of "higher-order" measures was the statistical synergy between pairs of sources onto a single target (for review, see [21]). This serves as a measure of information modification [46], or non-trivial "computation" in circuits of multiple interacting neurons [47]. Since almost all of the measures returned values spanning multiple orders of magnitude (a typical feature of neural data [48]), we log-transformed the values for statistical analysis. Furthermore, since not every neuron was active in every condition, we filtered the neurons and only included those cells that were active in all three conditions; this ensures the validity of the repeated measures design. Finally, information-theoretic measures (AIS, mTE, synergy) were normalized as described in [21], by dividing the measure by the target entropy, which accounts for the variable firing rates that could confound the data. Collectively, this suite of measures presents a multi-dimensional perspective on how the serotonergic psychedelic N,N-DPT alters computational dynamics in cortical circuits. We have provided a glossary of reference terms at the end of the manuscript (see Sec. 4), and all the measures are detailed more formally in the Materials and Methods. ### First-Order Measures Friedman's \(\chi^{2}\) found a significant difference in the log-transformed Shannon entropy (\(Q\approx 174.89\), \(p\approx 1.06\times 10^{-38}\)). Post-hoc analysis found that the DPT condition had significantly higher log-transformed entropy (\(-2.29\pm 0.72\)) than the both the control condition (\(-2.57\pm 0.99\), \(t\approx-12.92\), \(p\approx 2.06\times 10^{-36}\), Cohen's D=-0.32), and the washout condition (\(-2.58\pm 0.9\), \(t\approx 16.88\), \(p\approx 6.07\times 10^{-59}\), Cohen's D=0.35), but there was no significant difference between the control and washout conditions. This is consistent with whole-brain level findings that serotonergic psychedelics increase the overall entropy of brain activity [9, 7]. When considering the log-transformed entropy-production (a measure of irreversibility) of the spiketrains, Friedman's test found a significant difference between conditions (\(Q\approx 80.42\), \(p\approx 3.44\times 10^{-18}\)), and posthoc analysis once again found a small, but significantly higher entropy-production (greater irreversibility) production in the DPT condition (\(-4.13\pm 1.39\)) when compared to the control condition (\(-4.47\pm 1.57\), \(t\approx-8.29\), \(p\approx 3.71\times 10^{-16}\), Cohen's D=-0.23) and the washout condition (\(-4.46\pm 1.45\), \(t\approx 10.22\), \(p\approx 2.29\times 10^{-23}\), Cohen's D=0.23), but not between control and washout. 
Recent work on human neuroimaging has found that loss of consciousness is associated with increased reversibility of brain activity [49, 50], and so the finding that a psychedelic like DPT is associated with an increase in entropy production suggests that time-reversibility may be a more general marker of conscious states. Interestingly, we found no significant differences in the log-transformed active information storage (AIS) between any of the conditions, however, we did find strong, significant differences in the maximum search depth for the embedding lag (\(Q\approx 348.17\), \(p\approx 2.49\times 10^{-76}\)). The maximum search depth can be understood as the "time-horizon" of the neuron's memory: the maximum distance into the past that still contains information about the immediate future. Post-hoc analysis found that all three conditions were distinct. The DPT condition had the shortest memory (4.04 ms \(\pm\) 0.96), lower than both the control (\(t\approx 8.41\), \(p\approx 1.59\times 10^{-16}\), Cohen's D=0.41) and washout (\(t\approx-19.88\), \(p\approx 4.12\times 10^{-73}\), Cohen's D=-1.07) conditions. The control condition was in the middle (4.47 ms \(\pm\) 1.14), and significantly lower than the washout condition (\(t\approx-8.24\), \(p\approx 5.9\times 10^{-16}\), Cohen's D=-0.41), which had the longest average memory (4.82 ms \(\pm\) 0.39). Collectively, these results indicate that the dynamics induced by DPT are distinct from the drug-free state: the single-neuron activity in the DPT condition is characterized by higher entropy, less reversible dynamics, as well as a shorter "memory" in each neuron (although the total AIS was surprisingly unchanged). These results are broadly consistent with what we might expect based on the entropic brain hypothesis.
### Network Measures
After constructing the multivariate transfer entropy network (for details, see Materials & Methods), we analyzed the structure of directed, pairwise dependencies between neurons. Friedman's test found small, but significant differences between conditions in the log-transformed total information flowing into each neuron (\(Q\approx 87.35\), \(p\approx 1.08\times 10^{-19}\)). Post-hoc analysis found that, once again, there was no significant difference between the control (\(-14.79\pm 5.24\)) and washout (\(-14.91\pm 4.21\)) conditions, but that the DPT condition had significantly less mTE (\(-16.15\pm 3.85\)) than either control (\(t\approx 8.53\), \(p\approx 6.04\times 10^{-17}\), Cohen's D=0.3) or washout (\(t\approx-11.46\), \(p\approx 1.91\times 10^{-28}\), Cohen's D=-0.31). Curiously, if we consider the discrete in-degree, rather than considering the total information inflow, we find the opposite pattern (\(Q\approx 131.9\), \(p\approx 2.28\times 10^{-29}\)). There is no significant difference in in-degree between the control (\(8.08\pm 2.07\) edges) and washout (\(8.27\pm 1.81\) edges) conditions, however, the DPT condition has a significantly greater in-degree (\(8.71\pm 1.6\) edges) than both the control condition (\(t\approx-10.37\), \(p\approx 8.81\times 10^{-24}\), Cohen's D=-0.34) and the washout condition (\(t\approx 10.57\), \(p\approx 1.33\times 10^{-24}\), Cohen's D=0.25). This is curious, as it suggests that, in the DPT condition, there is an increase in low-level connectivity, but that the strength of individual edges is also reduced: a proliferation of weak connections. For visualization of an example network, see Figure 2.
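To make the distinction between in-degree, total inbound information flow, and the local clustering coefficient taken up in the next paragraph concrete, here is a toy sketch on a small weighted, directed graph whose edge weights stand in for normalized mTE values; it is purely illustrative and not part of the analysis pipeline.

```python
import networkx as nx

# Toy effective-connectivity network: edge weights stand in for normalized mTE (bits).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("n1", "n3", 0.020), ("n2", "n3", 0.015), ("n4", "n3", 0.004),
    ("n5", "n3", 0.003), ("n1", "n2", 0.010), ("n2", "n1", 0.008),
])

target = "n3"
in_degree = G.in_degree(target)                        # number of incoming edges
in_strength = G.in_degree(target, weight="weight")     # total inbound information flow
clustering = nx.clustering(G, target)                  # directed local clustering coefficient

print(in_degree, in_strength, clustering)
# A condition can raise a neuron's in-degree (more, weaker edges) while its summed
# inbound mTE falls -- the pattern reported here for the DPT condition.
```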
This hypothesis is supported by an analysis of local circuit density in the network, commonly called the "clustering coefficient" [51, 45]. Briefly, the clustering coefficient gives a measure of local integration: for each neuron, it quantifies how many of that neuron's neighbors are also neighbors (i.e. form closed triangles). Friedman's test found significant differences in the log-transformed clustering coefficient between all conditions (\(Q\approx 211.73\), \(p\approx 1.06\times 10^{-46}\)), and post-hoc analysis found significant differences between all pairs of conditions. The control condition had the lowest log-transformed clustering coefficient (-1.91 \(\pm\) 0.61) compared to DPT (-1.7 \(\pm\) 0.4, \(t\approx-15.22\), \(p\approx 2.54\times 10^{-46}\), Cohen's D=-0.39) and washout (-1.63 \(\pm\) 0.39, \(t\approx-8.87\), \(p\approx 4.51\times 10^{-18}\), Cohen's D=-0.18), and the washout condition was significantly higher than the DPT condition (\(t\approx-8.87\), \(p\approx 4.51\times 10^{-18}\), Cohen's D=-0.18), although note the weak effect size. These results suggest that the addition of DPT is associated with an increase in weak, local integration: while the total amount of information coming into each neuron is decreased, more locally clustered weak connections are allowed to open.

Figure 2: **A.** Cumulative distribution function (CDF) plot of the individual neuron entropies for the three conditions (pink: control, green: DPT, blue: washout). The Friedman’s \(\chi^{2}\) statistic is computed from all three distributions. **B-I.** CDF plots for the following measures: AIS, AIS maximum lag, total in-coming mTE, in-degree, joint mutual information from two parents onto a single target, redundant information, synergistic information, and entropy production. **J.** Visualization of a representative mTE network from a single culture during all three conditions. Visual inspection shows that the DPT condition has an increased number of weak (thin) edges when compared to the control condition, consistent with the finding that the in-degree of each neuron has increased even as overall information flow decreases.

Curiously, unlike many of the other metrics, this effect persists even after the drug is washed out. Collectively, these results challenge simplistic stories such as "increased connectivity" or "decreased connectivity", but rather suggest a more nuanced change in the communicative structure of the network, typified by both an overall decrease in the total information flow, but an increase in the number of weak open connections.
### Higher-Order Statistical Synergy
When considering higher-order information integration (statistical synergy), we found weak, but significant patterns consistent with prior results. Friedman's test on the log-transformed normalized synergy found significant differences between the conditions (\(Q\approx 29.2\), \(p\approx 4.57\times 10^{-7}\)). Post-hoc analysis found no significant difference between the control (\(-33.21\pm 15.84\)) and washout conditions (\(-32.24\pm 13.01\)), but a weak, significant decrease in log-transformed synergy in the DPT condition (\(-35.54\pm 13.64\)) compared to both control (\(t\approx 4.35\), \(p\approx 1.51\times 10^{-5}\), Cohen's D=0.16) and washout conditions (\(t\approx-7.43\), \(p\approx 2.73\times 10^{-13}\), Cohen's D=-0.25). These results tentatively suggest that, when exposed to DPT, the individual neurons are "integrating" less information from pairs of inputs than they ordinarily would.
This finding was unexpected, as previous research has found that loss of synergy is generally associated with decreased conscious awareness [52, 53], although this prior work has been done exclusively at the whole-brain level. We stress that these are tentative results for two reasons, however: the first is that different redundancy functions or formulations of the PID may return different synergies [54], and the second is that we only considered the case of two parents and a single target: higher-order combinations may show quantitatively different patterns of information integration, although such an analysis is beyond the scope of this project.
## 3 Discussion
In this paper, we have described how the serotonergic psychedelic N,N-dipropyltryptamine (DPT) alters the statistics of information dynamics in organotypic cultures before, during, and after drug exposure. We found that concentrations of 10 \(\mu\)M DPT induced a transient dynamic characterized by increased entropy of single neuron activity, reduced strong connections between neurons, but simultaneously, a proliferation of weak connections. We found that higher-order statistical synergy was decreased, but the temporal irreversibility of neural activity was increased. Collectively, these results paint a complex picture of the effects of DPT on neural circuit dynamics. The decrease in strong connections and reduction in synergistic processing could be described as "disintegration" of the system: in both cases, a smaller proportion of the uncertainty about the future activity of the target neurons can be resolved by learning about other parts of the system. Conversely, however, the increase in in-degree (indicating a growth in weak connections) suggests that this is not the entire story: more channels of information flow may be opening; they are just weaker in nature. These results are broadly consistent with prior results from whole-brain neuroimaging. The increase in regional entropy is well-documented enough to form the core of the entropic brain hypothesis [9, 10] (although for a dissenting opinion, see [8]). Similarly, bivariate transfer entropy analysis of MEG data from humans under the influence of LSD and psilocybin found decreased effective connectivity [55]. To the best of our knowledge, at the time of writing, there have been no published analyses of how psychedelic drugs impact temporal reversibility or statistical synergy (although Mediano reports that a closely related measure, integrated information, \(\Phi\), surprisingly decreases under LSD or psilocybin in a manner somewhat similar to sleep [56]). The finding that DPT induces an increase in weak connections may provide insights into the documented ability of tryptamine psychedelics to induce neuroplasticity in neuronal networks. _In vitro_ work has found that exposure to drugs such as LSD and psilocybin produces increased dendritic arborization and synaptogenesis [57, 58]. A naive Hebbian model might suggest that it is the increased information flow between previously disconnected neurons that might drive the emergence of new connections, although we should stress that the transfer entropy network inference algorithm does not claim to recover purely synaptic connections. Future work that can combine spontaneous activity recording with biological analysis of neuroplasticity may be able to explore the connection more directly.
Curiously, despite the consistency with macro-scale imaging analyses, the finding that DPT increased the entropy of spontaneous firing activity relative to the control and washout conditions conflicts with two prior cellular-level studies, both of which found that the psychedelic 2,5-dimethoxy-4-iodoamphetamine (DOI) had an inhibitory effect on spiking activity [11, 59]. One possible explanation for this discrepancy is the different pharmacological profiles of the two drugs: DOI is a substituted amphetamine, while DPT is of the tryptamine class, and they have distinct binding profiles. Another possibility is the difference between _in-vivo_ and _in-vitro_ studies. Given the overall paucity of research on the circuit-level effects of psychedelic drugs on neural dynamics, further studies will hopefully shed considerable light on these questions. This study has some limitations that are worth discussing. The most significant is the small absolute number of recordings (11), which makes culture-to-culture comparisons weak (in contrast to neuron- and circuit-level analyses, which are highly powered). The cultures themselves have no behavior or consciousness to speak of, and so the insights that can be gleaned from them about the phenomenological nature of the psychedelic state are limited. The cultures themselves are taken from the dorsal cortex near the somatomotor areas; however, the precise placement of the electrodes varies, which means there is unavoidable heterogeneity with respect to which neurons are being sampled and what layers are represented. Future replications with larger N, and possibly in behaving animal models, will go a long way to addressing these concerns. Recent developments in multi-layer imaging from animal cortex [18], or machine-learning based cell-type classification [60], may augment future studies in this vein. Finally, this study compares DPT to an empty vehicle solution (DMSO), and since DPT is a relatively promiscuous ligand (binding to many different serotonin receptors), it is impossible to attribute the observed effects to any single receptor. These results should be seen as a first step towards understanding the effects of psychedelics on circuit-level information-processing dynamics. The limitations discussed above suggest natural subsequent studies, including using invasive recordings from behaving animals (where placement of the array can be controlled), studying the dose-response curves with respect to measures like neural entropy, and finally, increasing the population size to improve statistical power. However, despite the limitations, we suggest that this study has provided key insights into the computational effects of psychedelics on meso-scale brain activity.
## 4 Conclusions
In this study, we showed that the serotonergic psychedelic N,N-DPT disrupts information-processing dynamics of cortical tissue in _in vitro_ organotypic cultures, with some disruptions appearing to be reversible, while others persist post-exposure. The psychedelic increased the entropy of spontaneous neural firing activity, while decreasing the temporal reversibility, and altered the connectivity patterns of neural circuits: reducing the overall information flow coming into each neuron, but increasing the total number of significant connections.
These different effects present a nuanced picture, largely irreducible to simple stories of "increasing integration" or "decreasing integration", and instead point to a rich area of future work more carefully characterizing the effects of psychedelics on information-processing and computational dynamics in the brain.

## Glossary

Here we provide a brief reference for the various information-theoretic and graph-theoretic measures used in this paper. For readers interested in finer detail, see the Materials and Methods section and the references therein.

**Entropy:** A measure of uncertainty about the outcome of a random draw from a distribution. Formally: \(H(X)=-\sum_{x\in\mathcal{X}}P(x)\log P(x)\), where \(\mathcal{X}\) is the support set of \(X\).

**Support Set:** For a random variable \(X\), the support set of \(X\), denoted as \(\mathcal{X}\), is the set of all possible states \(X\) can adopt.

**Mutual Information:** The amount of uncertainty about a variable \(X\) reduced upon learning the state of some other variable, \(Y\). Formally: \(I(X;Y)=H(X)-H(X|Y)\), where \(H(X|Y)\) is the conditional entropy of \(X\) given \(Y\).

**Active Information Storage:** For a temporally extended process \(X\), the AIS is the information about the state of \(X_{t}\) at time \(t\) disclosed by the past. Formally: \(AIS(X_{t})=I(X_{past};X_{t})\).

**Maximum lag:** The maximum distance in the past accounted for when computing the AIS.

**Transfer Entropy:** The information that the past of one variable \(X\) discloses about the next state of a target variable \(Y\), conditioned on \(Y\)'s own past. Formally: \(TE(X\to Y)=I(X_{past};Y_{t}|Y_{past})\).

**Multivariate Transfer Entropy:** The information that the past of one variable \(X\) discloses about the next state of a target variable \(Y\), conditioned on the past of all other parents of \(Y\). Formally: \(I(X_{past};Y_{t}|Y_{past},\mathbf{Z}_{past})\), where \(\mathbf{Z}\) is the set of all parents of \(Y\) excluding \(X\).

**Partial Information Decomposition:** A technique for decomposing the information that two sources (\(X_{1}\) and \(X_{2}\)) disclose about a single target \(Y\) into redundant, unique, and synergistic components.

**Redundant Information:** The information about a target \(Y\) that could be learned by observing either \(X_{1}\) alone or \(X_{2}\) alone.

**Unique Information:** The information about a target \(Y\) that can only be learned by observing \(X_{i}\).

**Synergistic Information:** The information about a target \(Y\) that can only be learned by observing the joint state of \(X_{1}\) and \(X_{2}\) simultaneously.

**Entropy Production:** A measure of the temporal irreversibility of a process. Formally: \(D_{KL}(\overrightarrow{X}||\overleftarrow{X})\).

**Kullback-Leibler Divergence:** The information gained when updating one's belief from a prior distribution to a posterior distribution. Formally: \(D_{KL}(P||Q)=\sum_{x\in\mathcal{X}}P(x)\log P(x)/Q(x)\), where \(P\) and \(Q\) are both probability distributions on the support set \(\mathcal{X}\).

**In-Degree:** The number of in-coming edges to a node in a network.

**Out-Degree:** The number of out-going edges from a node in a network.

**Local Clustering Coefficient:** A measure of how many triangles a given node participates in relative to the total number of triangles it could possibly participate in given its degree.
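As a purely illustrative sketch of the first few definitions above (the joint distribution and variable names are our own and are not taken from the recordings), the entropy, mutual information, and Kullback-Leibler divergence of a small discrete joint distribution can be computed directly:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution p."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A hypothetical joint distribution over two binary variables X and Y.
p_xy = np.array([[0.40, 0.10],
                 [0.05, 0.45]])   # rows index x, columns index y

p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

# Mutual information: I(X;Y) = H(X) + H(Y) - H(X,Y).
mi = entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())

# KL divergence between the joint and the product of marginals,
# an equivalent expression for the mutual information.
prod = np.outer(p_x, p_y)
kl = np.sum(p_xy * np.log2(p_xy / prod))

print(f"I(X;Y) = {mi:.3f} bits, D_KL(joint || product) = {kl:.3f} bits")
```

Because \(I(X;Y)=D_{KL}(p(x,y)\,||\,p(x)p(y))\), the two printed values agree.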
## 5 Materials & Methods

### Organotypic Culture Preparation, Data Collection & Preprocessing

Organotypic cultures were prepared according to the methods described in [36, 16]. Briefly, we used Sprague-Dawley strain postnatal rats which were, on average, five days old. The use of these animals was approved by the Indiana University Animal Care and Use Committee, and all proper protocols for animal care were followed. The overall procedure involved extracting their brains and slicing them in the coronal plane using a vibratome to achieve a thickness of 400 \(\mu\)m. After this process, the slices were placed in trays with culture medium in an incubator for a period of two to four weeks. The culture medium in the trays was replaced by half every three days. The composition of the culture medium was as follows: 1L Minimum Essential Medium (Sigma-Aldrich), 500mL Hank's balanced salt solution (Sigma-Aldrich), 500mL of heat-inactivated horse serum (Sigma-Aldrich), 2mL PSN antibiotic mixture, and 10mL L-Glutamine. All animal tissue samples were prepared according to guidelines from the National Institutes of Health and all animal use procedures were approved by the Indiana University Animal Care and Use Committee (IUCAC).

After 2-4 weeks of maturation, cultures were recorded on a 512-microelectrode array, with 5 micron diameter electrodes arranged in a triangular lattice with an inter-electrode distance of 60 \(\mu\)m [61]. Data were sampled at a high temporal resolution of 50 \(\mu\)s. Each culture was recorded from for three hours. The first hour was the control condition; spontaneous activity was recorded under normal conditions. A "placebo" of [X] \(\mu\)L of empty DMSO vehicle was added to the culture media. Following the control hour, the irrigation system was flushed, and a second batch of culture medium, containing 10 \(\mu\)M N,N-DPT solution in DMSO (Cayman Chemical Company), was introduced. Cultures were recorded from for another hour (the drug condition), before the system was again flushed and the original, drug-free media was re-introduced. Recordings were stopped during media turn-over to avoid artifacts. Following recording, the three one-hour datasets were appended, and spike-sorting was done using the kilosort3 software package [37], in a Python3.7 environment. Following spike-sorting, the resulting rasters were re-binned to 1 ms frames. Rasters were excluded from analysis if they contained fewer than 30 neurons, resulting in a final count of 11 viable datasets.

### Information Dynamics & Network Inference

Information dynamics is a quantitative framework used to analyze how the elements of a complex system interact and collectively "compute" the future trajectory of a system [14]. By drawing an analogy with digital computation, the information dynamics framework breaks "computation" in complex systems down into a set of distinct dynamical features, including information storage (analogous to memory, or autocorrelation), information flow or transfer, and information modification or "integration". For an element \(X\) in a stochastic dynamical system, the simplest measure of information structure is the Shannon entropy of that element: how uncertain are we, as observers, about the state \(X\) will adopt at time \(t\)? Formally:

\[H(X_{t})=-\sum_{x\in\mathcal{X}}P(x)\log_{2}P(x) \tag{1}\]

where \(\mathcal{X}\) is the support set of \(X\) and \(P(x)\) is the probability of observing that \(X=x\).
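For concreteness, a minimal plug-in estimate of Eq. (1) for a 1 ms binned spike train might look like the sketch below; the synthetic raster and firing rate are hypothetical stand-ins, not the recorded data described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one neuron's spike raster binned at 1 ms:
# a binary time series where 1 indicates at least one spike in the bin.
rate_hz = 5.0                        # hypothetical firing rate
p_spike = rate_hz / 1000.0           # probability of a spike per 1 ms bin
spikes = rng.random(3_600_000) < p_spike   # one hour of 1 ms bins

# Plug-in (maximum-likelihood) estimate of P(x) and the entropy H(X_t) in bits.
p1 = spikes.mean()
probs = np.array([1.0 - p1, p1])
probs = probs[probs > 0]
H = -np.sum(probs * np.log2(probs))
print(f"Estimated P(spike) = {p1:.4f}, H(X_t) = {H:.4f} bits per bin")
```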
The Shannon entropy has no notion of dynamics, however: it assumes that at every time \(t\), \(X\) selects its state from \(\mathcal{X}\) at random, according to \(P(x)\).

#### 5.2.1 Active Information Storage

The simplest measure of information dynamics is the active information storage, which quantifies how much the past state of \(X\) constrains the possible next state \(X_{t}\):

\[AIS(X)=I(X_{past};X_{t})=H(X_{t})-H(X_{t}|X_{past}) \tag{2}\]

where \(X_{past}\) refers to a potentially multidimensional embedding of the past states of \(X\). We can re-write Eq. 2 as a kind of "information regression" that details how information about \(X\)'s next state is distributed over time [42]:

\[H(X_{t})=AIS(X)+H_{\mu}(X) \tag{3}\]

Here \(H_{\mu}(X)\) is the conditional entropy rate \(H(X_{t}|X_{past})\): all that uncertainty about \(X_{t}\) that is not resolved by learning the past of \(X\). For each neuron in each recording, for each condition, we inferred the AIS using a non-uniform embedding algorithm provided by the IDTxl package [62]. Briefly, the non-uniform embedding procedure iterates through lags \(1..\tau_{\max}\) (inclusive) and tests whether the addition of each subsequent lag significantly increases the AIS, conditional on all previously selected lags. For more details, see [63] and the IDTxl documentation. Here \(\tau_{\max}\) was chosen to be 5 bins, and 1000 shuffled nulls were used for null-hypothesis significance testing. To control for the effects of variable firing rates, we report the normalized active information storage: \(AIS(X)/H(X_{t})\).

#### 5.2.2 Multivariate Transfer Entropy

The AIS quantifies how much information the past of a single element discloses about its own future (the amount of information "stored" in \(X\)). To quantify how much information "flows" from one element to another, we must measure how the past of other elements of the system constrains \(X_{t}\). This is done with the multivariate transfer entropy [64, 44]. For a set of parent elements \(\mathbf{Z}\), we can quantify how much information the past of \(\mathbf{Z}\) discloses about the next state of \(X\) with the conditional mutual information:

\[mTE(\mathbf{Z}\to X)=I(\mathbf{Z}_{past};X_{t}|X_{past}) \tag{4}\]

In the context of the information regression, we now have:

\[H(X_{t})=AIS(X)+mTE(\mathbf{Z}\to X)+H_{\mu}(X) \tag{5}\]

where \(H_{\mu}\) is now given by \(H(X_{t}|X_{past},\mathbf{Z}_{past})\). The \(mTE\) is appealing in that it accounts for potentially higher-order synergies between multiple \(Z_{i},Z_{j}\in\mathbf{Z}\), as well as not double-counting redundancies as the bivariate transfer entropy does [43]. The full \(mTE(\mathbf{Z}\to X)\) is a multivariate measure, more naturally applicable to hypergraphs than bivariate networks; however, a bivariate network that still accounts for redundancies and synergies can be recovered by defining the weight of each directed edge as \(I(Y_{past};X_{t}|\mathbf{Z}_{past}^{-Y},X_{past})\), where \(\mathbf{Z}^{-Y}\) refers to the set of all \(Z_{i}\in\mathbf{Z}\) excluding \(Y\). For large systems with finite datasets, it is impossible to account for all possible parents, as well as all possible lags.
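For intuition, a minimal plug-in estimate of the simplest (bivariate, one bin of history) form of this quantity, together with a circular-shift surrogate test of the kind described below, might look like the following sketch. The synthetic spike trains and parameters are hypothetical, and the actual analyses relied on the IDTxl and JIDT estimators described next.

```python
import numpy as np

def plugin_te(source, target):
    """Plug-in transfer entropy TE(source -> target) in bits, using one bin of
    history for both source and target (a deliberately minimal embedding)."""
    x_t, x_p, y_p = target[1:], target[:-1], source[:-1]
    # Joint distribution over (x_t, x_past, y_past), each binary.
    idx = x_t * 4 + x_p * 2 + y_p
    p = np.bincount(idx, minlength=8).astype(float).reshape(2, 2, 2)
    p /= p.sum()
    te = 0.0
    for a in range(2):          # x_t
        for b in range(2):      # x_past
            for c in range(2):  # y_past
                pj = p[a, b, c]
                if pj == 0:
                    continue
                p_b = p[:, b, :].sum()       # p(x_past)
                p_ab = p[a, b, :].sum()      # p(x_t, x_past)
                p_bc = p[:, b, c].sum()      # p(x_past, y_past)
                te += pj * np.log2(pj * p_b / (p_ab * p_bc))
    return te

rng = np.random.default_rng(1)
T = 200_000
y = (rng.random(T) < 0.05).astype(int)            # hypothetical source neuron
drive = np.roll(y, 1); drive[0] = 0
x = ((rng.random(T) < 0.02) | ((drive == 1) & (rng.random(T) < 0.3))).astype(int)

te_obs = plugin_te(y, x)
# Circular shifts preserve each train's autocorrelation while destroying the
# source-target alignment, giving a null distribution for significance testing.
null = np.array([plugin_te(np.roll(y, rng.integers(100, T - 100)), x)
                 for _ in range(200)])
print(f"TE = {te_obs:.4f} bits, surrogate 95th pct = {np.quantile(null, 0.95):.4f}")
```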
Here, we used the IDTxl package [62] to implement a modified version of the algorithm described in [44]. IDTxl implements a greedy search, coupled with extensive null hypothesis surrogate testing, to infer an optimal parent set \(\mathbf{Z}\) and the embedding for both \(X_{past}\) and \(\mathbf{Z}_{past}\); however, the runtimes can still be excessive: the time complexity for a full network inference is \(O(N^{2}\times d\times\tau_{max}\times S)\), where \(N\) is the number of neurons in the network, \(d\) is the eventual average in-degree of each neuron, \(\tau_{\max}\) is the maximum search depth, and \(S\) is the number of surrogates [65]. Given limitations in available computing resources, we first pre-filtered the set of prospective parents for each target by removing any neurons that did not have any significant bivariate transfer entropy onto the target over a range of \(1..30\) bins of lag for the source and five ms bins of lag for the target. Significance was tested using the analytic null estimator [66], as implemented by the JIDT [67]. The results of this analysis were fed into the IDTxl mTE estimator. Following prior work on transfer entropy network inference in neural cultures [16, 68, 69, 70], we constrained the multivariate transfer entropy inference to only consider one bin of source history, fixed at the lag that maximized the significant bivariate transfer entropy. The parent set \(\mathbf{Z}\) for each neuron, in each culture, in each condition was inferred in parallel (requiring approximately 5,000 unique optimizations), and significance testing was done using null distributions of 250 circularly-shifted surrogates. The circular shift was chosen to preserve the autocorrelation of each neuron. To control for the effects of variable firing rate, we report the normalized multivariate transfer entropy: \(mTE(Y\to X|\mathbf{Z}^{-Y})/H(X_{t})\).

#### 5.2.3 Partial Information Decomposition & Synergy

The final information dynamic we explored is information modification [14], sometimes also referred to as information integration. Information modification has been associated with "computation" in neural systems previously [21], and refers to novel information generated when a single neuron's future is constrained by the joint state of multiple inputs simultaneously [46]. Following previous work [47, 21, 15], we operationalized information modification with the statistical synergy, as computed using the partial information decomposition (PID) framework [71]. Since its development by Williams and Beer in 2012, the PID framework has been widely applied across a variety of fields, including neuroscience [21, 72], clinical care research [52], sociology [73], climatology [74], machine learning [75], as well as to philosophical questions such as "emergence" [76, 77] and consciousness [53]. Briefly, the PID provides a scaffold by which the information that multiple sources disclose about a target can be decomposed into non-overlapping "atomic" components of information. Consider the case where two parent neurons \(Y_{1},Y_{2}\) disclose information about a target neuron \(X\). The total information that both parents disclose about the target can be quantified with the joint mutual information, \(I(Y_{1},Y_{2};X)\); however, this is a lump-sum measure that treats \(Y_{1}\) and \(Y_{2}\) as a coarse-grained macro-variable and reveals nothing about how the information about \(X\) is distributed over the \(Y_{i}\)'s.
The PID solves this issue by decomposing:

\[I(Y_{1},Y_{2};X)=Red(Y_{1},Y_{2};X)+Unq(Y_{1};X/Y_{2})+Unq(Y_{2};X/Y_{1})+Syn(Y_{1},Y_{2};X) \tag{6}\]

The term \(Red(Y_{1},Y_{2};X)\) is the redundant information about \(X\) that could be learned by learning either the state of \(Y_{1}\) alone or the state of \(Y_{2}\) alone. The term \(Unq(Y_{i};X/Y_{j})\) is the unique information about \(X\) that can only be learned by observing \(Y_{i}\). The final term, \(Syn(Y_{1},Y_{2};X)\), is the synergistic information about \(X\) that can _only_ be learned when both the states of \(Y_{1}\) and \(Y_{2}\) are observed simultaneously. We can also decompose the marginal mutual informations:

\[I(Y_{1};X)=Red(Y_{1},Y_{2};X)+Unq(Y_{1};X/Y_{2}) \tag{7}\]

\[I(Y_{2};X)=Red(Y_{1},Y_{2};X)+Unq(Y_{2};X/Y_{1}) \tag{8}\]

The result is an under-determined system of three equations and four unknown values (the redundant, synergistic, and two unique information atoms): if any one term is computed, the remaining three can be solved "for free." Here we used the \(I_{BROJA}\) measure of unique information [78], as it guarantees a non-negative decomposition. For each network, in each condition, we computed the bivariate PID for every instance of the two-parent/single-target motif with the BROJA-2PID package [79], as provided by the IDTxl package [62]. For each parent, we used the same optimal lag as was used in the mTE network inference. To control for variable firing rates, we report the normalized synergy \(Syn(Y_{1},Y_{2};X)/H(X_{t})\).

#### 5.2.4 Reversibility & Entropy Production

To assess whether DPT altered the temporal reversibility of cortical activity, we computed the entropy production in the neuron-level time series of spiking activity. Based on [40, 41], we estimated the entropy production with the Kullback-Leibler divergence between the forward- and reverse-time spike trains for each neuron: \(D_{KL}(\overrightarrow{X}||\overleftarrow{X})\). For a discrete random variable \(X\) that transitions from state \(x_{i}\) to state \(x_{j}\) according to a stationary transition probability matrix (TPM) \(P(x_{i}\to x_{j})\), the entropy production is given by:

\[D_{KL}(\overrightarrow{X}||\overleftarrow{X}):=\sum_{x_{i},x_{j}\in\mathcal{X}}P(x_{i}\to x_{j})\log\left(\frac{P(x_{i}\to x_{j})}{P(x_{j}\to x_{i})}\right) \tag{9}\]

If \(P(x_{i}\to x_{j})=P(x_{j}\to x_{i})\) for all \(x_{i}\), \(x_{j}\), then the system is said to obey "detailed balance" and is at thermodynamic equilibrium: there is no "flow of time" from the perspective of the system, and the flow from past to future and from future to past are indistinguishable. On the contrary, if \(P(x_{i}\to x_{j})\neq P(x_{j}\to x_{i})\), then the system has broken detailed balance and is operating far from equilibrium [41]. To ensure that the state spaces were large enough to capture rich temporal dynamics, we used a lossless coarse-graining procedure on each neuron's spike train: the time series was compressed into non-overlapping, successive 5 ms bins. Each macro-frame could be in one of thirty-two possible states, and we computed the TPM from the sequence of successive macro-frames. To satisfy the constraints of the \(D_{KL}\), we only included transitions where both the transition \(x_{i}\to x_{j}\) and \(x_{j}\to x_{i}\) were observed.
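A minimal sketch of this computation, mirroring Eq. (9) after the same 5 ms coarse-graining, is shown below. The spike train, rate, and recording length are synthetic, hypothetical stand-ins; the actual analyses used the recorded rasters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1 ms binary spike train for a single neuron.
spikes = (rng.random(600_000) < 0.03).astype(int)

# Lossless coarse-graining: non-overlapping 5 ms words -> one of 2**5 = 32 states.
n_words = spikes.size // 5
words = spikes[:n_words * 5].reshape(n_words, 5)
states = words @ (2 ** np.arange(5))          # integer label of each macro-frame

# Empirical transition probabilities between successive macro-frames.
counts = np.zeros((32, 32))
np.add.at(counts, (states[:-1], states[1:]), 1)
P = counts / counts.sum(axis=1, keepdims=True).clip(min=1)

# Entropy production as in Eq. (9), restricted (as in the text) to
# transitions observed in both directions.
ep = 0.0
for i in range(32):
    for j in range(32):
        if counts[i, j] > 0 and counts[j, i] > 0:
            ep += P[i, j] * np.log2(P[i, j] / P[j, i])
print(f"Entropy production estimate: {ep:.4f} bits")
```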
#### 5.2.5 Clustering Coefficient

The local clustering coefficient [51, 45] for each node in each network was computed using the clustering() function from the NetworkX package [80]. Briefly, the local clustering coefficient of a node quantifies what proportion of that node's neighbors are themselves connected; a high value of the coefficient indicates greater local integration.

### Software, Code, & Data Availability Statement

Data analysis scripts will be provided as supplementary materials for this study. The spike-sorted data will be deposited in the CRCNS repository. Raw, unprocessed data is available from the authors upon request.

## 6 Acknowledgements

T.F.V. and M.P. are supported by the NSF-NRT grant 1735095, Interdisciplinary Training in Complex Networks and Systems. M.P. is also supported by the NSF GRFP. J.M.B. is supported by the Expeditions: Mind in Vitro - Computing with Living Neurons National Science Foundation grant 2123781, subcontract to J.M.B. This work was supported by the Source Research Foundation.
2309.05649
Data efficiency, dimensionality reduction, and the generalized symmetric information bottleneck
The Symmetric Information Bottleneck (SIB), an extension of the more familiar Information Bottleneck, is a dimensionality reduction technique that simultaneously compresses two random variables to preserve information between their compressed versions. We introduce the Generalized Symmetric Information Bottleneck (GSIB), which explores different functional forms of the cost of such simultaneous reduction. We then explore the dataset size requirements of such simultaneous compression. We do this by deriving bounds and root-mean-squared estimates of statistical fluctuations of the involved loss functions. We show that, in typical situations, the simultaneous GSIB compression requires qualitatively less data to achieve the same errors compared to compressing variables one at a time. We suggest that this is an example of a more general principle that simultaneous compression is more data efficient than independent compression of each of the input variables.
K. Michael Martini, Ilya Nemenman
2023-09-11T17:40:37Z
http://arxiv.org/abs/2309.05649v2
# Data efficiency, dimensionality reduction, and the generalized symmetric information bottleneck ###### Abstract The Symmetric Information Bottleneck (SIB), an extension of the more familiar Information Bottleneck, is a dimensionality reduction technique that simultaneously compresses two random variables to preserve information between their compressed versions. We introduce the Generalized Symmetric Information Bottleneck (GSIB), which explores different functional forms of the cost of such simultaneous reduction. We then explore the dataset size requirements of such simultaneous compression. We do this by deriving bounds and root-mean-squared estimates of statistical fluctuations of the involved loss functions. We show that, in typical situations, the simultaneous GSIB compression requires qualitatively less data to achieve the same errors compared to compressing variables one at a time. We suggest that this is an example of a more general principle that simultaneous compression is more data efficient than independent compression of each of the input variables. **Keywords:** Information Bottleneck, Symmetric Information Bottleneck, Dimensionality Reduction, Error Bounds, Data Efficiency ## 1 Introduction Recent years have seen an explosion of large-dimensional experimental data sets (de Vries et al., 2020; Siegle et al., 2021; Haghighi et al., 2022) and the parallel growth in the number of methods for _dimensionality reduction_ (DR)--that is, for extracting low-dimensional structure from large-dimensional data (Carreira-Perpinan, 1997; Van Der Maaten et al., 2009; Nanga et al., 2021). Broadly speaking, we classify dimensionality reduction methods into two classes: unsupervised and supervised. Unsupervised DR methods seek a low-dimensional description, \(T_{X}\), of a large-dimensional variable, \(X\), that preserves its variance, entropy, or another measure of diversity of the data. Such methods include the familiar principal component analysis (PCA) (Hotelling, 1933), non-negative matrix factorization (Lee and Seung, 1999), multidimensional scaling (MDS) (Kruskal, 1964), t-distributed stochastic neighbor embedding (t-SNE) (Van der Maaten and Hinton, 2008), Isomap (Tenenbaum et al., 2000), Uniform Manifold Approximation and Projection (UMAP) (McInnes et al., 2018), autoencoders (Hinton and Salakhutdinov, 2006), and related techniques (Kingma and Welling, 2014). In contrast, supervised DR techniques aim to find a low-dimensional description, \(T_{X}\), of a large dimensional \(X\), while preserving \(T_{X}\)'s ability to explain another variable \(Y\), which provides an effective _relevance_ or _supervision_ signal. Common examples include variable selection in regression (Andersen and Bro, 2010; Kuo and Mallick, 1998), cross-encoders, Bayesian Ising Approximation (BIA) (Fisher and Mehta, 2015), and the Information Bottleneck (IB) (Tishby et al., 2000; Tishby and Slonim, 2000). A particularly interesting class of such supervised dimensionality reduction problems is when both the reduced variable \(X\) and the relevance variable \(Y\) are large-dimensional. In these situations, finding significant correlations within combinatorially many groups of components of \(X\) and \(Y\) is hard, suggesting parallel dimensionality reduction of both \(X\) and \(Y\) into \(T_{X}\) and \(T_{Y}\), respectively. We distinguish three classes of approaches to this problem. 
In the first, which we call the _Independent Unsupervised Dimensionality Reduction_ (IUDR), one applies unsupervised DR methods to \(X\) and \(Y\) independently. One then searches for statistical dependencies between \(T_{X}\) and \(Y\) or \(T_{Y}\) and \(X\) or \(T_{X}\) and \(T_{Y}\), but the dimensionality reduction itself is agnostic of this subsequent step. A familiar example of this is the Principal Components Regression, where the projections on the principal components of \(X\) are regressed against \(Y\). We also distinguish _Independent Supervised Dimensionality Reduction_ (ISDR), where \(T_{X}\) is produced by compressing \(X\) with \(Y\) as the supervision signal, while \(T_{Y}\) emerges from compressing \(Y\) with \(X\) as the supervision. The Information Bottleneck (IB) (Tishby et al., 2000), the Generalized and Deterministic Information Bottleneck (GIB) (Strouse and Schwab, 2017), and cross-encoders are examples of such approaches. Finally, _Simultaneous Supervised Dimensionality Reduction_ (SSDR) is a class of methods where \(T_{X}\) and \(T_{Y}\) are produced simultaneously, typically being supervision signals of each other. Examples of SSDR include the Canonical Correlation Analysis (CCA) (Hotelling, 1936; Yang et al., 2021) and its modern nonlinear neural network based generalizations (Andrew et al., 2013; Chapman and Wang, 2021), Partial Least Squares (PLS) (Wold, 1966; Wold et al., 2001), and the Symmetric version of the Information Bottleneck (SIB) (Slonim et al., 2006).

In this paper we introduce a Generalized version of the Symmetric Information Bottleneck (GSIB) by interpolating between the compression cost measured by entropy and information. This parallels for SSDR the introduction of the Generalized Information Bottleneck (GIB) for ISDR, of which the Deterministic Information Bottleneck and the Information Bottleneck are limits (Strouse and Schwab, 2017). We then argue that SSDR approaches can require a lot less data than their ISDR counterparts to achieve the same accuracy. We demonstrate this by comparing the bias and statistical fluctuations in the objective functions of independent GIB reductions of variables \(X\) and \(Y\) (ISDR approach) with the corresponding bias and fluctuations for the GSIB (SSDR approach). We show that the bias for the GSIB scales as the product of cardinalities of the compressed variables, while the bias for the GIB scales as the (typically much larger) product of cardinalities of the supervision signal and the compressed variable. We do the comparison for both typical fluctuations and for the upper bounds on the fluctuations. While our derivations are done for the IB approaches only, the intuitive explanation of the differences between the approaches suggests that SSDR methods are likely to require less data than their ISDR analogues more generally.

## 2 Background: Information Bottleneck and the Symmetric Information Bottleneck

### Information Bottleneck and Its Generalizations

The goal of the _Information Bottleneck_ (IB) is to produce a compression, \(T_{X}\), of a random variable \(X\), such that the compression retains as much information as possible about another random variable \(Y\), which is called the _relevant_ (or, in our language, the _supervising_) variable.
The information is measured using Shannon's mutual information (Shannon, 1948), which quantifies the difference between the joint probability distribution \(p(x,y)\) and the product of the marginal distributions \(p(x)p(y)\):

\[I(X,Y)=\sum_{X,Y}p(x,y)\log\frac{p(x,y)}{p(x)p(y)}=H(X)-H(X|Y), \tag{1}\]

where \(H(X)\) is the entropy of the variable \(X\) and \(H(X|Y)\) is the conditional entropy of \(X\) given \(Y\), \(H(X|Y)=\sum_{Y}p(y)H(X|Y=y)=-\sum_{Y}p(y)\sum_{X}p(x|y)\log(p(x|y))\). Mutual information is symmetric, always non-negative, and is only zero when the random variables are independent (Cover, 1999). To achieve its goal, IB produces a probabilistic mapping from \(X\) to \(T_{X}\), \(p(t_{x}|x)\), which minimizes a specific cost function. The cost function trades off preserving the information in the compression about the relevant variable, \(I(T_{X},Y)\), against losing the information about \(X\) (reducing the variable), \(I(T_{X},X)\):

\[L_{\rm IB}=I(T_{X},X)-\beta I(T_{X},Y). \tag{2}\]

Here \(\beta\) is the trade-off parameter, which controls how important the compression \(I(T_{X},X)\) is compared to preserving the relevant information \(I(T_{X},Y)\). As \(\beta\rightarrow\infty\), the cost function is minimized by having no compression, \(T_{X}=X\). Recently a Generalized version of IB (GIB) was proposed (Strouse and Schwab, 2017), which changes the cost function to

\[L_{\rm GIB}=H(T_{X})-\alpha_{x}H(T_{X}|X)-\beta I(T_{X},Y), \tag{3}\]

which has a formal solution

\[p(t_{x}|x) =\frac{1}{Z(\beta,\alpha)}\exp\left[\frac{1}{\alpha_{x}}\left(\log p(t_{x})-\beta D_{\mathrm{KL}}(p(y|x)||p(y|t_{x}))\right)\right], \tag{4}\]
\[p(y|t_{x}) =\frac{1}{p(t_{x})}\sum_{X}p(t_{x}|x)p(x,y), \tag{5}\]

where \(D_{\mathrm{KL}}\) is the usual Kullback-Leibler divergence (Kullback and Leibler, 1951). The original IB is recovered from GIB when \(\alpha_{x}=1\). In contrast, when \(\alpha_{x}\to 0\), \(I(T_{X},X)\) is replaced with \(H(T_{X})\) in the cost function. This corresponds to replacing the cost of having a noisy channel encoding \(X\) into \(T_{X}\) with the cost of directly storing \(T_{X}\). In this case, the formal solution results in a deterministic mapping between \(X\) and \(T_{X}\), and the resulting problem is known as the _Deterministic Information Bottleneck_ (DIB) (Strouse and Schwab, 2017). If both \(X\) and \(Y\) are large-dimensional and require dimensionality reduction, one can apply IB to produce the mapping \(X\to T_{X}\) with \(Y\) as the relevant variable, and then solve a separate IB problem to map \(Y\to T_{Y}\) with \(X\) as the supervision. This approach would fall into the ISDR class in our nomenclature.

### Symmetric Information Bottleneck and its Generalization

The Symmetric Information Bottleneck (SIB), introduced in Slonim et al. (2006), is an SSDR approach, where \(X\) and \(Y\) are compressed simultaneously, such that the compressed versions \(T_{X}\) and \(T_{Y}\) contain the maximal amount of information about each other. This corresponds to optimizing the loss function:

\[L_{\mathrm{SIB}}=I(T_{X};X)+I(T_{Y};Y)-\beta I(T_{X};T_{Y}), \tag{6}\]

where optimization is over all possible probabilistic compressions \(p(t_{x}|x)\) and \(p(t_{y}|y)\). As before, \(\beta\) determines the strength of the trade-off between the compression and preserving the relevant information. For generality, here we propose a Generalized SIB (GSIB), which incorporates flexible compression terms, similar to how GIB was obtained from IB.
The new cost function is

\[L_{\mathrm{GSIB}} =I_{\alpha_{X}}(T_{X};X)+I_{\alpha_{Y}}(T_{Y};Y)-\beta I(T_{X};T_{Y}) \tag{7}\]
\[=H(T_{X})-\alpha_{X}H(T_{X}|X)+H(T_{Y})-\alpha_{Y}H(T_{Y}|Y)-\beta I(T_{X},T_{Y}). \tag{8}\]

Here we defined shorthands \(I_{\alpha_{X}}(T_{X},X)=H(T_{X})-\alpha_{X}H(T_{X}|X)\), and similarly for \(I_{\alpha_{Y}}\), and the cost function must be minimized with respect to \(p(t_{x}|x)\) and \(p(t_{y}|y)\). The parameters \(\alpha_{X}\) and \(\alpha_{Y}\) are what dictates how probabilistic the mapping between the uncompressed variables and their compressed versions is. In the limit \(\alpha_{X},\alpha_{Y}\to 0\), the mapping can be verified to be deterministic (see below), resulting in the Deterministic SIB (DSIB). When \(\alpha_{X},\alpha_{Y}\to 1\), GSIB becomes the usual SIB. Optimization of the cost function has a formal solution:

\[p(t_{x}|x) =\frac{\exp\left[\frac{1}{\alpha_{X}}\left(\ln p(t_{x})-\beta D_{\mathrm{KL}}(p(t_{y}|x)||p(t_{y}|t_{x}))\right)\right]}{Z_{x}(x,\alpha_{X},\beta)}, \tag{9}\]
\[p(t_{y}|y) =\frac{\exp\left[\frac{1}{\alpha_{Y}}\left(\ln p(t_{y})-\beta D_{\mathrm{KL}}(p(t_{x}|y)||p(t_{x}|t_{y}))\right)\right]}{Z_{y}(y,\alpha_{Y},\beta)}, \tag{10}\]
\[p(t_{y}|x) =\frac{\sum_{Y}p(t_{y}|y)p(x,y)}{p(x)},\quad p(t_{y}|t_{x})=\frac{\sum_{X,Y}p(t_{y}|y)p(t_{x}|x)p(x,y)}{\sum_{X}p(t_{x}|x)p(x)}, \tag{11}\]
\[p(t_{x}|y) =\frac{\sum_{X}p(t_{x}|x)p(x,y)}{p(y)},\quad p(t_{x}|t_{y})=\frac{\sum_{X,Y}p(t_{y}|y)p(t_{x}|x)p(x,y)}{\sum_{Y}p(t_{y}|y)p(y)}. \tag{12}\]

Similar to IB, this formal solution can be iterated starting from an initial guess for both \(p(t_{x}|x)\) and \(p(t_{y}|y)\). Interestingly, we note parenthetically that, unlike for IB, there are now exponentially many, \(\sim 2^{|T_{X}|+|T_{Y}|}\), trivial fixed points for this iteration scheme (here \(|\cdot|\) denotes the cardinality of the variable, so that the rest of our discussion focuses on random variables defined on discrete, finite sets of possible values). For example, a uniform distribution for both random mappings, \(p(t_{x}|x)=1/|T_{X}|\) and \(p(t_{y}|y)=1/|T_{Y}|\), is a fixed point of the iteration with the cost of zero, even though a uniform mapping, independent of the conditioning variable, is clearly not a useful compression. Furthermore, all distributions, where \(p(t_{x}|x)\) is zero for several values of \(t_{x}\) and uniform otherwise, are also trivial fixed points. There are exponentially many distributions of this type. When \(\alpha_{x}=\alpha_{y}=1\), these distributions are part of a larger class of trivial fixed points, which includes all mappings independent of the data, i. e., \(p(t_{x}|x)=A(t_{x})\) and \(p(t_{y}|y)=B(t_{y})\). One can easily verify that the first derivative of \(L_{\mathrm{GSIB}}\) vanishes for these solutions. The second derivative, which controls if these solutions are minima or maxima, is:

\[\frac{\partial^{2}L_{\mathrm{GSIB}}}{\partial p(t_{x}|x)\partial p(t_{x}^{\prime}|x^{\prime})}=\frac{-p(x)}{A(t_{x})}(p(x)-\alpha_{X})\delta(x,x^{\prime})\delta(t_{x},t_{x}^{\prime})-\frac{p(x)p(x^{\prime})}{A(t_{x})}\delta(t_{x},t_{x}^{\prime})(1-\delta(x,x^{\prime})), \tag{13}\]

(with a similar expression for the compression of \(Y\)). These trivial fixed points are maxima when \(\alpha_{x}<p(x)\), and \(\alpha_{y}<p(y)\). When \(\alpha_{x}>p(x)\) and \(\alpha_{y}>p(y)\), such as in the case of SIB, when \(\alpha_{X}=\alpha_{Y}=1\), the trivial fixed points are saddles. Thus solutions found by the iterative algorithm must be viewed with suspicion, and one should always verify if the algorithm got trapped by one of the trivial solutions.
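As an illustration of how Eqs. (9)-(12) might be iterated in practice, the following sketch runs the self-consistent updates on a small synthetic joint distribution. The distribution, cardinalities, and parameter values are our own illustrative choices, not taken from the paper, and, per the warning above, the result should always be checked against the trivial fixed points.

```python
import numpy as np

rng = np.random.default_rng(3)

# Small synthetic joint distribution p(x, y) with block structure
# (hypothetical sizes; numerical guards are omitted for brevity).
nX, nY, nTx, nTy = 6, 6, 2, 2
alpha_x = alpha_y = 1.0      # SIB limit; alpha -> 0 approaches DSIB
beta = 5.0
pxy = np.full((nX, nY), 0.2) + 5.0 * np.kron(np.eye(2), np.ones((3, 3)))
pxy /= pxy.sum()
px, py = pxy.sum(1), pxy.sum(0)

def kl(p, q):
    m = p > 0
    return np.sum(p[m] * np.log(p[m] / q[m]))

# Random stochastic initializations of the two encoders, q(t|.) indexed [t, .].
q_tx_x = rng.random((nTx, nX)); q_tx_x /= q_tx_x.sum(0)
q_ty_y = rng.random((nTy, nY)); q_ty_y /= q_ty_y.sum(0)

for it in range(200):
    p_tx = q_tx_x @ px                       # p(t_x)
    p_ty = q_ty_y @ py                       # p(t_y)
    p_ty_x = (q_ty_y @ pxy.T) / px           # p(t_y | x),   shape (nTy, nX)
    p_txty = q_tx_x @ pxy @ q_ty_y.T         # p(t_x, t_y),  shape (nTx, nTy)
    p_ty_tx = (p_txty / p_tx[:, None]).T     # p(t_y | t_x), shape (nTy, nTx)

    # Eq. (9): update p(t_x | x).
    log_new = np.empty((nTx, nX))
    for tx in range(nTx):
        for x in range(nX):
            log_new[tx, x] = (np.log(p_tx[tx])
                              - beta * kl(p_ty_x[:, x], p_ty_tx[:, tx])) / alpha_x
    q_tx_x = np.exp(log_new - log_new.max(0)); q_tx_x /= q_tx_x.sum(0)

    # Eq. (10): symmetric update for p(t_y | y).
    p_tx_y = (q_tx_x @ pxy) / py             # p(t_x | y)
    p_txty = q_tx_x @ pxy @ q_ty_y.T
    p_tx_ty = p_txty / p_txty.sum(0)         # p(t_x | t_y), shape (nTx, nTy)
    log_new = np.empty((nTy, nY))
    for ty in range(nTy):
        for y in range(nY):
            log_new[ty, y] = (np.log(p_ty[ty])
                              - beta * kl(p_tx_y[:, y], p_tx_ty[:, ty])) / alpha_y
    q_ty_y = np.exp(log_new - log_new.max(0)); q_ty_y /= q_ty_y.sum(0)

print("p(t_x|x):\n", q_tx_x.round(3))
print("p(t_y|y):\n", q_ty_y.round(3))
```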
In the limit of \(\alpha_{X},\alpha_{Y}\to 0\), the exponent in the formal solution blows up. As a result, one obtains a deterministic mapping from uncompressed variables to their compressions:

\[p(t_{x}|x) =\delta(t_{x},\tau_{x}(x)), \tag{14}\]
\[\tau_{x}(x) =\text{argmax}_{t_{x}}\left[\ln p(t_{x})-\beta D_{\mathrm{KL}}(p(t_{y}|x)||p(t_{y}|t_{x}))\right], \tag{15}\]
\[p(t_{y}|y) =\delta(t_{y},\tau_{y}(y)), \tag{16}\]
\[\tau_{y}(y) =\text{argmax}_{t_{y}}\left[\ln p(t_{y})-\beta D_{\mathrm{KL}}(p(t_{x}|y)||p(t_{x}|t_{y}))\right]. \tag{17}\]

This is the Deterministic SIB (DSIB).

## 3 Results

To show that GSIB is more data efficient than two GIBs applied independently to \(X\) and to \(Y\), we notice that, in practical applications, all of the information and entropy terms in the loss functions must be estimated from data. Estimation of information-theoretic quantities is a hard task, potentially as hard as estimating the underlying distributions themselves, largely due to the estimation bias (Antos and Kontoyiannis, 2001; Paninski, 2003). Crucially, for a DR algorithm to produce meaningful results, the empirically estimated loss function must accurately represent the true loss function, which is unknown to us. Thus the question of which algorithm is more data efficient is equivalent to a different question: for which of the considered IB algorithms does the estimate of the respective loss function converge faster to its true value as the sample size grows? A lot of ink has been expended on the problem of mutual information estimation (Roulston, 1999; Kraskov et al., 2004; Goebel et al., 2005; Belghazi et al., 2018). Here we do not try to produce better estimation techniques. Instead we focus on discrete random variables with finite cardinalities, and we use the simplest estimator, known as plug-in, naive, or maximum likelihood estimator, for estimation of all of the terms in the loss functions (Roulston, 1999; Paninski, 2003). For this estimator, which we denote with \(\hat{\cdot}\), the probability distribution \(p(x)\) is estimated by its maximum likelihood (ML) value, namely the frequency of an outcome in the sample, \(\hat{p}(x)=n(x)/N\), where \(n(x)\) is the number of times \(x\) occurred, and \(N\) is the total number of samples. Then \(\hat{H}\), \(\hat{I}\), and \(\hat{L}\) are all given by plugging in \(\hat{p}\) instead of \(p\) in the expression for these quantities. Shamir et al. (2010) showed that, while the ML estimator of mutual information \(\hat{I}(X,Y)\) is guaranteed to converge to the true value only when \(N\gg|X||Y|\), the ML estimator of the loss function, \(\hat{L}_{\rm IB}\), converges at much smaller \(N\), making IB more practical than one would naively think. Here we continue this line of analysis and examine the convergence properties of \(\hat{L}_{\rm GSIB}\) and \(\hat{L}_{\rm GIB}\) when both \(|X|,|Y|\gg 1\) in two different ways. First, we extend the derivations of Shamir et al. (2010) and bound the error of estimating each information-theoretic term in each of the loss functions from data. This allows us to build bounds on how close \(L\) and \(\hat{L}\) are, and we can compare these bounds for GSIB and GIBs. Second, inspired by Still and Bialek (2004), we calculate the standard deviation and bias of \(L-\hat{L}\) for different versions of the IB.
By both measures, for \(|X|,|Y|\gg 1\), \(\hat{L}_{\rm GSIB}\) will have a smaller bias than \(\hat{L}_{\rm GIB}\). This is our main result, allowing us to claim that the symmetric version of IB is more data efficient.

### Bounds on The Loss Functions

The loss functions \(L_{\rm GSIB}\) and \(L_{\rm GIB}\) consist of multiple mutual information and entropy terms. We calculate bounds on the fluctuations between each of these terms and their estimators, and then combine them into a single estimate of the fluctuations of each loss function. We do this below in detail for \(I(T_{X};X)\) and its estimator \(\hat{I}(T_{X};X)\). Analysis of the other terms is similar. Furthermore, for our analysis, we assume that the mappings \(p(t_{x}|x)\) and \(p(t_{y}|y)\) are fixed. The expressions we develop will hold for all mappings, not just the mappings that minimize their respective loss functions. To estimate \(|I(T_{X};X)-\hat{I}(T_{X};X)|\), we compare both terms to the expected value of the empirical information \(E(\hat{I}(T_{X};X))\):

\[|\hat{I}(T_{X};X)-I(T_{X};X)|=|\hat{I}(T_{X};X)-E(\hat{I}(T_{X};X))+E(\hat{I}(T_{X};X))-I(T_{X};X)|\\ \leq|\hat{I}(T_{X};X)-E(\hat{I}(T_{X};X))|+|I(T_{X};X)-E(\hat{I}(T_{X};X))|. \tag{18}\]

This is the usual bias-variance decomposition for bounds on the magnitude of fluctuations, with the first term in Eq. (18) representing the variance, and the second the bias. We now bound the bias and the variance terms separately. First we focus on the variance (first) term in Eq. (18). For this, we follow Shamir et al. (2010) and rely on McDiarmid's inequality. This concentration inequality bounds the probability that a function of an empirical sample deviates from its expected value by more than a given amount. The bound is constructed from bounds on the change in the function due to changes in individual data points:

\[P\left[|f(x_{1},x_{2},\ldots,x_{N})-E\left(f(x_{1},x_{2},\ldots,x_{N})\right)|\geq\epsilon\right]\leq 2\exp\left[-\frac{2\epsilon^{2}}{\sum c_{i}^{2}}\right]\equiv\delta_{1}, \tag{19}\]
\[\text{where }\quad|f(x_{1},\ldots,x_{i},\ldots,x_{N})-f(x_{1},\ldots,x_{i}^{\prime},\ldots,x_{N})|\leq c_{i}. \tag{20}\]

Thus, to use the inequality, we consider the maximum change in \(\hat{I}\) if a single datum is changed. That is, suppose the data point \((x,y)\) is replaced by another data point \((x^{\prime},y^{\prime})\). Then the maximum likelihood estimator at the point \((x,y)\), \(\hat{p}(x,y)\), decreases by \(1/N\). In contrast, \(\hat{p}(x^{\prime},y^{\prime})\) increases by \(1/N\), and the estimate does not change at all other \(x\), \(y\) values. Similarly, the marginals \(\hat{p}(x)\), \(\hat{p}(x^{\prime})\), \(\hat{p}(y)\), and \(\hat{p}(y^{\prime})\) change by at most \(1/N\), while marginals at all other values remain the same. For a fixed compression mapping, we calculate \(\hat{p}(t_{x})=\sum_{x}p(t_{x}|x)\hat{p}(x)\). We see that, with a single datum moving, \(\hat{p}(t_{x})\) can change by at most \(|p(t_{x}|x^{\prime})-p(t_{x}|x)|/N\leq 1/N\) for each \(t_{x}\in T_{X}\). Similarly \(\hat{p}(t_{y})\) can change by at most \(1/N\) for each \(t_{y}\in T_{Y}\). We now express the relevant mutual information in terms of entropy, \(\hat{I}_{\alpha_{X}}(T_{X};X)=\hat{H}(T_{X})-\alpha_{X}\hat{H}(T_{X}|X)\), where the entropy \(\hat{H}(T_{X})\) depends on the probability density \(\hat{p}(t_{x})\):

\[\hat{H}(T_{X})=-\sum_{t_{x}}\hat{p}(t_{x})\log\hat{p}(t_{x}). \tag{21}\]
The change in entropy from moving a single datum can be bounded using the following inequality, again borrowed from Shamir et al. (2010):

\[|(a+\delta)\log(a+\delta)-a\log a|\leq\log(N)/N \tag{22}\]

for any positive integer \(N\) and for any \(a\in[0,1-1/N]\) and \(\delta\leq 1/N\). We apply this identity for each term in the sum in Eq. (21) and find that the change in \(\hat{H}(T_{X})\) is bounded by \(|T_{X}|\log N/N\). We bound the change in \(\hat{H}(T_{X}|X)=\sum_{x}\hat{p}(x)H(T_{X}|X=x)\). \(H(T_{X}|X=x)\) only depends on \(p(t_{x}|x)\), which we consider fixed. \(\hat{p}(x)\) changes by at most \(1/N\) for two values of \(x\). Thus the largest change is \(|H(T_{X}|x^{\prime})-H(T_{X}|x)|/N\leq|\max(H(T_{X}|x^{\prime}),H(T_{X}|x))|/N\leq\log|T_{X}|/N\). The last inequality comes from \(H(T_{X}|X=x)\leq\log|T_{X}|\), with the bound achieved for the uniform distribution. Finally, combining the results for both entropy terms, we see that \(\hat{I}_{\alpha_{X}}(T_{X};X)\) can change by at most \((|T_{X}|\log N+\alpha_{X}\log|T_{X}|)/N\). Now we apply the McDiarmid inequality, Eqs. (19, 20), to finally obtain that, with probability of at least \(1-\delta_{1}\):

\[|\hat{I}_{\alpha_{X}}(T_{X};X)-E(\hat{I}_{\alpha_{X}}(T_{X};X))|\leq(|T_{X}|\log N+\alpha_{X}\log|T_{X}|)\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}. \tag{23}\]

This generalizes the result of Shamir et al. (2010) to \(\alpha_{X}\neq 1\). Similarly, we get that, with probability of at least \(1-\delta_{1}\),

\[|\hat{I}_{\alpha_{Y}}(T_{Y};Y)-E(\hat{I}_{\alpha_{Y}}(T_{Y};Y))|\leq(|T_{Y}|\log N+\alpha_{Y}\log|T_{Y}|)\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}. \tag{24}\]

This leaves us with the final bound on the difference between the ML estimators of various informations and their expectations, namely for \(\hat{I}(T_{X};T_{Y})\); this quantity is not analysed in Shamir et al. (2010), but we proceed very similarly. First, we calculate how much this term changes from a single datum being moved by using the identity \(\hat{I}(T_{X};T_{Y})=\hat{H}(T_{X})+\hat{H}(T_{Y})-\hat{H}(T_{X},T_{Y})\). Luckily we already calculated that \(\hat{H}(T_{X})\) changes by, at most, \(|T_{X}|\log N/N\), and \(\hat{H}(T_{Y})\) changes by, at most, \(|T_{Y}|\log N/N\). We are left to calculate how much \(\hat{H}(T_{X},T_{Y})\) can change. We write \(\hat{H}(T_{X},T_{Y})=-\sum_{t_{x},t_{y}}\hat{p}(t_{x},t_{y})\log\hat{p}(t_{x},t_{y})\), where \(\hat{p}(t_{x},t_{y})=\sum_{x,y}p(t_{x}|x)p(t_{y}|y)\hat{p}(x,y)\). Therefore, \(\hat{p}(t_{x},t_{y})\) can change by, at most, \(1/N\) for all \((t_{x},t_{y})\in(T_{X},T_{Y})\). Thus, \(\hat{H}(T_{X},T_{Y})\) can change by at most \(|T_{X}||T_{Y}|\log N/N\). We again use McDiarmid's inequality and determine that, with probability of at least \(1-\delta_{1}\), the difference between the ML estimate \(\hat{I}(T_{X};T_{Y})\) and its expected value is bounded by

\[|\hat{I}(T_{X};T_{Y})-E(\hat{I}(T_{X};T_{Y}))|\leq((|T_{X}|+|T_{Y}|+|T_{X}||T_{Y}|)\log N)\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}. \tag{25}\]

Now we need to calculate bounds on the bias (second) terms in Eq. (18) and similar expressions for the other information quantities.
For this, we use results from Paninski (2003), namely:

\[|H(T_{X})-E(\hat{H}(T_{X}))| \leq\log\left(1+\frac{|T_{X}|-1}{N}\right)\leq\frac{|T_{X}|-1}{N}, \tag{26}\]
\[|H(T_{Y})-E(\hat{H}(T_{Y}))| \leq\log\left(1+\frac{|T_{Y}|-1}{N}\right)\leq\frac{|T_{Y}|-1}{N}, \tag{27}\]
\[|H(T_{X},T_{Y})-E(\hat{H}(T_{X},T_{Y}))| \leq\log\left(1+\frac{|T_{X}||T_{Y}|-1}{N}\right)\leq\frac{|T_{X}||T_{Y}|-1}{N}. \tag{28}\]

Since we consider the mapping \(p(t_{x}|x)\) as fixed and known for this analysis, there is no bias \(H(T_{X}|X)-E(\hat{H}(T_{X}|X))\). This means that the bias \(|I_{\alpha_{X}}(T_{X};X)-E(\hat{I}_{\alpha_{X}}(T_{X};X))|\) only comes from the \(|H(T_{X})-E(\hat{H}(T_{X}))|\) term and does not have an \(|X|\) or \(\alpha_{x}\) dependence. Putting the bounds on deviations of the estimates from their expectations and of expectations from the true values together, we get bounds on fluctuations of various information quantities that contribute to the GSIB loss function

\[|I_{\alpha_{X}}(T_{X};X)-\hat{I}_{\alpha_{X}}(T_{X};X)|\leq(|T_{X}|\log N+\alpha_{X}\log|T_{X}|)\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}+\frac{|T_{X}|-1}{N}, \tag{29}\]
\[|I_{\alpha_{Y}}(T_{Y};Y)-\hat{I}_{\alpha_{Y}}(T_{Y};Y)|\leq(|T_{Y}|\log N+\alpha_{Y}\log|T_{Y}|)\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}+\frac{|T_{Y}|-1}{N}, \tag{30}\]
\[|I(T_{X};T_{Y})-\hat{I}(T_{X};T_{Y})|\leq(|T_{X}|+|T_{Y}|+|T_{X}||T_{Y}|)\log N\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}+\frac{|T_{X}|-1}{N}+\frac{|T_{Y}|-1}{N}+\frac{|T_{X}||T_{Y}|-1}{N}\\ =((|T_{X}|+1)(|T_{Y}|+1)-1)\log N\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}+\frac{(|T_{X}|+1)(|T_{Y}|+1)-4}{N}. \tag{31}\]

For comparison, the term \(|I_{\alpha_{x}}(T_{X};X)-\hat{I}_{\alpha_{x}}(T_{X};X)|\) in the error of the GIB loss function has the same bounds as the corresponding term in GSIB, Eq. (29). Further, the term \(|I(T_{X};Y)-\hat{I}(T_{X};Y)|\) in the error of the GIB loss function is the same as for the traditional IB. Shamir et al. (2010) calculated it to be:

\[|I(T_{X};Y)-\hat{I}(T_{X};Y)|\leq(3|T_{X}|+2)\log N\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}+\frac{(|T_{X}|+1)(|Y|+1)-4}{N}. \tag{32}\]

All of these bounds have a similar structure. The term proportional to \(1/\sqrt{N}\) comes from the variance of the estimators. Its contribution is controlled by \(\delta_{1}\), so that if a high certainty is required (\(\delta_{1}\to 0\)), then these terms are large. The terms proportional to \(1/N\) are the bias terms. The most crucial observation is that, even though the data comes from the joint probability distribution \(p(x,y)\), which has the cardinality of \(|X||Y|\), the terms proportional to this joint cardinality do not appear in the bounds, similar to Shamir et al. (2010). In other words, one does not need to have the joint distribution well-sampled to apply any of the IB variants. The second observation from the bounds is that the deterministic versions, \(\alpha=\alpha_{X}=\alpha_{Y}=0\), of both the SIB and the IB have slightly tighter bounds than their generalized counterparts, including the original IB versions with \(\alpha=\alpha_{X}=\alpha_{Y}=1\). The tightening does not affect the bias component of the bounds, but provides a small correction to the variance, eliminating the terms similar to \(\alpha\log|T_{X}|\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}\), which are subdominant in the size of the reduced representations compared to the terms like \(|T_{X}|\log N\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}\).
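As a purely illustrative numerical check of how such bias terms behave (our own sketch, with hypothetical cardinalities and fixed, randomly chosen deterministic mappings; it is not an experiment reported in this paper), one can compare the plug-in estimates of \(I(T_{X};T_{Y})\) and \(I(T_{X};Y)\) against their true values for a known joint distribution:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: large uncompressed alphabets, small compressed ones,
# and fixed deterministic mappings tau_x, tau_y.
nX = nY = 100
nT = 4
pxy = rng.random((nX, nY)) ** 3
pxy /= pxy.sum()
tau_x = rng.integers(0, nT, nX)
tau_y = rng.integers(0, nT, nY)

def mi(p):
    """Mutual information (nats) of a 2-D joint distribution."""
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    m = p > 0
    return np.sum(p[m] * np.log(p[m] / (px @ py)[m]))

def compress(p, rows=None, cols=None):
    """Aggregate a joint distribution through deterministic mappings."""
    q = p
    if rows is not None:
        q = np.stack([q[rows == t].sum(0) for t in range(nT)])
    if cols is not None:
        q = np.stack([q[:, cols == t].sum(1) for t in range(nT)], axis=1)
    return q

true_tt = mi(compress(pxy, tau_x, tau_y))   # I(T_X; T_Y)
true_ty = mi(compress(pxy, tau_x, None))    # I(T_X; Y)

N, reps = 2000, 50
bias_tt = bias_ty = 0.0
for _ in range(reps):
    samples = rng.choice(nX * nY, size=N, p=pxy.ravel())
    phat = np.bincount(samples, minlength=nX * nY).reshape(nX, nY) / N
    bias_tt += mi(compress(phat, tau_x, tau_y)) - true_tt
    bias_ty += mi(compress(phat, tau_x, None)) - true_ty
bias_tt /= reps; bias_ty /= reps

print(f"average bias of I-hat(T_X;T_Y): {bias_tt:+.4f} nats")
# The second bias grows with |Y|, cf. the 1/N terms in Eqs. (31) and (32).
print(f"average bias of I-hat(T_X;Y) : {bias_ty:+.4f} nats")
```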
We now compare the data efficiency of GSIB with that of two GIBs applied to reduce \(X\) and \(Y\) independently. We do so by bounding the error of the estimates of the loss for the GSIB vs. for two GIBs run in parallel. The GSIB loss function error is:

\[|L_{\rm GSIB}-\hat{L}_{\rm GSIB}|\leq\left((|T_{X}|+|T_{Y}|)\log N+\alpha_{X}\log|T_{X}|+\alpha_{Y}\log|T_{Y}|\right)\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}\\ +\beta\left((|T_{X}|+1)\left(|T_{Y}|+1\right)-1\right)\log N\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}\\ +\frac{|T_{X}|-1}{N}+\frac{|T_{Y}|-1}{N}+\beta\frac{(|T_{X}|+1)(|T_{Y}|+1)-4}{N}. \tag{33}\]

The combined loss of two GIBs reducing \(X\) and \(Y\) independently is:

\[|L_{\rm GIB}-\hat{L}_{\rm GIB}|\leq\left((|T_{X}|+|T_{Y}|)\log N+\alpha_{X}\log|T_{X}|+\alpha_{Y}\log|T_{Y}|\right)\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}\\ +\beta\left(3|T_{X}|+3|T_{Y}|+4\right)\log N\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}\\ +\frac{|T_{X}|-1}{N}+\frac{|T_{Y}|-1}{N}+\beta\frac{(|T_{X}|+1)(|Y|+1)+(|T_{Y}|+1)(|X|+1)-8}{N}. \tag{34}\]

We see that the dominant contribution to the variance part of the \(L_{\rm GSIB}\) bound is \(\beta|T_{X}||T_{Y}|\log N\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}\). For two GIBs run in parallel, Eq. (34) says that the dominant contributions to the variance would be \(3\beta(|T_{X}|+|T_{Y}|)\log N\frac{\sqrt{\log(2/\delta_{1})}}{\sqrt{2N}}\). That is, the two GIBs have smaller variances than GSIB for all but the smallest cardinalities of the compressed variables. However, notice that the cardinality of the compressed variables is usually not large, almost by definition, so that this loosening of the bound may be too small to notice for realistic \(N\gg 1\). The behavior of the bias contributions to the bounds is different. The leading term for GSIB is \(|T_{X}||T_{Y}|/N\), while for two GIBs it is \((|T_{X}||Y|+|X||T_{Y}|)/N\). Thus, when \(|X|,|Y|\sim N\), the GSIB can be _significantly_ more efficient than GIBs. When \(|X|,|Y|\gg N\), the bias bounds for GIBs become meaningless, but GSIB bounds do not depend on the cardinality of the data variables. This is the reason for our assertion that GSIB has better data efficiency than two GIBs run in parallel for realistic cardinalities of variables and sample sizes.

### Mean error and Mean squared error

The error bounds for the mutual information estimators must hold for worst-case underlying distributions. Thus there are many cases when the error is significantly smaller than the calculated bounds. To explore if typical errors are different from the worst-case bounds, here we calculate the mean squared error of \(L_{\rm GSIB}-\hat{L}_{\rm GSIB}\), and similarly for the GIB. As always, the mean squared error is the sum of the squared bias and the variance of the estimator

\[E(L_{\rm GSIB}-\hat{L}_{\rm GSIB})^{2}=(L_{\rm GSIB}-E(\hat{L}_{\rm GSIB}))^{2}+E((\hat{L}_{\rm GSIB}-E(\hat{L}_{\rm GSIB}))^{2}), \tag{35}\]

and similarly for the GIB. This expression is the equivalent of the bias-variance decomposition for the bounds, Eq. (18). However, instead of bounding terms, we now calculate them. For this, we decompose every mutual information term in the loss functions into the corresponding entropy components. We use the notation \(\delta h\equiv\hat{h}-h\) for any variable that is being estimated via the ML estimator.
For the ML estimator of the probability distribution \(p(x,y)\), multinomial counting statistics textbook results give

\[E(\delta p(x,y)) =0, \tag{36}\]
\[E(\delta p(x,y)\delta p(x^{\prime},y^{\prime})) =\frac{p(x,y)\delta_{x,x^{\prime}}\delta_{y,y^{\prime}}}{N}-\frac{p(x,y)p(x^{\prime},y^{\prime})}{N}. \tag{37}\]

Expectations for fluctuations of marginal distributions can be obtained by marginalizing Eqs. (36, 37). In what follows, we will focus on \(N\gg 1\), so that fluctuations \(\delta p(x,y)\) have a small relative variance. Then, to obtain expressions for the variance of entropies, we follow Still and Bialek (2004) and expand \(\hat{H}\) around the true value \(H\) for small \(\delta p\). For \(H(X)\), we get (expressions for other entropy terms are similar):

\[\hat{H}(X) =-\sum_{X}(p(x)+\delta p(x))\log(p(x)+\delta p(x))\\ =-\sum_{X}\left[p(x)\log p(x)+(\log p(x)+1)\delta p(x)+\sum_{n=2}^{\infty}\frac{(-1)^{n}(\delta p(x))^{n}}{n(n-1)p(x)^{n-1}}\right]\\ =H(X)-\sum_{X}\left[(\log p(x)+1)\delta p(x)+\sum_{n=2}^{\infty}\frac{(-1)^{n}(\delta p(x))^{n}}{n(n-1)p(x)^{n-1}}\right]. \tag{38}\]

From this, it follows that \(\delta H(X)=-\sum_{X}\left[(\log p(x)+1)\delta p(x)+\frac{(\delta p(x))^{2}}{2p(x)}+O((\delta p(x))^{3})\right]\). Noticing that terms first order in \(\delta p\) vanish under averaging, we immediately calculate \(|E(\delta H(X))|=\frac{|X|-1}{2N}\) and \(|E(\delta H(Y))|=\frac{|Y|-1}{2N}\). Similarly, because \(p(t_{x}|x)\) is fixed, we get \(|E(\delta H(X,T_{X}))|=\frac{|X|-1}{2N}\), \(|E(\delta H(Y,T_{Y}))|=\frac{|Y|-1}{2N}\). Further, \(|E(\delta H(T_{X}))|=\frac{\sum_{T_{X},X}p(t_{x}|x)p(x|t_{x})-1}{2N}\leq\frac{|T_{X}|-1}{2N}\) and \(|E(\delta H(T_{Y}))|\leq\frac{|T_{Y}|-1}{2N}\), where the inequalities come from \(p(t_{x}|x),p(t_{y}|y)\leq 1\). Combining these and similar results, we get biases of estimators of mutual information terms, which enter the GSIB loss functions:

\[|E(\delta I_{\alpha_{X}}(X,T_{X}))| \leq\frac{|T_{X}|-1}{2N}, \tag{39}\]
\[|E(\delta I_{\alpha_{Y}}(Y,T_{Y}))| \leq\frac{|T_{Y}|-1}{2N}, \tag{40}\]
\[|E(\delta I(T_{X},T_{Y}))| \leq\frac{(|T_{X}|+1)(|T_{Y}|+1)-4}{2N}. \tag{41}\]

For the terms in the GIB loss function, we similarly get

\[|E(\delta I(Y,T_{X}))| \leq\frac{(|Y|+1)(|T_{X}|+1)-4}{2N}, \tag{42}\]
\[|E(\delta I(X,T_{Y}))| \leq\frac{(|X|+1)(|T_{Y}|+1)-4}{2N}. \tag{43}\]

Note that these biases, to the two leading orders in \(\delta p\), are half of the bound on the biases obtained in the previous Section, Eqs. (29-32). Thus the same scaling analyses apply. Crucially, we again observe that the bias of the symmetric variant of GIB only depends on the cardinalities of the compressed variables and not the uncompressed ones. Hence it is much smaller than for two GIBs applied in parallel, where the bias depends on \(|X||T_{Y}|\) and \(|Y||T_{X}|\). Similarly we now calculate the mean squared error (see Appendix for details):

\[E(\delta I(X,T_{X})^{2})=\frac{1}{N}\left[\sum_{X,T_{X},T^{\prime}_{X}}p(t_{x}|x)p(t^{\prime}_{x}|x)p(x)\log\frac{p(x,t_{x})}{p(x)p(t_{x})}\log\frac{p(x,t^{\prime}_{x})}{p(x)p(t^{\prime}_{x})}-I(X,T_{X})^{2}\right]. \tag{44}\]

This expression can be simplified in two important limits. First, we consider the trivial minimum of the loss function, discussed earlier. There the mapping is uniform, \(p(t_{x}|x)=1/|T_{X}|\), so that also \(p(t_{x})=1/|T_{X}|\).
We get:

\[E(I(X,T_{X})-\hat{I}(X,T_{X}))^{2}=\sum_{X,T_{X},T^{\prime}_{X}}\frac{p(x)}{N|T_{X}|^{2}}\log\frac{p(x)/|T_{X}|}{p(x)/|T_{X}|}\log\frac{p(x)/|T_{X}|}{p(x)/|T_{X}|}-\frac{0^{2}}{N}=0. \tag{45}\]

That is, fluctuations vanish in this case. This is expected since there is no information between \(T_{X}\) and \(X\), and measuring more data points does not result in a more accurate estimate of the mutual information. The second interesting case is a "winner-take-all" mapping, \(p(t_{x}|x)=\delta(t_{x},\tau(x))\), which would correspond to a deterministic clustering of multiple values of \(x\) into one \(t_{x}\). This results in

\[E(I(X,T_{X})-\hat{I}(X,T_{X}))^{2} =\frac{1}{N}\left[\sum_{X}p(x)\log\frac{1}{p(\tau(x))}\log\frac{1}{p(\tau(x))}-I(X,T_{X})^{2}\right]\\ \leq\frac{1}{N}\left[\log(\min(|T_{X}|,|X|))^{2}-I(X,T_{X})^{2}\right]. \tag{46}\]

Thus, here the average squared error is bounded by \(\frac{(\log|T_{X}|)^{2}-I(X,T_{X})^{2}}{N}\leq\frac{(\log|T_{X}|)^{2}}{N}\), which means that the RMS error for \(I(T_{X},X)\) is \(\leq\frac{\log|T_{X}|}{\sqrt{N}}\). Similarly, the RMS errors for \(I(T_{Y},Y)\) and \(I(T_{X},T_{Y})\) are \(\leq\frac{\log|T_{Y}|}{\sqrt{N}}\) and \(\leq\frac{\log\min(|T_{X}|,|T_{Y}|)}{\sqrt{N}}\), respectively. For the traditional IB, the RMS error for \(I(T,X)\) is \(\leq\frac{\log|T|}{\sqrt{N}}\), and the RMS error for \(I(T,Y)\) is \(\leq\frac{\log|T|}{\sqrt{N}}\). Thus, the average fluctuations are small and are of the same order of magnitude for both the symmetric bottleneck and the traditional bottleneck. This means that the dominant term is the average bias. As we saw earlier, the latter can be much worse for the traditional IB than for the symmetric IB.

## 4 Conclusion

Here we defined the generalized symmetric version of the information bottleneck (GSIB). We calculated the error bounds for each term within the loss function of GSIB and of the loss functions of the traditional generalized information bottleneck (GIB). We showed that the bias in estimating the loss function, and hence the error in finding the solution to the optimization problem from a finite dataset, is smaller for the GSIB compared to applying the traditional GIB to each of the input variables in parallel. We also calculated the average error and RMS error for each of these terms, resulting in essentially the same conclusions. All of these results suggest that when the cardinalities of the measured variables \(X\) and \(Y\) are both large, and both variables require compression, then simultaneous compression is more data efficient than independently compressing each of the input variables. While making extrapolations from a simple discrete variable case to more complex scenarios is difficult, we hope that these results are only the first of many to demonstrate a more general point that _simultaneous_ dimensionality reduction is typically more data efficient than _independent_ dimensionality reduction. In fact, in a forthcoming publication, we expect to show this for a class of linear dimensionality reduction techniques for continuous variables. If true, such findings would suggest a general paradigm for efficient dimensionality reduction in complex multivariate datasets.
Since physical theories are often formulated in terms of collective, coarse-grained representations (e.g., magnetization or temperature, which are expectation values of microscopic spins or energies of molecules), existence of data efficient algorithms for finding such reduced representations bodes well for using data-driven approaches for building physical theories of complex systems. ### Acknowledgments The authors are grateful to Eslam Abdelaleem and Ahmed Roman for useful discussions and Sean Ridout for providing feedback on the manuscript. This work was supported, in part, by the Simons Investigator award and NSF Grant No. 2010524. ## 5 Appendix ### Appendix: Derivation of the Generalized Symmetric Bottleneck In what follows, we will derive the formal solution for the generalized symmetric bottleneck for \(p(t_{x}|x)\). The formal solution is found by minimizing the cost function, Eq. (8) with respect to \(p(t_{x}|x)\), subject to the normalization constraint. For this, we calculate the following useful derivatives: \[\frac{\partial p(t_{x})}{\partial p(t^{\prime}_{x}|x^{\prime})} =\frac{\partial}{\partial p(t^{\prime}_{x}|x^{\prime})}\sum_{X}p (t_{x}|x)p(x)=\delta(t_{x},t^{\prime}_{x})p(x^{\prime}), \tag{47}\] \[\frac{\partial p(t_{y})}{\partial p(t^{\prime}_{x}|x^{\prime})} =0,\] (48) \[\frac{\partial p(t_{x},t_{y})}{\partial p(t^{\prime}_{x}|x^{\prime })} =\frac{\partial}{\partial p(t^{\prime}_{x}|x^{\prime})}\sum_{X}p(t_{x}|x)p(x, t_{y})=\delta(t_{x},t^{\prime}_{x})p(x^{\prime},t_{y}). \tag{49}\] To enforce the normalization of \(p(t_{x}|x)\), we add a Lagrange multiplier \(\lambda\) times the normalization constraint to the cost function. With the helpful identities above, we now find the first derivative: \[\frac{\partial(L_{\rm GSIB}+\lambda(\sum_{X,T_{X}}p(t_{x}|x)p(x)-1))} {\partial p(t^{\prime}_{x}|x^{\prime})}=\] \[=\frac{\partial}{\partial p(t^{\prime}_{x}|x^{\prime})}\left[- \sum_{T_{X}}p(t_{x})\ln p(t_{x})+\alpha_{x}\sum_{X,T_{X}}p(x)p(t_{x}|x)\ln p(t_ {x}|x)\right.\] \[\qquad-\sum_{T_{Y}}p(t_{y})\ln p(t_{y})+\alpha_{y}\sum_{Y,T_{Y}}p (y)p(t_{y}|y)\ln p(t_{y}|y)\] \[\qquad\left.-\beta\sum_{T_{X},T_{Y}}p(t_{x},t_{y})\ln\frac{p(t_{x},t_{y})}{p(t_{x})p(t_{y})}+\lambda\left(\sum_{X,T_{X}}p(t_{x}|x)p(x)-1\right)\right]\] \[=-p(x^{\prime})\ln p(t^{\prime}_{x})-p(x^{\prime})+\alpha_{x}[p (x^{\prime})\ln p(t^{\prime}_{x}|x^{\prime})+p(x^{\prime})]\] \[\qquad-\beta\sum_{T_{Y}}p(x^{\prime},t_{y})\ln\frac{p(t^{\prime}_ {x},t_{y})}{p(t^{\prime}_{x})p(t_{y})}+\lambda p(x^{\prime})\] \[=-p(x^{\prime})\left[\ln p(t^{\prime}_{x})+1-\lambda-\alpha_{x} \left(\ln p(t^{\prime}_{x}|x^{\prime})+1\right)\right.\] \[\qquad\left.+\beta\sum_{T_{Y}}p(t_{y}|x^{\prime})\ln\frac{p(t_{y} |t^{\prime}_{x})}{p(t_{y})}\frac{p(t_{y}|x^{\prime})}{p(t_{y}|x^{\prime})}\right]\] \[=-p(x^{\prime})\left[\ln p(t^{\prime}_{x})\right)+1-\lambda- \alpha_{x}\left(\ln p(t^{\prime}_{x}|x^{\prime})+1\right)\] \[\qquad+\beta\sum_{T_{Y}}p(t_{y}|x^{\prime})\ln\frac{p(t_{y}|t^{ \prime}_{x})p(t_{y}|x^{\prime})}{p(t_{y})p(t_{y}|x^{\prime})}\Bigg{]}\] \[=-p(x^{\prime})\left[\ln p(t^{\prime}_{x})+1-\lambda-\alpha_{x} \left(\ln p(t^{\prime}_{x}|x^{\prime})+1\right)\right.\] \[\qquad\left.+\beta\sum_{T_{Y}}p(t_{y}|x^{\prime})\left(\ln\frac{p (t_{y}|x^{\prime})}{p(t_{y})}-\ln\frac{p(t_{y}|x^{\prime})}{p(t_{y}|t^{\prime} _{x})}\right)\right]\] \[=-p(x^{\prime})\left[\ln p(t^{\prime}_{x})+1-\lambda-\alpha_{x} \left(\ln p(t^{\prime}_{x}|x^{\prime})+1\right)\right.\] \[\qquad\left.+\beta D_{\rm KL}(p(t_{y}|x^{\prime})||p(t_{y}))- 
\beta D_{\rm KL}(p(t_{y}|x^{\prime})||p(t_{y}|t^{\prime}_{x}))\right]. \tag{50}\] We now find the minimum of the cost function subject to the constraint that \(p(t_{x}|x)\) is normalized by setting this derivative to zero and solving for \(p(t^{\prime}_{x}|x^{\prime})\). Doing this, we find a formal solution: \[p(t^{\prime}_{x}|x^{\prime})=\frac{\exp\left[\frac{1}{\alpha_{x}}\left(\ln p( t^{\prime}_{x})-\beta D_{\rm KL}(p(t_{y}|x^{\prime})||p(t_{y}|t^{\prime}_{x}) \right)\right]}{Z_{x}(x^{\prime},\alpha_{x},\beta)}, \tag{51}\] where \(Z_{x}(x^{\prime},\alpha_{x},\beta)=\exp\left[-1+\lambda+\alpha_{x}-\beta D_{ \rm KL}(p(t_{y}|x^{\prime})||p(t_{y}))\right]\), and \(\lambda\) is chosen such that \(p(t^{\prime}_{x}|x^{\prime})\) is normalized. Notice that the normalization constant \(Z_{x}\) is independent of \(t_{y}\) and \(t^{\prime}_{x}\). It only depends on \(x^{\prime}\), \(\alpha_{x}\), and \(\beta\). The same procedure can be followed to find the solution of the generalized symmetric information bottleneck for \(p(t_{y}|y)\). \[p(t^{\prime}_{y}|y^{\prime})=\frac{\exp\left[\frac{1}{\alpha_{y}}\left(\ln p(t^ {\prime}_{y})-\beta D_{\rm KL}(p(t_{x}|y^{\prime})||p(t_{x}|t^{\prime}_{y}) \right)\right]}{Z_{y}(y^{\prime},\alpha_{y},\beta)}, \tag{52}\] ### Appendix: Mean Error Here we make explicit the calculations started in Section 3.2. Using Eq. (38) from the main text we, find the expected bias to be: \[|E(\delta H(X))| =\sum_{X}\frac{E(\delta p(x)^{2})}{2p(x)}=\sum_{X}\frac{E(\sum_{Y} \delta p(x,y))^{2}}{2\sum_{Y}p(x,y)}\] \[=\sum_{X}\frac{\sum_{Y,Y^{\prime}}E(\delta p(x,y)\delta p(x,y^{ \prime}))}{2\sum_{Y}p(x,y)}\] \[=\sum_{X}\frac{\sum_{Y}p(x,y)-\sum_{Y,Y^{\prime}}p(x,y)p(x,y^{ \prime})}{2N\sum_{Y}p(x,y)}\] \[=\sum_{X}\frac{p(x)-p^{2}(x)}{2Np(x)}=\frac{|X|-1}{2N}. \tag{53}\] Similarly, \(|E(\delta H(Y))|=\frac{|Y|-1}{2N}\), and \(|E(\delta H(X,T_{X}))|=\frac{|X|-1}{2N}\). Now we write: \[|E(\delta H(T_{X}))| =\sum_{T_{X}}\frac{E(\delta p(t_{x})^{2})}{2p(t_{x})}=\sum_{T_{X} }\frac{E(\sum_{X,Y}\delta p(t_{x}|x)p(x,y))^{2}}{2\sum_{X,Y}p(x,y)}\] \[=\sum_{T_{X}}\frac{\sum_{X,X^{\prime},Y,Y^{\prime}}E(p(t_{x}|x)p( t_{x}|x^{\prime})\delta p(x,y)\delta p(x^{\prime},y^{\prime}))}{2\sum_{X,Y}p(t_{x}|x )p(x,y)}\] \[=\sum_{T_{X}}\frac{\sum_{X,Y}p(t_{x}|x)^{2}p(x,y)-\sum_{X,X^{ \prime},Y,Y^{\prime}}p(t_{x}|x)p(t_{x}|x^{\prime})p(x,y)p(x^{\prime},y^{ \prime})}{2N\sum_{X,Y}p(t_{x}|x)p(x,y)}\] \[=\sum_{T_{X}}\frac{\sum_{X}p(t_{x}|x)^{2}p(x)-p(t_{x})^{2}}{2Np(t _{x})}=\sum_{T_{X}}\frac{\sum_{X}p(t_{x}|x)p(x|t_{x})-p(t_{x})}{2N}\] \[=\frac{\sum_{T_{X},X}[p(t_{x}|x)p(x|t_{x})]-1}{2N}\leq\frac{|T_{X }|-1}{2N}, \tag{54}\] where the inequality comes from \(p(t|x)\leq 1\), so that \(p(t|x)^{2}p(x)\leq p(t|x)p(x)\). We can combine these results to find the overall bias for \(\hat{I}(X,T_{X})\): \[|E(\delta I(X,T_{X}))| =|E(\delta H(X))+E(\delta H(T_{X}))-E(\delta H(X,T_{X}))|\] \[=\frac{|X|-1}{2N}+\frac{\sum_{T_{X},X}p(t_{x}|x)p(x|t_{x})-1}{2N} -\frac{|X|-1}{2N}\] \[=\frac{\sum_{T_{X},X}p(t_{x}|x)p(x|t_{x})-1}{2N}\leq\frac{|T_{X} |-1}{2N}. \tag{55}\] Similarly, \[|E(\delta I(Y,T_{Y}))| =|E(\delta H(Y))+E(\delta H(T_{Y}))-E(\delta H(Y,T_{Y}))|\] \[=\frac{|Y|-1}{2N}+\frac{\sum_{T_{Y},Y}p(t_{y}|y)p(y|t_{y})-1}{2N} -\frac{|Y|-1}{2N}\] \[=\frac{\sum_{T_{Y},Y}p(t_{y}|y)p(y|t_{y})-1}{2N}\leq\frac{|T_{Y} |-1}{2N}. 
\tag{56}\] Finally, we calculate the bias for \(\hat{I}(T_{X},T_{Y})\): \[|E(\delta I(T_{X},T_{Y}))|= |E(\delta H(T_{X}))+E(\delta H(T_{Y}))-E(\delta H(T_{X},T_{Y}))|\] \[= \frac{\sum_{T_{X},X}p(t_{x}|x)p(x|t_{x})-1}{2N}+\frac{\sum_{T_{Y}, Y}p(t_{y}|y)p(y|t_{y})-1}{2N} \tag{57}\] \[-\frac{\sum_{T_{X},T_{Y},X,Y}p(t_{x},t_{y}|x,y)p(x,y|t_{x},t_{y}) -1}{2N}, \tag{58}\] and \[|E(\delta I(T_{X},T_{Y}))| \leq|E(\delta H(T_{X}))|+|E(\delta H(T_{Y}))|+|E(\delta H(T_{X},T_ {Y}))|\] \[\leq\frac{|T_{X}|-1}{2N}+\frac{|T_{Y}|-1}{2N}+\frac{|T_{X}||T_{Y} |-1}{2N}. \tag{59}\] We can perform similar calculations for the original bottleneck to obtain: \[|E(\delta I(Y,T))| \leq|E(\delta H(Y))|+|E(\delta H(T))|+|E(\delta H(Y,T))|\] \[=\frac{|Y|-1}{2N}+\frac{\sum_{T,X}p(t|x)p(x|t)-1}{2N}+\sum_{Y,T} \frac{\sum_{X}p(t|x)p(x|t,y)}{2N}-\frac{1}{2N}\] \[\leq\frac{|Y|-1}{2N}+\frac{|T|-1}{2N}+\frac{|Y||T|-1}{2N}. \tag{60}\] ### Appendix: Mean Squared Error Using a method inspired by Still and Bialek (2004), we start by calculating the expected squared error for the mutual information between two arbitrary variables \(A\) and \(B\), where the estimated probabilities are different from the true ones by a small error \(\delta\), \(\hat{p}(a,b)=p(a,b)+\delta p(a,b)\), \(\hat{p}(a)=p(a)+\delta p(a)\) and \(\hat{p}(b)=p(b)+\delta p(b)\). First, let's calculate the mutual information to the first order in \(\delta p\): \[\tilde{I}(A,B) =\sum_{A,B}(p(a,b)+\delta p(a,b))\log\frac{p(a,b)+\delta p(a,b)}{(p( a)+\delta p(a))(p(b)+\delta p(b))}\] \[=\sum_{A,B}(p(a,b)+\delta p(a,b))\log\left[\frac{p(a,b)}{p(a)p(b)} \frac{1+\delta p(a,b)/p(a,b)}{(1+\delta p(a)/p(a))(1+\delta p(b)/p(b))}\right]\] \[=\sum_{A,B}(p(a,b)+\delta p(a,b))\left[\log\frac{p(a,b)}{p(a)p(b)} +\log\left(1+\frac{\delta p(a,b)}{p(a,b)}\right)\right.\] \[\qquad\left.-\log\left(1+\frac{\delta p(a)}{p(a)}\right)-\log \left(1+\frac{\delta p(b)}{p(b)}\right)\right]\] \[\approx\sum_{A,B}(p(a,b)+\delta p(a,b))\left[\log\frac{p(a,b)}{p( a)p(b)}+\frac{\delta p(a,b)}{p(a,b)}-\frac{\delta p(a)}{p(a)}-\frac{\delta p(b)}{p(b) }+\ldots\right]\] \[\approx\sum_{A,B}\left[\delta p(a,b)\log\frac{p(a,b)}{p(a)p(b)} +(\delta p(a,b)-p(b|a)\delta p(a)-p(a|b)\delta p(b))+\ldots\right)\right]\] \[\qquad\qquad+I(A,B)\] \[=\sum_{A,B}\delta p(a,b)\log\frac{p(a,b)}{p(a)p(b)}+\sum_{A,B} \delta p(a,b)-\sum_{A}\delta p(a)-\sum_{B}\delta p(b)+\ldots\] \[\qquad\qquad+I(A,B)\] \[=I(A,B)+\sum_{A,B}\delta p(a,b)\left(\log\frac{p(a,b)}{p(a)p(b)} -1\right)+\ldots \tag{61}\] Where in the last two lines, we used \(\sum_{B}p(b|a)=1\), \(\sum_{A}p(a|b)=1\), and \(\delta p(a)=\sum_{B}\delta p(a,b)\), \(\delta p(b)=\sum_{A}\delta p(a,b)\), respectively. Thus, we see that \(\delta I(A,B)=\sum_{A,B}\delta p(a,b)(\log\frac{p(a,b)}{p(a)p(b)}-1)\) to first order in \(\delta p(a,b)\). We can now calculate the average squared error: \[E[\delta I(A,B)^{2}] =E\left[\sum_{A,B}\delta p(a,b)\left(\log\frac{p(a,b)}{p(a)p(b)} -1\right)\right.\] \[\qquad\qquad\times\sum_{A^{\prime},B^{\prime}}\delta p(a^{\prime},b^{\prime})\left(\log\frac{p(a^{\prime},b^{\prime})}{p(a^{\prime})p(b^{ \prime})}-1\right)\right]\] \[=\sum_{A,B,A^{\prime},B^{\prime}}E\left[\delta p(a,b)\delta p(a^{ \prime},b^{\prime})\right]\] \[\qquad\qquad\times\left(\log\frac{p(a,b)}{p(a)p(b)}-1\right) \left(\log\frac{p(a^{\prime},b^{\prime})}{p(a^{\prime})p(b^{\prime})}-1\right). 
\tag{62}\] We can use this generic expression to find the squared error for the estimator of information between the variables \(X\) and \(T_{X}\), where \(\delta p(x,t_{x})=p(t_{x}|x)\delta p(x)\), and \(E(\delta p(x)\delta p(x^{\prime}))=1/N[\delta(x,x^{\prime})p(x)-p(x)p(x^{\prime})]\). We calculate \(E[\delta I(X,T_{X})^{2}]\) as follows: \[E[\delta I(X,T_{X})^{2}]\\ =\sum_{X,T,X^{\prime},T^{\prime}}E\left[\delta p(x,t_{x})\delta p( x^{\prime},t^{\prime}_{x})\right]\left(\log\frac{p(x,t_{x})}{p(x)p(t_{x})}-1 \right)\left(\log\frac{p(x^{\prime},t^{\prime}_{x})}{p(x^{\prime})p(t^{\prime} _{x})}-1\right)\\ =\sum_{X,T_{X},X^{\prime},T^{\prime}_{X}}p(t_{x}|x)p(t^{\prime}_{ x}|x^{\prime})E\left[\delta p(x)\delta p(x^{\prime})\right]\left(\log\frac{p(x,t_{x})}{p( x)p(t_{x})}-1\right)\\ \times\left(\log\frac{p(x^{\prime},t^{\prime}_{x})}{p(x^{\prime} )p(t^{\prime}_{x})}-1\right)\\ =\sum_{X,T_{X},X^{\prime},T^{\prime}_{X}}p(t_{x}|x)p(t^{\prime}_{ x}|x^{\prime})\frac{p(x)\delta(x,x^{\prime})-p(x)p(x^{\prime})}{N}\\ \times\left(\log\frac{p(x,t_{x})}{p(x)p(t_{x})}-1\right)\left( \log\frac{p(x^{\prime},t^{\prime}_{x})}{p(x^{\prime})p(t^{\prime}_{x})}-1\right) \\ =\frac{1}{N}\left[\sum_{X,T_{X},T^{\prime}_{X}}p(t_{x}|x)p(t^{ \prime}_{x}|x)p(x)\right.\\ \times\left(\log\frac{p(x,t_{x})}{p(x)p(t_{x})}\log\frac{p(x,t^{ \prime}_{x})}{p(x)p(t^{\prime}_{x})}-\log\frac{p(x,t_{x})}{p(x)p(t_{x})}-\log \frac{p(x,t^{\prime}_{x})}{p(x)p(t^{\prime}_{x})}+1\right)\right]\\ -\frac{1}{N}\left[\sum_{X,T_{X}}p(t_{x}|x)p(x)\left(\log\frac{p(x, t_{x})}{p(x)p(t_{x})}-1\right)\right.\\ \times\sum_{X^{\prime},T^{\prime}_{X}}p(t^{\prime}_{x}|x^{\prime} )p(x^{\prime})\left(\log\frac{p(x^{\prime},t^{\prime}_{x})}{p(x^{\prime})p(t_{ x})^{\prime}}-1\right)\right]\\ =\frac{1}{N}\left[\sum_{X,T_{X},T^{\prime}_{X}}p(t_{x}|x)p(t^{ \prime}_{x}|x)p(x)\log\frac{p(x,t_{x})}{p(x)p(t_{x})}\log\frac{p(x,t^{\prime} _{x})}{p(x)p(t^{\prime}_{x})}\right.\\ \left.-2I(X,T_{X})+1-(I(X,T_{X})-1)^{2}\right]\\ =\frac{1}{N}\left[\sum_{X,T_{X},T^{\prime}_{X}}p(t_{x}|x)p(t^{ \prime}_{x}|x)p(x)\log\frac{p(x,t_{x})}{p(x)p(t_{x})}\log\frac{p(x,t^{\prime} _{x})}{p(x)p(t^{\prime}_{x})}-I(X,T_{X})^{2}\right]. \tag{63}\] Now let's look at two limits when we can simplify the above expression. In the first limit, we assume that the mapping is uniform, \(p(t_{x}|x)=1/|T_{X}|\), which means that \(p(t_{x})=1/|T_{X}|\) as well. Then \[E[(I(X,T_{X})-\hat{I}(X,T_{X}))^{2}]=\sum_{X,T_{X},T^{\prime}_{X}}\frac{p(x)}{| T_{X}|^{2}}\log\frac{p(x)/|T_{X}|}{p(x)/|T_{X}|}\log\frac{p(x)/|T_{X}|}{p(x)/|T_{X}| }\frac{1}{N}-\frac{0^{2}}{N}=0. \tag{64}\] In the other limit, we assume a "winner takes all" mapping, where \(\delta(t_{x},\tau(x))\). We can reduce the expression to: \[E[\delta I(X,T_{X})^{2}]=\\ =\frac{1}{N}\left[\sum_{X,T_{X},T_{X}^{\prime}}\delta(t_{x},\tau(x) )\delta(t_{x}^{\prime},\tau(x))p(x)\log\frac{\delta(t_{x},\tau(x))}{p(t_{x})} \log\frac{\delta(t_{x}^{\prime},\tau(x))}{p(t_{x}^{\prime})}\right.\\ \left.-I(X,T_{X})^{2}\right]\\ =\frac{1}{N}\left[\sum_{X}p(x)\log\frac{1}{p(\tau(x))}\log\frac{1 }{p(\tau(x))}-I(X,T_{X})^{2}\right]\\ \leq\frac{1}{N}\left[\log(\min(|T_{X}|,|X|))^{2}-I(X,T_{X})^{2} \right]\leq\frac{1}{N}\left[\log(\min(|T_{X}|,|X|))^{2}\right]. \tag{65}\] The result for \(E[\delta I(Y,T_{Y})^{2}]\) is similar to that for \(E[\delta I(X,T_{X})^{2}]\), Eq. 
(63: \[E[\delta I(Y,T_{Y})^{2}]=\\ =\frac{1}{N}\left[\sum_{Y,T_{Y},T_{Y}^{\prime}}p(t_{y}|y)p(t_{y}^ {\prime}|y)p(y)\log\frac{p(y,t_{y})}{p(y)p(t_{y})}\log\frac{p(y,t_{y}^{\prime} )}{p(y)p(t_{y}^{\prime})}-I(Y,T_{Y})^{2}\right]. \tag{66}\] Finally we can calculate the covariance of fluctuations in the compressed variables, \(T_{X}\) and \(T_{Y}\). Here \(\delta p(t_{x},t_{y})=\sum_{X,Y}p(t_{x}|x)p(t_{y}|y)\delta p(x,y)\), and \[E[\delta p(t_{x},t_{y})\delta p(t_{x}^{\prime},t_{y}^{\prime})] =E\left[\sum_{X,Y}p(t_{x}|x)p(t_{y}|y)\delta p(x,y)\sum_{X^{\prime },Y^{\prime}}p(t_{x}^{\prime}|x^{\prime})p(t_{y}^{\prime}|y^{\prime})\delta p(x ^{\prime},y^{\prime})\right]\] \[=\sum_{X,Y,X^{\prime},Y^{\prime}}p(t_{x}|x)p(t_{y}|y)p(t_{x}^{ \prime}|x^{\prime})p(t_{y}^{\prime}|y^{\prime})E[\delta p(x,y)\delta p(x,y)]\] \[=\sum_{X,Y,X^{\prime},Y^{\prime}}p(t_{x}|x)p(t_{y}|y)p(t_{x}^{ \prime}|x^{\prime})p(t_{y}^{\prime}|y^{\prime})\] \[\qquad\times\frac{p(x,y)\delta(x,x^{\prime})\delta(y,y^{\prime}) -p(x,y)p(x^{\prime},y^{\prime})}{N}\] \[=\left[\frac{\sum_{X,Y}p(t_{x}|x)p(t_{y}|y)p(t_{x}^{\prime}|x)p(t_ {y}^{\prime}|y)p(x,y)}{N}\right]\] \[\qquad\quad-\left[\frac{p(t_{x},t_{y})p(t_{x}^{\prime},t_{y}^{ \prime})}{N}\right]. \tag{67}\] Using the previous result and Eq. (62), we find: \[E[\delta I(T_{X},T_{Y})^{2}] =\sum_{T_{X},T_{Y},T^{\prime}_{X},T^{\prime}_{Y}}E[\delta p(t_{x},t _{y})\delta p(t^{\prime}_{x},t^{\prime}_{y})]\left(\log\frac{p(t_{x},t_{y})}{p(t _{x})p(t_{y})}-1\right)\] \[\qquad\times\left(\log\frac{p(t^{\prime}_{x},t^{\prime}_{y})}{p(t ^{\prime}_{x})p(t^{\prime}_{y})}-1\right)\] \[=\sum_{T_{X},T_{Y},T^{\prime}_{X},T^{\prime}_{Y}}\left[\frac{ \sum_{X,Y}p(t_{x}|x)p(t_{y}|y)p(t^{\prime}_{x}|x)p(t^{\prime}_{y}|y)p(x,y)}{N}\right.\] \[\qquad\qquad\times\left.\log\frac{p(t_{x},t_{y})}{p(t_{x})p(t_{y} )}\log\frac{p(t^{\prime}_{x},t^{\prime}_{y})}{p(t^{\prime}_{x})p(t^{\prime}_{y })}\right]-I(T_{X},T_{Y})^{2}/N. \tag{68}\] In the "winner takes all limit", where \(p(t_{x}|x)=\delta(t_{x},\tau_{x}(x))\), and \(p(t_{y}|y)=\delta(t_{y},\tau_{y}(y))\), we find: \[E[\delta I(T_{X},T_{Y})^{2}]=\] \[\qquad=\sum_{T_{X},T_{Y},T^{\prime}_{X},T^{\prime}_{Y}}\left[ \frac{\sum_{X,Y}\delta(t_{x},\tau_{x}(x))\delta(t_{y},\tau_{y}(y))\delta(t^{ \prime}_{x},\tau_{x}(x))\delta(t^{\prime}_{y},\tau_{y}(y))p(x,y)}{N}\right.\] \[\qquad\qquad\times\left.\log\frac{p(t_{x},t_{y})}{p(t_{x})p(t_{y} )}\log\frac{p(t^{\prime}_{x},t^{\prime}_{y})}{p(t^{\prime}_{x})p(t^{\prime}_{ y})}\right]-I(T_{X},T_{Y})^{2}/N\] \[\qquad=\sum_{X,Y}\left[\frac{p(x,y)}{N}\log\left(\frac{p(\tau_{x} (x),\tau_{y}(y))}{p(\tau_{x}(x))p(\tau_{y}(y))}\right)^{2}\right]-I(T_{X},T_{ Y})^{2}/N\] \[\qquad\leq\frac{1}{N}\log\left(\min(|T_{X}|,|T_{Y}|)\right)^{2}- I(T_{X},T_{Y})^{2}/N. \tag{69}\] Here we have calculate the average bias and variance for each term in the GSIB and the GIB. We found, in general, that the variance decays as \(1/N\) and depends only on the cardinality of the compressed variables \(|T_{X}|\) and \(|T_{Y}|\). The expected bias for the GSIB depends on the cardinality of the compressed variables, while the bias for the GIB can depend on both the cardinality of the compressed variables and the cardinality of the uncompressed supervisor variables \(|X|\) and \(|Y|\).
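These bias and variance estimates are straightforward to probe in a small simulation. The sketch below draws \(N\) samples from a synthetic joint distribution, clusters them with fixed deterministic ("winner-take-all") encoders, and compares the empirical bias and RMS error of the plug-in estimate of \(I(T_{X},T_{Y})\) against the bounds of Eqs. (59) and (69). The alphabet sizes, the joint distribution, the encoders, the sample size, and the number of trials are illustrative assumptions introduced only for this demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: sizes, joint distribution and clusterings are assumptions for this demo.
nX, nY, nTx, nTy = 8, 8, 3, 3
p_xy = rng.random((nX, nY)); p_xy /= p_xy.sum()
tau_x = rng.integers(0, nTx, size=nX)        # deterministic ("winner-take-all") encoders
tau_y = rng.integers(0, nTy, size=nY)

def mutual_info(p_ab):
    pa, pb = p_ab.sum(1, keepdims=True), p_ab.sum(0, keepdims=True)
    nz = p_ab > 0
    return float((p_ab[nz] * np.log(p_ab[nz] / (pa @ pb)[nz])).sum())

def clustered_joint(p, tx, ty, kx, ky):
    """Joint distribution of (T_X, T_Y) induced by the deterministic clusterings."""
    out = np.zeros((kx, ky))
    for i in range(p.shape[0]):
        for j in range(p.shape[1]):
            out[tx[i], ty[j]] += p[i, j]
    return out

I_true = mutual_info(clustered_joint(p_xy, tau_x, tau_y, nTx, nTy))

N, trials = 200, 4000
errs = np.empty(trials)
for k in range(trials):
    p_hat = rng.multinomial(N, p_xy.ravel()).reshape(nX, nY) / N   # empirical joint
    errs[k] = mutual_info(clustered_joint(p_hat, tau_x, tau_y, nTx, nTy)) - I_true

print("mean bias of the plug-in I(T_X,T_Y):", errs.mean())
print("bias bound (|Tx||Ty|+|Tx|+|Ty|-3)/(2N):", (nTx * nTy + nTx + nTy - 3) / (2 * N))
print("RMS error:", np.sqrt((errs ** 2).mean()))
print("RMS bound log(min(|Tx|,|Ty|))/sqrt(N):", np.log(min(nTx, nTy)) / np.sqrt(N))
```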
2309.12453
On the Initial Boundary Value Problem to the Time-Fractional Wave Equation with Acoustic Boundary Conditions
This paper is concerned with the study of the well-posedness of the initial boundary value problem for the time-fractional wave equation with acoustic boundary conditions. The problem is considered in a bounded and connected domain $\Omega \subset {\mathbb{R}^{n}}$, $n \geq 2$, which includes simply connected regions. The boundary of $\Omega$ is made up of two disjoint pieces $\Gamma_{0}$ and $\Gamma_{1}$. Homogeneous Dirichlet conditions are enforced on $\Gamma_0$, while acoustic boundary conditions are considered on $\Gamma_1$. To establish our main result, we employ the Faedo-Galerkin method and successfully solve a general system of time-fractional ordinary differential equations, which extends the scope of the classical Picard-Lindel\"of theorem.
Paulo M. de Carvalho-Neto, Cícero L. Frota, Pedro G. P. Torelli
2023-09-21T19:50:22Z
http://arxiv.org/abs/2309.12453v1
On the initial boundary value problem to the time-fractional wave equation with acoustic boundary conditions ###### Abstract. This paper is concerned with the study of the well-posedeness for the initial boundary value problem to the time-fractional wave equation with acoustic boundary conditions. The problem is considered in a bounded and connected domain \(\Omega\subset\mathbb{R}^{n}\), \(n\geq 2\), which includes simply connected regions. The boundary of \(\Omega\) is made up of two disjoint pieces \(\Gamma_{0}\) and \(\Gamma_{1}.\) Homogeneous Dirichlet conditions are enforced on \(\Gamma_{0}\), while acoustic boundary conditions are considered on \(\Gamma_{1}\). To establish our main result, we employ the Faedo-Galerkin method and successfully solve a general system of time-fractional ordinary differential equations which extends the scope of the classical Picard-Lindelof theorem. Key words and phrases:fractional partial differential equation, Caputo derivative, fractional wave equation, acoustic boundary conditions 2010 Mathematics Subject Classification: 26A33, 34A08, 35L05, 35R11 \({}^{*}\)Corresponding author: [email protected] ## 1. Introduction Over the past few decades, there has been a growing interest in using fractional calculus in combination with differential equations as a powerful tool for analyzing complex systems. These systems include, among others, diffusion in nerve cells, anomalous diffusion processes in porous media, turbulent fluids, plasma, finance and others; see [11, 12, 16, 21, 32, 34] as a few examples. In the light of that, in this paper we are particulary interested to address the classical initial boundary value problem (IBVP) for the wave equation with acoustic boundary condition, when we replace the standard time derivative with its natural non-integer generalization, the Caputo fractional derivative. To be more precise, here we assume that \(\Omega\subset\mathbb{R}^{n}\) (with \(n\geq 2\)) is an open, bounded and connected set, with smooth boundary \(\Gamma\) made up of two disjoint parts \(\Gamma_{0}\) and \(\Gamma_{1}\) (\(\Gamma=\Gamma_{0}\cup\Gamma_{1}\) and \(\Gamma_{0}\cap\Gamma_{1}=\emptyset\)), both connected with positive measure and \(\nu\) denotes the unit normal vector on \(\Gamma_{1}\) pointing towards the exterior of \(\Omega\). The main subject of this work is to prove the well posedeness (existence and uniqueness of solution, as well as its countinuous dependence on initial data) to the following IBVP: \[{}^{C}\!D_{t}^{\alpha}u_{t}(x,t)-\Delta u(x,t)=0, (x,t)\in\Omega\times(0,T), \tag{1}\] \[u(x,t)=0, (x,t)\in\Gamma_{0}\times(0,T),\] (2) \[f(x)\delta_{tt}(x,t)+g(x)\delta_{t}(x,t)+h(x)\delta(x,t)=-u_{t} (x,t), (x,t)\in\Gamma_{1}\times(0,T),\] (3) \[\delta_{t}(x,t)=\frac{\partial u}{\partial\nu}(x,t), (x,t)\in\Gamma_{1}\times(0,T),\] (4) \[u(x,0)=u_{0}(x),\quad u_{t}(x,0)=u_{1}(x), x\in\Omega,\] (5) \[\delta(x,0)=\delta_{0}(x),\quad\delta_{t}(x,0)=\frac{\partial u_{ 0}}{\partial\nu}(x), x\in\Gamma_{1}, \tag{6}\] where \({}^{C}\!D_{t}^{\alpha}\) denotes the classical Caputo fractional derivative of order \(\alpha\in(0,1]\), \(\Delta\) is the Laplacian operator, \(f,g,h:\overline{\Gamma_{1}}\to\mathbb{R}\) are given functions and finally, \(u_{0},u_{1}:\Omega\to\mathbb{R}\) and \(\delta_{0}:\Gamma_{1}\to\mathbb{R}\) are the initial conditions of the system. 
In the limit case, when \(\alpha=1\), and the acoustic boundary conditions (3) and (4) are imposed on the whole boundary \(\Gamma\), we get the problem associated with a wave motion in a fluid \[u_{tt}(x,t)-\Delta u(x,t)=0, (x,t)\in\Omega\times(0,T),\] \[f(x)\delta_{tt}(x,t)+g(x)\delta_{t}(x,t)+h(x)\delta(x,t)=-u_{t}(x,t), (x,t)\in\Gamma\times(0,T),\] \[\delta_{t}(x,t)=\frac{\partial u}{\partial\nu}(x,t), (x,t)\in\Gamma\times(0,T),\] introduced by Beale and Rosencrans ([5] and [6]), which gave rise to a big range of more general problems, see for instance, [1, 7, 13, 17, 18, 19, 20, 22, 26, 27] and references therein. Still in the context of integer order time derivatives (the classic wave equation), the first paper dealing with a non-linear problem was [19], where Frota and Goldstein considered the Carrier non-linear wave equation \[u_{tt}(x,t)-M\left(\int_{\Omega}u^{2}(x,t)\,dx\right)\Delta u(x,t)+C|u_{t}(x,t)| ^{\gamma}u_{t}(x,t)=0,\quad(x,t)\in\Omega\times(0,T],\] together with (2) - (6); where \(C\) was a nonnegative constant, \(M\in C^{1}([0,\infty);\mathbb{R})\) and \(\gamma>0\). The physical justification for the model can be seen in [5], [6] and [33]. Here, just for a brief contextualization, we give some comments. In our context, \(\Omega\) represents a region of the space filled with an ideal fluid at rest which is set into motion by sound waves propagating within the domain. Therefore, if \(u\) is the potential velocity of the fluid, it satisfies the time-fractional wave equation (1). The boundary \(\Gamma\) is made up two parts \(\Gamma_{0}\) and \(\Gamma_{1}\), with \(\Gamma_{0}\) absorbing (see (2)) and \(\Gamma_{1}\) locally reactive, such that each point \(x\in\Gamma_{1}\) responds independently to the pressure caused by the sound waves. This means that \(\delta\), the vertical displacement in the normal direction to the boundary \(\Gamma_{1}\) should satisfies the equation (3). In fact, each point on the boundary \(\Gamma_{1}\) acts like a damped harmonic oscillator that "springs" in response to the sound pressure. Moreover, we also admit that there exists the compatibility between the normal speed of the boundary and the normal speed of the fluid, which is expressed by equation (4). Initial value problems for the time fractional wave equation have received extensive coverage in the scientific literature. Often, the Laplace transform has been widely employed as the primary tool for obtaining solutions. For instance, classic works in the field, such as [25] and [35], have extensively discussed Cauchy problems in the context of fractional equations. Additionally, notable contributions, as [30] and [31], have provided explanations regarding the inherent diffusion-wave phenomena associated with these solutions. The technique of separating variables combined with the Laplace transform has been utilized in [15] in the study of IBVPs, encompassing both homogeneous and non-homogeneous boundary conditions. More recently, Faedo-Galerkin's method was utilized in [24] to demonstrate the well-posedness of an IBVP for the time fractional wave equation, albeit in a slightly different sense than the Caputo formulation. To the best of author's knowledge, this is the first paper considering the time-fractional wave equation coupled with acoustic boundary conditions. In order to facilitate the implementation of numerical methods, as well as to create basis for treating more general nonlinear problems, we apply Faedo-Galerkin's constructive method. 
It should be mentioned that even in this context of linear equations, when we project our problem into finite-dimensional subspaces by getting the approximated problems, we arrive at a system of time-fractional ordinary differential equations that, as far as we know, has never been treated before. We drew inspiration from [6] and [19] while formulating the class of problem (1)-(6), where we shall work in a much more general context, namely, that of time-fractional wave equations. Since our approach introduces a more complex problem by incorporating the Caputo fractional time-derivative, some new notions and results should be established. In fact, there are new key challenges when considering problem (1)-(6), which are successfully addressed in this paper: * to consider the specificities and restrictions imposed by the Caputo fractional derivative; * to establish a more general version of Picard-Lindelof theorem. The remaining paper is organized as follows. In Section 2 we provide the prerequisites and auxiliary results that are crucial to the development of subsequent sections. Section 3 is devoted to analyzing a time-fractional ODE system crucial for establishing the initial aspects of our main result. This section also includes observations regarding the system's solution, which are detailed in Subsection 3.1. In Section 4, we explore the well-posedness theory of the problem (1) - (6), and in Section 5, we give some concluding remarks. ## 2. Notations, Prerequisites and Auxiliary Results In this section we give the notations for the functional spaces and also introduce the theory of fractional calculus concerning the Caputo fractional derivative. First of all let us set the triple \((\Omega,\Gamma_{0},\Gamma_{1})\). Throughout the paper \(\Omega\subset\mathbb{R}^{n}\,(n\geq 2)\) is an open bounded and connected set with smooth boundary \(\Gamma\) made up of two disjoint pieces \(\Gamma_{0}\), \(\Gamma_{1}\) both connected with positive measure. Actually \(\Gamma_{0}\) and \(\Gamma_{1}\) are connected subsets of \(\Gamma\) both with positive measure such that \(\Gamma=\Gamma_{0}\cup\Gamma_{1}\) and \(\Gamma_{0}\cap\Gamma_{1}=\emptyset.\) We observe that the domains \(\Omega\) includes simply connected regions of \(\mathbb{R}^{n}.\) For the classical functional spaces such as Sobolev spaces and \(L^{p}\) spaces we adopt the standard notation as described in [28, 29]. We denote the inner products and norms in \(L^{2}(\Omega)\) and \(L^{2}(\Gamma_{1})\) respectively by \[(u,v)=\int_{\Omega}u(x)\,v(x)\,dx,\quad|u|=\left(\int_{\Omega}(u(x))^{2}dx \right)^{\frac{1}{2}}\] and \[(u,v)_{\Gamma_{1}}=\int_{\Gamma_{1}}u(x)\,v(x)\,dx,\quad|u|_{\Gamma_{1}}= \left(\int_{\Gamma_{1}}(u(x))^{2}dx\right)^{\frac{1}{2}}.\] If \(u,v\in H^{1}(\Omega)\), the real Sobolev space of first order, we write \[(\nabla u,\nabla v)=\sum_{i=1}^{n}\left(\frac{\partial u}{\partial x_{i}},\frac{ \partial v}{\partial x_{i}}\right)=\sum_{i=1}^{n}\int\limits_{\Omega}\frac{ \partial u}{\partial x_{i}}(x)\,\frac{\partial v}{\partial x_{i}}(x)\,dx\] and \[|\nabla u|=[(\nabla u,\nabla u)]^{\frac{1}{2}}=\left[\sum_{i=1}^{n}\int\limits _{\Omega}\left(\frac{\partial u}{\partial x_{i}}(x)\right)^{2}\,dx\right]^{ \frac{1}{2}}.\] Let \(\mathcal{H}_{\Delta}(\Omega)=\{u\in H^{1}(\Omega);\Delta u\in L^{2}(\Omega)\}\) be the Hilbert endowed with the inner product \((u,v)_{\mathcal{H}_{\Delta}(\Omega)}=(u,v)_{H^{1}(\Omega)}+(\Delta u,\Delta v)\). 
By \(\gamma_{0}:H^{1}(\Omega)\longrightarrow H^{\frac{1}{2}}(\Gamma)\) and \(\gamma_{1}:\mathcal{H}_{\Delta}(\Omega)\longrightarrow H^{-\frac{1}{2}}(\Gamma)\) we denote the trace map of order zero and the Neumann trace map on \(\mathcal{H}_{\Delta}(\Omega)\) respectively satisfying \[\gamma_{0}(u)=u|_{\Gamma}\text{ and }\gamma_{1}(u)=\frac{\partial u}{ \partial\nu}\bigg{|}_{\Gamma},\quad\text{for all }u\in\mathcal{D}(\overline{\Omega}).\] It is well known that \(\gamma_{0}\) and \(\gamma_{1}\) are bounded linear operators and for all \(u\in\mathcal{H}_{\Delta}(\Omega)\) and \(v\in H^{1}(\Omega)\) the following generalized Green's formula holds \[(\Delta u,v)+(\nabla u,\nabla v)=\left\langle\gamma_{1}(u),\gamma_{0}(v) \right\rangle_{H^{-\frac{1}{2}}(\Gamma)\times H^{\frac{1}{2}}(\Gamma)}\,.\] Another closed subspace of \(H^{1}(\Omega)\) that we address here is the closure of the set \(\{u\in C^{1}(\overline{\Omega});u=0\text{ in }\Gamma_{0}\}\) in \(H^{1}(\Omega)\), which we denote by \(H^{1}_{\Gamma_{0}}(\Omega)\). Since \(\Omega\) is a regular domain and \(\Gamma_{0}\) has positive measure we have that \[H^{1}_{\Gamma_{0}}(\Omega)=\{u\in H^{1}(\Omega);\gamma_{0}(u)=0\text{ a.e. in }\Gamma_{0}\}\] is a reflexive and separable Hilbert space. Additionally, the Poincare inequality holds in \(H^{1}_{\Gamma_{0}}(\Omega)\) and, in view of this inequality, we have that \[((u,v))=(\nabla u,\nabla v)\quad\text{and}\quad\|u\|=|\nabla u|,\] denote an inner product and a norm in \(H^{1}_{\Gamma_{0}}(\Omega)\) that are equivalent to the usual ones induced by \(H^{1}(\Omega)\). We employ the notations on the Bochner-Lebesgue and Bochner-Sobolev spaces of vector-valued functions, \(L^{p}(0,T;X)\) and \(W^{m,p}(0,T;X)\) where \(X\) is a Banach space, as in [3, Chap. 1], [28] and [37, Chap. 23]. For \(\alpha>0\) and \(f:[0,T]\to X\), the Riemann-Liouville (RL for short) fractional integral of order \(\alpha\) is \[J^{\alpha}_{t}f(t):=\frac{1}{\Gamma(\alpha)}\int_{0}^{t}{(t-s)^{\alpha-1}f(s) \,ds},\] for every \(t\in[0,T]\) such that the above integral exists. Above \(\Gamma\) is used to denote the classical Euler's gamma function. Also the RL fractional derivative of order \(\alpha\) and the Caputo fractional derivative of order \(\alpha\) are respectively defined by \[D_{t}^{\alpha}f(t):=\frac{d^{\lceil\alpha\rceil}}{dt^{\lceil\alpha\rceil}} \left[J_{t}^{\lceil\alpha\rceil-\alpha}f(t)\right],\] and \[{}^{C}\!D_{t}^{\alpha}f(t):=D_{t}^{\alpha}\left[f(t)-\sum_{k=0}^{\lceil\alpha \rceil-1}\frac{f^{(k)}(0)}{k!}t^{k}\right],\] for every \(t\in[0,T]\) such that the right side exists. Above we use \(\lceil\cdot\rceil\) to represent the ceiling function, i.e., if \(m\in\mathbb{N}\) is such that \(m-1<\alpha\leq m\), then \(\lceil\alpha\rceil=m\). In [10] the authors prove that \(\{J_{t}^{\alpha}:\alpha\geq 0\}\subset\mathcal{L}\big{(}L^{p}(0,T;X)\big{)}\) is a \(C_{0}\)-semigroup on \(L^{p}(0,T;X),1\leq p<\infty\), and \(\{J_{t}^{\alpha}:\alpha\geq 0\}\subset\mathcal{L}\big{(}C([0,T];X)\big{)}\) forms a semigroup on \(C([0,T];X)\), where \(J_{t}^{0}f(t)=f(t)\) for almost every \(t\in[0,T]\). Concerning the existence of the Caputo fractional derivative \({}^{C}\!D_{t}^{\alpha}f(t)\) for almost every \(t\in[0,T]\), it is enough to consider functions \(f\in C^{\lceil\alpha\rceil-1}([0,T];X)\) such that \(J_{t}^{\lceil\alpha\rceil-\alpha}f(t)\in W^{\lceil\alpha\rceil,1}(0,T;X)\), see Bazhlekova [4, Section 1.2] and Carvalho-Neto [8, Section 2.2]. 
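These operators are easy to approximate numerically, which can be convenient for sanity checks. The Python sketch below evaluates \(J_{t}^{\alpha}f\) on a uniform grid by integrating the weakly singular kernel exactly over each cell while treating \(f\) as piecewise constant, obtains the Caputo derivative for \(\alpha\in(0,1)\) through the identity \({}^{C}\!D_{t}^{\alpha}f=J_{t}^{1-\alpha}f^{\prime}\) (cf. Eq. (13) in Proposition 1 below), and checks the result against the closed form \({}^{C}\!D_{t}^{\alpha}t^{2}=2t^{2-\alpha}/\Gamma(3-\alpha)\) as well as the semigroup property of \(\{J_{t}^{\alpha}:\alpha\geq 0\}\). The grid, the test function and the first-order quadrature are illustrative choices, not part of the paper.

```python
import numpy as np
from math import gamma

def frac_integral(f_vals, t, alpha):
    """J^alpha f on a grid: the kernel (t_k - s)^(alpha-1) is integrated exactly on each
    cell while f is treated as piecewise constant (left value) -- a product-rectangle rule."""
    J = np.zeros_like(f_vals, dtype=float)
    for k in range(1, len(t)):
        a = (t[k] - t[:k]) ** alpha            # (t_k - t_j)^alpha
        b = (t[k] - t[1:k + 1]) ** alpha       # (t_k - t_{j+1})^alpha
        J[k] = np.sum((a - b) * f_vals[:k]) / (alpha * gamma(alpha))
    return J

def caputo(f_vals, t, alpha):
    """Caputo derivative for 0 < alpha < 1 via CD^alpha f = J^{1-alpha} f' (cf. Eq. (13))."""
    return frac_integral(np.gradient(f_vals, t), t, 1.0 - alpha)

alpha, T = 0.5, 1.0
t = np.linspace(0.0, T, 2001)
f = t ** 2                                              # illustrative test function

exact = 2.0 * t ** (2.0 - alpha) / gamma(3.0 - alpha)   # closed form of CD^alpha t^2
print("max error of CD^0.5[t^2]:", np.max(np.abs(caputo(f, t, alpha) - exact)))

# Semigroup property J^a J^b f = J^{a+b} f, checked at the grid level.
lhs = frac_integral(frac_integral(f, t, 0.3), t, 0.4)
rhs = frac_integral(f, t, 0.7)
print("max error of J^0.3 J^0.4 f vs J^0.7 f:", np.max(np.abs(lhs - rhs)))
```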
Below we present fundamental results concerning the previously mentioned fractional operators, which are essential for Section 3 and the estimations in Section 4. We emphasize the simplification \({}^{\prime}=\frac{d}{dt}\) throughout the remainder of this paper to simplify notation and, more importantly, to avoid excess subscripts on Theorem 8. **Proposition 1**.: _Let \(0<\alpha<1\) and \(X\) a Banach space._ * _Assume that_ \(f\in L^{1}(0,T)\) _is a nonnegative function. Then_ \[J_{t}^{1}f(t)\leq\Big{[}T^{(1-\alpha)}\Gamma(\alpha)\Big{]}J_{t}^{\alpha}f(t),\text{ for a.e. }t\in[0,T].\] (7) * _If_ \(f\in L^{1}(0,T;X)\)__ \[D_{t}^{\alpha}\left[J_{t}^{\alpha}f(t)\right]=f(t),\text{ for a.e. }t\in[0,T].\] (8) _If additionally_ \(J_{t}^{1-\alpha}f\in W^{1,1}(0,T;X)\)_, then_ \[J_{t}^{\alpha}\big{[}D_{t}^{\alpha}u(t)\big{]}=u(t)-\frac{t^{\alpha-1}}{\Gamma (\alpha)}\Big{(}J_{s}^{1-\alpha}u(s)|_{s=0}\Big{)}\] (9) * _For_ \(f\in C([0,T];X)\)_,_ \[{}^{C}\!D_{t}^{\alpha}\left[J_{t}^{\alpha}f(t)\right]=f(t),\text{ for a.e. }t\in[0,T].\] (10) _If additionally_ \(J_{t}^{1-\alpha}f\in W^{1,1}(0,T;X)\)_, then_ \[J_{t}^{\alpha}\left[{}^{C}\!D_{t}^{\alpha}f(t)\right]=f(t)-f(0),\text{ for a.e. }t\in[0,T],\] (11) _and_ \[J_{t}^{1}\left[{}^{C}\!D_{t}^{\alpha}f(t)\right]=J_{t}^{1-\alpha}f(t)-f(0)\left[ \frac{t^{1-\alpha}}{\Gamma(2-\alpha)}\right],\text{ for a.e. }t\in[0,T].\] (12) 4. _If_ \(f\in W^{1,1}(0,T;X)\)_,_ \[{}^{C}\!D_{t}^{\alpha}f(t)=J_{t}^{1-\alpha}f^{\prime}(t),\text{ for a.e. }t\in[0,T].\] (13) _If additionally_ \(f(0)=0\)_, then we can reinterpret the equation (_13_) in the form_ \[\frac{d}{dt}\left[J_{t}^{1-\alpha}f(t)\right]=J_{t}^{1-\alpha}f^{\prime}(t), \text{ for a.e. }t\in[0,T].\] (14) 5. _For_ \(f\in C^{1}([0,T];X)\) _such that_ \(J_{t}^{1-\alpha}f(t)\,\in W^{2,1}(0,T;X)\)_, we have_ \[{}^{C}\!D_{t}^{\alpha+1}f(t)={}^{C}\!D_{t}^{\alpha}f^{\prime}(t),\text{ for a.e. }t\in[0,T].\] (15) 6. _If_ \(f\in W^{1,2}(0,T;X)\)_, then_ \[\left\|{}^{C}\!D_{t}^{\alpha}f(t)\right\|_{X}^{2}\leq\left[\frac{T^{1-\alpha}} {\Gamma(2-\alpha)}\right]J_{t}^{1-\alpha}\left\|f^{\prime}(t)\right\|_{X}^{2},\text{ for a.e. }t\in[0,T].\] (16) Proof.: The proof of (7) follows from the definition. For the proof of (8)-(14) we refer the reader to [8, Proposition 2.35] and [9, Remark 2.10]. Applying the Leibniz integral rule in conjunction with (14), we have \[{}^{C}\!D_{t}^{\alpha+1}f(t)=\frac{d^{2}}{dt^{2}}\Big{\{}J_{t}^{1-\alpha} \big{[}f(t)-f(0)-tf^{\prime}(0)\big{]}\Big{\}}=\frac{d}{dt}\Big{\{}J_{t}^{1- \alpha}\big{[}f^{\prime}(t)-f^{\prime}(0)\big{]}\Big{\}}={}^{C}\!D_{t}^{\alpha }f^{\prime}(t),\] in \(X\), for almost every \(t\in[0,T]\), therefore (15) holds. To prove the estimate (16), we observe that (13) guarantees \[\left\|{}^{C}\!D_{t}^{\alpha}f(t)\right\|_{X}^{2}\leq\left[J_{t}^{1-\alpha} \left\|f^{\prime}(t)\right\|_{X}\right]^{2}=\left[\int_{0}^{t}\left(\frac{(t-s )^{-\frac{\alpha}{2}}}{\Gamma(1-\alpha)^{\frac{1}{2}}}\right)\left(\frac{(t-s )^{-\frac{\alpha}{2}}}{\Gamma(1-\alpha)^{\frac{1}{2}}}\right)\left\|f^{\prime} (t)\right\|_{X}ds\right]^{2}.\] Therefore, Holder's inequality gives \[\left\|{}^{C}\!D_{t}^{\alpha}f(t)\right\|_{X}^{2}\leq\left[\frac{t^{1-\alpha} }{\Gamma(2-\alpha)}\right]J_{t}^{1-\alpha}\left\|f^{\prime}(t)\right\|_{X}^{2} \leq\left[\frac{T^{1-\alpha}}{\Gamma(2-\alpha)}\right]J_{t}^{1-\alpha}\left\| f^{\prime}(t)\right\|_{X}^{2},\] for almost every \(t\in[0,T]\), which completes the proof. 
At this point, we emphasize that the remaining propositions of this section are original (and important) contributions to the theory, as far as the authors are aware. These propositions are essential for applying the following theorem, originally formulated in [9, Theorem 4.11], which is presented below and used throughout this work. **Theorem 2**.: _Assume that \(f\in C([0,T])\), \(J_{t}^{1-\alpha}f\in W^{1,1}(0,T)\) and \(J_{t}^{1-\alpha}f^{2}\in W^{1,1}(0,T)\). Then,_ \[cD_{t}^{\alpha}\big{[}f(t)\big{]}^{2}\leq 2\Big{[}cD_{t}^{\alpha}f(t)\Big{]}f(t), \quad\text{for almost every $t\in[0,T]$}.\] The following proposition helps us verify that, in certain circumstances, a function satisfies the hypotheses of Theorem 2. **Proposition 3**.: _If \(u\in C([0,T])\) is such that \({}^{C}\!D_{t}^{\alpha}u\in C([0,T])\), then \(J_{t}^{1-\alpha}u^{2}\in W^{1,1}(0,T)\)._ Proof.: From the continuity of the RL fractional integral (cf. [23, Theorem 14]) we have that \(J_{t}^{1-\alpha}u\in C([0,T])\). Then, that the identity \[D_{t}^{\alpha}u(t)={}^{C}\!D_{t}^{\alpha}u+\frac{u(0)t^{-\alpha}}{\Gamma(1- \alpha)},\] which holds for a.e. \(t\in[0,T]\), allows us to deduce that \(J_{t}^{1-\alpha}u\in W^{1,1}(0,T)\). Therefore, if we define \(h(t)={}^{C}\!D_{t}^{\alpha}u(t)\), we have that (11) ensures the identity \[u(t)-u(0)=J_{t}^{\alpha}h(t),\] for a.e. \(t\in[0,T]\). Consequently, \[J_{t}^{1-\alpha}u^{2}(t)=J_{t}^{1-\alpha}\big{[}u(0)+J_{t}^{ \alpha}h(t)\big{]}^{2}\\ =\big{[}u(0)\big{]}^{2}\frac{t^{1-\alpha}}{\Gamma(2-\alpha)}+2u(0 )\big{[}J_{t}^{1}h(t)\big{]}+J_{t}^{1-\alpha}\big{[}J_{t}^{\alpha}h(t)\big{]}^ {2}.\] It is evident that the first two terms on the right side of the above equality belong to \(W^{1,1}(0,T)\). The first follows from direct computation, while the second relies on the continuity of the RL fractional integral and the fact that \(h\in C([0,T])\). To complete the proof, we assert that \(J_{t}^{1-\alpha}\left[J_{t}^{\alpha}h\right]^{2}\in W^{1,1}(0,T)\). We only need to verify that \(D_{t}^{\alpha}[J_{t}^{\alpha}h]^{2}\in L^{1}(0,T)\), since it follows from the continuity of the RL fractional integral of order \(1-\alpha\) from \(L^{1}(0,T)\) into \(L^{1}(0,T)\) (cf. [23, Theorem 4]), that \(J_{t}^{1-\alpha}\left[J_{t}^{\alpha}h\right]^{2}\in L^{1}(0,T)\). Since \(h\in C([0,T])\), we have that \(J_{t}^{\alpha}h(t)\) is Holder continuous with exponent \(\alpha\) on \([0,T]\), or simply, \(J_{t}^{\alpha}h\in C^{0,\alpha}([0,T])\) (cf. [23, Theorem 14]). Thus, we can apply [2, Lemma 1] and (8) to obtain \[D_{t}^{\alpha}[J_{t}^{\alpha}h(t)]^{2}=2\big{[}J_{t}^{\alpha}h(t)\big{]}h(t)- \frac{\alpha}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{[J_{t}^{\alpha}h(t)-J_{s}^{ \alpha}h(s)]^{2}}{(t-s)^{\alpha+1}}ds-\frac{[J_{t}^{\alpha}h(t)]^{2}}{\Gamma( 1-\alpha)t^{\alpha}},\] for a.e. \(t\in[0,T]\). We finish this proof by noting that the above equality together with the fact that \(J_{t}^{\alpha}h\in C^{0,\alpha}([0,T])\) is enough for us to deduce that \(D_{t}^{\alpha}[J_{t}^{\alpha}h]^{2}\in L^{1}(0,T)\) We conclude this section with a result on regularity, crucial for demonstrating \(u^{\prime}(0)=u_{1}\) in \(H^{1}_{\Gamma_{0}}(\Omega)\) during Step 3: Passage to the limit in Section 4. 
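The inequality of Theorem 2 can also be checked numerically. The sketch below discretizes the Caputo derivative with the standard L1 scheme on a uniform grid and verifies that \(2\left[{}^{C}\!D_{t}^{\alpha}f(t)\right]f(t)-{}^{C}\!D_{t}^{\alpha}\left[f(t)\right]^{2}\) stays non-negative up to discretization error; the test function, the order \(\alpha\), the final time, and the grid are illustrative assumptions.

```python
import numpy as np
from math import gamma

def caputo_L1(f_vals, h, alpha):
    """Caputo derivative of order alpha in (0,1) on a uniform grid via the standard L1 scheme."""
    n = len(f_vals)
    w = np.arange(1, n) ** (1 - alpha) - np.arange(0, n - 1) ** (1 - alpha)   # kernel weights
    df = np.diff(f_vals)
    out = np.zeros(n)
    for k in range(1, n):
        out[k] = np.dot(w[:k][::-1], df[:k]) / (gamma(2 - alpha) * h ** alpha)
    return out

alpha, T, n = 0.6, 2.0, 4001                      # illustrative choices
t = np.linspace(0.0, T, n); h = t[1] - t[0]
f = np.cos(3.0 * t) + 0.5                         # an arbitrary smooth test function

gap = 2.0 * f * caputo_L1(f, h, alpha) - caputo_L1(f ** 2, h, alpha)
print("min of 2 f CD^a f - CD^a[f^2] (Theorem 2 predicts >= 0):", gap.min())
```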
**Proposition 4**.: _If \(u\in L^{\infty}(0,T;X)\) and \(D^{\alpha}_{t}u\in L^{\infty}(0,T;X)\) then \(u\in C([0,T];X)\) and \(u(0)=0\)._ Proof.: For any \(\varepsilon\in[0,\min\{\alpha,1-\alpha\})\), Theorem 3.5 in [10] ensures that \(J^{\alpha-\varepsilon}_{t}\big{[}D^{\alpha}_{t}u(t)\big{]}\) and \(J^{1-\alpha-\epsilon}_{t}u(t)\) are in \(C([0,T];X)\). Therefore, for \(\delta\in(0,\min\{\alpha,1-\alpha\})\) we have \[J^{\alpha}_{t}\big{[}D^{\alpha}_{t}u(t)\big{]}\big{|}_{t=0}=J^{ \delta}_{t}\big{\{}J^{\alpha-\delta}_{t}\big{[}D^{\alpha}_{t}u(t)\big{]} \big{\}}\big{|}_{t=0}=0, \tag{17}\] and \[J^{1-\alpha}_{t}u(t)\big{|}_{t=0}=J^{\delta}_{t}\big{[}J^{1- \alpha-\delta}_{t}u(t)\big{]}\big{|}_{t=0}=0. \tag{18}\] Since follows from the hypotheses that \(J^{1-\alpha}_{t}u\in W^{1,1}(0,T;X)\), identities (9) and (18) ensure that \[u(t)=J^{\alpha}_{t}\big{[}D^{\alpha}_{t}u(t)\big{]}+\frac{t^{ \alpha-1}}{\Gamma(\alpha)}\left(J^{1-\alpha}_{t}u(t)|_{t=0}\right)=J^{\alpha} _{t}\big{[}D^{\alpha}_{t}u(t)\big{]}, \tag{19}\] for a.e. \(t\in[0,T]\). The proof is now complete, as we observe that \(J^{\alpha}_{t}\big{[}D^{\alpha}_{t}u\big{]}\in C([0,T];X)\) and that (17) together with (19) guarantees \(u(0)=0\). ## 3. A Generalization of Picard-Lindelof Theorem For \(\alpha\in(0,1],f:\Omega\times[0,T]\subset\mathbb{R}^{n+1}\to\mathbb{R}^{n}\) and \(\xi\in\mathbb{R}^{n}\) given, the classical Cauchy problem for the fractional ordinary differential equation in \(\mathbb{R}^{n}\) is the initial value problem \[\left\{\begin{array}{l}{}^{C}\!D^{\alpha}_{t}\varphi(t)=f\big{(} \varphi(t),t\big{)},\quad\text{for }t\in[0,T],\\ \varphi(0)=\xi.\end{array}\right. \tag{20}\] The well-posedeness for (20) is well-known and has been extensively studied in the literature, for instance see [25, 36] as few examples. In this section, we investigate a time-fractional ODE system that generalizes the Cauchy problem (20). Our main goal here is to establish the existence and uniqueness of a solution to this generalized time-fractional ODE system. This result serves as an essential tool for applying the Faedo-Galerkin method in Section 4, and notably, to the best of the authors' knowledge, there is currently no formal proof available. With this in mind, we consider \(\{\alpha_{j}\}_{j=1}^{n}\subset(0,1],f:\Omega\times[0,T]\subset\mathbb{R}^{n+ 1}\to\mathbb{R}^{n}\) and \(\xi=(\xi_{1},\cdots,\xi_{n})\in\mathbb{R}^{n}\) given and we look for \(\varphi=(\varphi_{1},\cdots,\varphi_{n}):[0,T]\to\mathbb{R}^{n}\) (the unknown function) satisfying the time-fractional ODE system given by the following set of equations: \[\left\{\begin{array}{ccc}{}^{C}\!D_{t}^{\alpha_{1}}\varphi_{1}(t)&=&f_{1}\left( \varphi_{1}(t),\cdots,\varphi_{n}(t),t\right),&\text{for }t\in[0,T],\\ \vdots&\vdots&\text{for }t\in[0,T],\\ {}^{C}\!D_{t}^{\alpha_{n}}\varphi_{n}(t)&=&f_{n}\left(\varphi_{1}(t),\cdots, \varphi_{n}(t),t\right),&\text{for }t\in[0,T],\end{array}\right. \tag{21}\] subjected to the initial condition \[\varphi(0)=\xi. \tag{22}\] Let us begin by introducing the notion of a solution to the Cauchy problem (21)-(22). **Definition 5**.: _A function \(\varphi=(\varphi_{1},\cdots,\varphi_{n}):[0,T]\to\mathbb{R}^{n}\) is said to be a solution of the Cauchy problem (21)-(22) on \([0,T]\) if it satisfies the following conditions:_ 1. \(\varphi\) _and_ \(t\mapsto({}^{C}\!D_{t}^{\alpha_{1}}\varphi(t),\ldots,{}^{C}\!D_{t}^{\alpha_{n} }\varphi(t))\) _belong to_ \(C([0,T];\mathbb{R}^{n})\)_;_ 2. \(\{\varphi(t):t\in[0,T]\}\subset\Omega\)_;_ 3. 
\(\varphi\) _satisfies the equations (_21_) for all_ \(t\in[0,T]\) _and the initial condition (_22_)._ Now we present an auxiliar result that connects the solution of (21)-(22) with the solution of an integral equation. **Proposition 6**.: _Let \(f:\Omega\times[0,T]\subset\mathbb{R}^{n+1}\to\mathbb{R}^{n}\) be a continuous function. Then \(\varphi=(\varphi_{1},\cdots,\varphi_{n}):[0,T]\to\mathbb{R}^{n}\) is a solution of (21)-(22) in \([0,T]\) if, and only if, \(\varphi\in C([0,T];\mathbb{R}^{n})\) and for all \(j\in\{1,\cdots,n\}\) the function \(\varphi_{j}\) satisfies the integral equation_ \[\varphi_{j}(t)=\xi_{j}+\frac{1}{\Gamma(\alpha_{j})}\int_{0}^{t}{(t-s)^{\alpha_ {j}-1}}f_{j}\big{(}\varphi(s),s\big{)}ds,\quad\forall t\in[0,T]. \tag{23}\] Proof.: Assuming that \(\varphi\) is a solution of (21)-(22) in the interval \([0,T]\), we can observe that \(t\mapsto J_{t}^{1-\alpha_{j}}[\varphi_{j}(t)-\varphi_{j}(0)]\) belongs to \(C^{1}([0,T];\mathbb{R})\) for each \(1\leq j\leq n\). Consequently, by applying \(J_{t}^{\alpha_{j}}\) to the \(j\)-th equation in (21) and employing (11) we come to \[\varphi_{j}(t)-\varphi_{j}(0)=J_{t}^{\alpha_{j}}f_{j}(\varphi(t),t),\quad \forall t\in[0,T].\] This equality and (22) yields (23). Conversely assume that \(\varphi\in C([0,T];\mathbb{R}^{n})\) and (23) holds, for all \(1\leq j\leq n\). Then the continuity of each \(t\mapsto f_{j}(\varphi(t),t)\) gives \(t\mapsto J_{t}^{\alpha_{j}}f_{j}(\varphi(t),t)\) belongs to \(C([0,T];\mathbb{R})\), for all \(1\leq j\leq n\). Applying \({}^{C}\!D_{t}^{\alpha_{j}}\) to both sides of (23) and taking into account (10) we get \[{}^{C}\!D_{t}^{\alpha_{j}}\varphi_{j}(t)=f_{j}(\varphi(t),t),\quad\forall t\in [0,T],1\leq j\leq n,\] which means that \(\varphi\) satisfies the equations (21) for all \(t\in[0,T]\). Finally, to show that \(\varphi\) satisfies (22) we observe that \[|J_{t}^{\alpha_{j}}f_{j}(\varphi(t),t)|_{\mathbb{R}}\leq\frac{1}{\Gamma( \alpha_{j})}\int_{0}^{t}(t-s)^{\alpha_{j}-1}\|f(\varphi(\cdot),\cdot)\|_{C([0,T];\mathbb{R}^{n})}\,ds=C\,t^{\alpha_{j}},\] where \(C=\|f(\varphi(\cdot),\cdot)\|_{C([0,T];\mathbb{R}^{n})}/\Gamma(\alpha_{j}+1)\). From this inequatily we have \[\lim_{t\to 0^{+}}|J_{t}^{\alpha_{j}}f_{j}(\varphi(t),t)|_{\mathbb{R}}=0,\] and therefore \[\lim_{t\to 0^{+}}\frac{1}{\Gamma(\alpha_{j})}\int_{0}^{t}\left(t-s\right)^{ \alpha_{j}-1}f_{j}\big{(}\varphi(s),s\big{)}\,ds=0\,.\] This leads to (22) by taking the limit as \(t\to 0^{+}\) in (23). Whence we have proved that \(\varphi\) is a solution of (21)-(22) in the interval \([0,T]\) and the proof is completed. We say that a function \(f:\Omega\times[0,T]\subset\mathbb{R}^{n+1}\to\mathbb{R}^{n}\) is Lipschitz function on the first variable if there exists a constant \(\ell>0\) such that \[\|f(x,t)-f(y,t)\|_{\mathbb{R}^{n}}\leq\ell\,\|x-y\|_{\mathbb{R}^{n}},\text{ for all }t\in[0,T]\text{ and }x,y\in\Omega.\] Now we face up to the main result of this section: **Theorem 7**.: _Let \(f:\Omega\times[0,T]\subset\mathbb{R}^{n+1}\to\mathbb{R}^{n}\) be continuous and Lipschitz function on the first variable. Then for each \(\xi\in\Omega\) there exists an unique \(\varphi\) solution of the Cauchy problem (21)-(22) in the interval \([0,T]\)._ Proof.: Without loss of generality we can assume that \(\{\alpha_{j}\}_{j=1}^{n}\subset(0,1]\) satisfies \(\alpha_{1}\leq\alpha_{2}\leq\cdots\leq\alpha_{n}\). 
We note that the operator \(\mathcal{T}:C\left([0,T];\mathbb{R}^{n}\right)\to C\left([0,T];\mathbb{R}^{n}\right)\), given by \[\mathcal{T}(\varphi(t))=\xi+\left(\begin{array}{c}J_{t}^{\alpha_{1}}f_{1}( \varphi(t),t)\\ \vdots\\ J_{t}^{\alpha_{n}}f_{n}(\varphi(t),t)\end{array}\right),\text{ for all }t\in[0,T],\] is well-defined. Our strategy in proving the theorem is classical, i.e., we show that for \(m\in\mathbb{N}\), sufficiently large, the operator \(\mathcal{T}^{m}\) is a contraction. For \(\varphi\), \(\psi\in C\left([0,T];\mathbb{R}^{n}\right)\) we have \[\left\|\mathcal{T}\varphi(t)-\mathcal{T}\psi(t)\right\|_{\mathbb{R}^{n}}\leq \ell\left(\sum_{k=1}^{n}J_{t}^{\alpha_{k}}\right)\left\|\varphi(t)-\psi(t) \right\|_{\mathbb{R}^{n}},\] for every \(t\in[0,T]\). Then, by employing the semigroup property of the RL fractional integral we obtain \[\left\|\mathcal{T}\varphi(t)-\mathcal{T}\psi(t)\right\|_{\mathbb{R}^{n}}\leq \ell\left(\sum_{k=1}^{n}J_{t}^{\alpha_{k}-\alpha_{1}}\right)J_{t}^{\alpha_{1} }\left\|\varphi(t)-\psi(t)\right\|_{\mathbb{R}^{n}},\text{ for all }t\in[0,T].\] The aforementioned estimate enables us to conclude recursively that \[\left\|\mathcal{T}^{m}\varphi(t)-\mathcal{T}^{m}\psi(t)\right\|_{\mathbb{R}^ {n}}\leq\ell^{m}\left(\sum_{k=1}^{n}J_{t}^{\alpha_{k}-\alpha_{1}}\right)^{m}J _{t}^{m\alpha_{1}}\left\|\varphi(t)-\psi(t)\right\|_{\mathbb{R}^{n}},\] for all \(t\in[0,T]\) and \(m\in\mathbb{N}\). This inequality yields \[\|\mathcal{T}^{m}\varphi-\mathcal{T}^{m}\psi\|_{C([0,T];\mathbb{R}^{ n})}\\ \leq\ell^{m}\left\|\sum_{k=1}^{n}J_{t}^{\alpha_{k}-\alpha_{1}} \right\|_{\mathcal{L}(C([0,T];\mathbb{R}^{n}))}^{m}\left\|J_{t}^{m\alpha_{1}} \left\|\varphi(\cdot)-\psi(\cdot)\right\|_{\mathbb{R}^{n}}\right\|_{C([0,T]; \mathbb{R})},\text{ for all }m\in\mathbb{N}. \tag{24}\] On the other hand, using the properties of the RL fractional integral operator we find \[\left\|\sum_{k=1}^{n}J_{t}^{\alpha_{k}-\alpha_{1}}\right\|_{\mathcal{L}(C([0,T ];\mathbb{R}^{n}))}\leq\sum_{k=1}^{n}\left\|J_{t}^{\alpha_{k}-\alpha_{1}} \right\|_{\mathcal{L}(C([0,T];\mathbb{R}^{n}))}\leq 2\sum_{k=1}^{n}T^{\alpha_{k}- \alpha_{1}}. \tag{25}\] In order to obtain the above estimate, we have used that \(\Gamma(\alpha)\geq 1/2\) for \(\alpha>0\). Now observe that for all \(1\leq k\leq n\) we have \[T^{(\alpha_{k}-\alpha_{1})}\leq\left\{\begin{array}{cl}1,&\text{if }0<T\leq 1,\\ T^{(\alpha_{n}-\alpha_{1})},&\text{if }T\geq 1,\end{array}\right.\] since \(\alpha_{1}\leq\alpha_{2}\leq\cdots\leq\alpha_{n}\). From this and (25) we obtain the estimate \[\left\|\sum_{k=1}^{n}J_{t}^{\alpha_{k}-\alpha_{1}}\right\|_{\mathcal{L}(C([0,T ];\mathbb{R}^{n}))}\leq T_{M},\text{ where }T_{M}=2n\max\left\{T^{(\alpha_{n}- \alpha_{1})},1\right\}.\] It follows from this inequality and (24) that \[\left\|\mathcal{T}^{m}\varphi-\mathcal{T}^{m}\psi\right\|_{C([0,T];\mathbb{R} ^{n})}\leq\frac{(\ell T_{M}T^{\alpha_{1}})^{m}}{\Gamma(m\alpha_{1}+1)}\left\| \varphi-\psi\right\|_{C([0,T];\mathbb{R}^{n})}. \tag{26}\] Hence, for sufficiently large \(m\in\mathbb{N}\), we have \[\frac{(\ell\,T_{M}\,T^{\alpha_{1}})^{m}}{\Gamma(m\alpha_{1}+1)}<1, \tag{27}\] since \([(\ell T_{M}T^{\alpha_{1}})^{m}/\Gamma(m\alpha_{1}+1)]\to 0\) as \(m\to\infty\), corresponding to the general term of the series defining the Mittag-Leffler function \(E_{\alpha_{1}}(z)\) with \(z=\ell\,T_{M}\,T^{\alpha_{1}}\). Using (26) and (27) we obtain that \(\mathcal{T}^{m}\) is a contraction in \(C([0,T];\mathbb{R}^{n})\), for such suitable \(m\). 
Therefore, the Banach Fixed Point Theorem ensures the existence of a function \(\phi\in C([0,T];\mathbb{R}^{n})\) that is the unique fixed point of \(\mathcal{T}^{m}\), and consequently, of \(\mathcal{T}\). Thus, we have \(\phi(t)=\mathcal{T}\phi(t)\) for every \(t\in[0,T]\), which can be expressed as \[\phi_{j}(t)=\xi_{j}+\frac{1}{\Gamma(\alpha_{j})}\int_{0}^{t}(t-s)^{\alpha_{j} -1}f_{j}\big{(}\phi(s),s\big{)}ds,\text{ for all }t\in[0,T]\text{ and }1\leq j\leq n.\] Finally, Proposition 6 completes the proof. ### A Brief Digression To conclude this section, we find important to highlight a particular case of Theorem 7. Specifically, we want to address the scenario where \(f:\Omega\times[0,T]\subset\mathbb{R}^{n+1}\to\mathbb{R}^{n}\) is given by \(f(x,t)=Ax\), with \(A\in M^{n}(\mathbb{R})\) (the set of square matrices with real entries). In this context, a natural question arises regarding the representation of the matrix function corresponding to the unique solution \(\phi(t)\) obtained for the Cauchy problem (21)-(22) in \([0,\infty)\). It is well-known in the theory (cf. [14]) that when \(\alpha_{1}=\alpha_{2}=\ldots=\alpha_{n}=\alpha\in(0,1]\), the solution to (21)-(22) can be represented by the Mittag-Leffler matrix function \[\phi(t)=E_{\alpha}(t^{\alpha}A)\xi,\quad\forall t\geq 0, \tag{28}\] where \(E_{\alpha}:\mathbb{C}\to\mathbb{C}\) is the analytic Mittag-Leffler function. It must be pointed out that the representation (28) is consistent with the classical case when \(\alpha=1\), which corresponds to the exponential matrix. We can understand (28) as a natural generalization of one dimensional case, where the Mittag-Leffler function (\(\alpha\in(0,1)\)) and the exponential function (\(\alpha=1\)) are solutions to their respective Cauchy problems (21)-(22) in \([0,\infty)\). However, when considering the Cauchy problem (21)-(22) with distinct orders of differentiation, it is not valid to assume that they are direct generalizations of the one dimensional case, as such an assumption would not even make sense. Therefore, to understand and derive the matrix function that satisfies (21)-(22) with distinct orders of differentiation, it is imperative to comprehend the case \(n=2\). With that in mind, let us proceed with this discussion by assuming that \(A\in M^{2}(\mathbb{R})\) is in its Jordan normal form. In other words, we are considering just the matrices \[A_{1}=\left[\begin{array}{cc}\lambda&0\\ 0&\mu\end{array}\right],\quad A_{2}=\left[\begin{array}{cc}\lambda&1\\ 0&\lambda\end{array}\right]\quad\text{and}\quad A_{3}=\left[\begin{array}{ cc}\lambda&\mu\\ -\mu&\lambda\end{array}\right],\] where \(\lambda,\mu\in\mathbb{R}\). _Case \(A_{1}\):_ In this scenario, it becomes apparent that the unique solution \(\phi(t)\) to (21)-(22) can be expressed by \[\phi(t)=\left(\begin{array}{c}E_{\alpha_{1}}(t^{\alpha_{1}}\lambda)\xi_{1} \\ E_{\alpha_{2}}(t^{\alpha_{2}}\mu)\xi_{2}\end{array}\right),\] for every \(t\in[0,\infty)\). _Case \(A_{2}\):_ By applying similar reasoning as before, we can deduce that \(\phi_{2}(t)=E_{\alpha_{2}}(t^{\alpha_{2}}\lambda)\xi_{2}\), for every \(t\in[0,\infty)\). Once we have determined \(\phi_{2}(t)\), the first line of the problem becomes a non-homogeneous fractional ordinary differential equation of order \(\alpha_{1}\). 
Therefore, the fractional version of the variation of constants formula allows us to obtain the expression \[\phi(t)=\left(\begin{array}{c}E_{\alpha_{1}}(t^{\alpha_{1}}\lambda)\xi_{1}+\int _{0}^{t}(t-s)^{\alpha_{1}-1}E_{\alpha_{1},\alpha_{1}}((t-s)^{\alpha_{1}}\lambda) E_{\alpha_{2}}(s^{\alpha_{2}}\lambda)\xi_{2}\,ds\\ E_{\alpha_{2}}(t^{\alpha_{2}}\lambda)\xi_{2}\end{array}\right),\] for every \(t\in[0,\infty)\). _Case \(A_{3}\):_ This particular case poses the greatest challenge in our analysis. Let us consider the scenario where \(\lambda=0\). In this situation, we can employ the equations from (21)-(22) to demonstrate that \(\phi(t)\) satisfies the integral equation \[\phi(t)=\xi+\left(\begin{array}{c}\dfrac{t^{\alpha_{1}}\mu\xi_{2}}{\Gamma( \alpha_{1}+1)}\\ \dfrac{t^{\alpha_{2}}\mu\xi_{1}}{\Gamma(\alpha_{2}+1)}\end{array}\right)+J_{t }^{\alpha_{1}+\alpha_{2}}\phi(t), \tag{29}\] for every \(t\in[0,\infty)\). Computing the solution of the integral equation (29) is not a straightforward task. However, when \(\alpha_{1}+\alpha_{2}\leq 1\), we can deduce that the solution of (29) reduces to the solution of the non-homogeneous fractional ordinary differential equation \[{}^{C}\!D_{t}^{\alpha_{1}+\alpha_{2}}\phi(t)=-\mu^{2}\phi(t)+\mu\left( \begin{array}{c}\dfrac{t^{-\alpha_{2}}\xi_{2}}{\Gamma(1-\alpha_{2})}\\ \dfrac{t^{-\alpha_{1}}\xi_{1}}{\Gamma(1-\alpha_{1})}\end{array}\right)\quad \text{and}\quad\phi(0)=\xi.\] Using the fractional version of the variation of constants formula, we finally deduce that \[\phi(t)=E_{\alpha_{1}+\alpha_{2}}(-t^{\alpha_{1}+\alpha_{2}}\mu^ {2})\xi\\ +\mu\int_{0}^{t}(t-s)^{\alpha_{1}+\alpha_{2}-1}E_{\alpha_{1}+ \alpha_{2},\alpha_{1}+\alpha_{2}}(-(t-s)^{\alpha_{1}+\alpha_{2}}\mu^{2})\left( \begin{array}{c}\dfrac{s^{-\alpha_{2}}\xi_{2}}{\Gamma(1-\alpha_{2})}\\ \dfrac{s^{-\alpha_{1}}\xi_{1}}{\Gamma(1-\alpha_{1})}\end{array}\right)\,ds,\] for every \(t\in[0,\infty)\). The cases where \(\alpha_{1}+\alpha_{2}>1\) or \(\lambda\neq 0\) pose significant challenges and are omitted here. In this brief section, we simply want to highlight the challenge of obtaining a closed formula for the matrix solution of problem (21)-(22) when \(n=2\). This suggests that the general case of \(n>2\) is even more complex and requires a more comprehensive investigation, which is currently lacking in the existing literature and will be conducted in our future research. ## 4. Well-Posedness Theory This section focuses on the initial boundary value problem (1)-(6), the central subject of this paper. Here our main goal is to establish the well-posedeness of this problem. **Theorem 8**.: _Let \(\alpha\in(0,1]\) be a real number and let \(f,g,h\in C\left(\overline{\Gamma_{1}}\right)\) be such that \(f\) and \(h\) are positive functions and \(g\) is a non-negative function. 
If \(u_{0}\in H^{1}_{\Gamma_{0}}(\Omega)\cap H^{2}(\Omega)\), \(u_{1}\in H^{1}_{\Gamma_{0}}(\Omega)\) and \(\delta_{0}\in L^{2}(\Gamma_{1})\), then for all \(T>0\) arbitrarily fixed there exists an unique pair of functions \((u,\delta)\), with \(u:\Omega\times[0,T]\rightarrow\mathbb{R}\) and \(\delta:\Gamma_{1}\times[0,T]\rightarrow\mathbb{R}\), in the class_ \[\begin{split}& u,u_{t}\in L^{\infty}\left(0,T;H^{1}_{\Gamma_{0}}( \Omega)\right),\ ^{C}\!D^{\alpha}_{t}u_{t}\in L^{\infty}\left(0,T;L^{2}(\Omega)\right)\\ &\text{and}\ u(t)\in\mathcal{H}_{\Delta}(\Omega),\ \text{for a.e.}\ t\in(0,T);\end{split} \tag{30}\] \[\delta,\delta_{t}\in L^{\infty}\left(0,T;L^{2}(\Gamma_{1})\right)\ \text{and}\ \delta_{tt}\in L^{2}\left(0,T;L^{2}(\Gamma_{1})\right); \tag{31}\] _which satisfies_ \[\ {}^{C}\!D^{\alpha}_{t}u_{t}(t)-\Delta u(t)=0,\ \text{in}\ L^{2}( \Omega),\ \text{for a.e.}\ t\in(0,T); \tag{32}\] \[f\delta_{tt}(t)+g\delta_{t}(t)+h\delta(t)=-\gamma_{0}(u_{t}(t)) \text{ in}\ L^{2}(\Gamma_{1}),\ \text{for a.e.}\ t\in(0,T);\] (33) \[\int_{\Gamma_{1}}\delta_{t}(t)\gamma_{0}(\varphi)d\Gamma_{1}= \left\langle\gamma_{1}(u(t)),\gamma_{0}(\varphi)\right\rangle_{H^{-\frac{1}{ 2}}(\Gamma)\times H^{\frac{1}{2}}(\Gamma)},\forall\varphi\in H^{1}_{\Gamma_{0 }}(\Omega)\] \[\text{and a.e.}\ t\in(0,T);\] (34) \[u(0)=u_{0}\text{ and }u_{t}(0)=u_{1},\ \text{in}\ L^{2}( \Omega);\] (35) \[\delta(0)=\delta_{0}\text{ and }\delta_{t}(0)=\gamma_{1}(u_{0})_{|_{ \Gamma_{1}}},\ \text{in}\ L^{2}(\Gamma_{1}). \tag{36}\] _Moreover the pair of functions \((u,\delta)\) depends continuously on the initial data and the parameters \(f,g\) and \(h\)._ Proof.: We have divided the proof into 5 steps: (1) Approximate problem; (2) A priori estimates; (3) Passage to the limit - Existence; (4) Uniqueness; and (5) Continuous dependence. **Step 1: Approximate problem.** Let \((w_{j})_{j\in\mathbb{N}}\) be an orthonormal basis of \(L^{2}(\Omega)\) given by the eigenfunctions of the operator \(-\Delta\) with domain \(H^{1}_{\Gamma_{0}}(\Omega)\cap H^{2}(\Omega)\). Thus \((w_{j})_{j\in\mathbb{N}}\) is also a complete orthogonal system in \(H^{1}_{\Gamma_{0}}(\Omega)\) and \(H^{1}_{\Gamma_{0}}(\Omega)\cap H^{2}(\Omega)\). For each \(m\in\mathbb{N}\) we set \(W_{m}=[w_{1},\cdots,w_{m}]\) the linear subspace of \(H^{1}_{\Gamma_{0}}(\Omega)\cap H^{2}(\Omega)\) spanned by \(\{w_{1},\cdots,w_{m}\}\). Similarly, let \((z_{j})_{j\in\mathbb{N}}\) be an orthonormal basis of the Hilbert space \(L^{2}(\Gamma_{1})\), constructed such that \(\gamma_{0}(W_{m})|_{\Gamma_{1}}\subset Z_{m}\) for every \(m\in\mathbb{N}\), where \(Z_{m}=[z_{1},\cdots,z_{m}]\) represents the linear subspace of \(L^{2}(\Gamma_{1})\) spanned by \(\{z_{1},\cdots,z_{m}\}\). 
Now, for each \(m\in\mathbb{N}\) fixed arbitrary, we seek functions \(u_{m}:\Omega\times[0,T]\to\mathbb{R}\) and \(\delta_{m}:\Gamma_{1}\times[0,T]\to\mathbb{R}\) in the form \[u_{m}(x,t)=\sum_{k=1}^{m}a_{km}(t)\,w_{k}(x)\quad\text{and}\quad\delta_{m}(x,t)= \sum_{k=1}^{m}b_{km}(t)\,z_{k}(x)\] which are solutions to the approximate problem \[\left({}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t),w_{j}\right)+\left(( u_{m}(t),w_{j})\right)-\left(\delta_{m}^{\prime}(t),\gamma_{0}(w_{j})\right)_{ \Gamma_{1}}=0,\qquad 1\leq j\leq m\,; \tag{37}\] \[\left(f\delta_{m}^{\prime\prime}(t)+g\delta_{m}^{\prime}(t)+h \delta_{m}(t),z_{j}\right)_{\Gamma_{1}}+\left(\gamma_{0}(u_{m}^{\prime}(t)),z _{j}\right)_{\Gamma_{1}}=0,\qquad 1\leq j\leq m\,;\] (38) \[u_{m}(0)=u_{0m},\ u_{m}^{\prime}(0)=u_{1m},\ \delta_{m}(0)=\delta_{0m},\ \delta_{m}^{\prime}(0)=\delta_{1m}, \tag{39}\] where \[u_{0m}=\sum_{k=1}^{m}(u_{0},w_{k})w_{k}\to u_{0}\ \text{in}\ H_{ \Gamma_{0}}^{1}(\Omega)\cap H^{2}(\Omega), \tag{40}\] \[u_{1m}=\sum_{k=1}^{m}(u_{1},w_{k})w_{k}\to u_{1}\ \text{in}\ H_{ \Gamma_{0}}^{1}(\Omega)\] (41) \[\delta_{0m}=\sum_{k=1}^{m}(\delta_{0},z_{k})_{\Gamma_{1}}z_{k} \to\delta_{0}\ \text{in}\ L^{2}(\Gamma_{1})\] (42) \[\delta_{1m}=\gamma_{1}(u_{0m})_{|_{\Gamma_{1}}}=\sum_{k=1}^{m}c_{ km}z_{k}\to\gamma_{1}(u_{0})_{|_{\Gamma_{1}}}\ \text{in}\ L^{2}(\Gamma_{1}), \tag{43}\] for some \(c_{km}\in\mathbb{R}\). By setting \(y_{jm}(t)=a_{jm}^{\prime}(t)\) and \(v_{jm}(t)=b_{jm}^{\prime}(t)\) for each \(1\leq j\leq m\), and reinterpreting the variational identities (37)-(38) and the initial conditions (39), we obtain an equivalent linear fractional ODE system in \(\mathbb{R}^{4m}\): \[\left\{\begin{array}{rcl}{}^{C}\!D_{t}^{\alpha}y_{m}(t)&=&-A_{1,m}\,a_{m}(t) +A_{2,m}\,v_{m}(t),\\ a_{m}^{\prime}(t)&=&y_{m}(t),\\ v_{m}^{\prime}(t)&=&-(A_{3,m})^{-1}\big{[}A_{4,m}\,y_{m}(t)+A_{5,m}\,v_{m}(t)+ A_{6,m}\,b_{m}(t)\big{]},\\ b_{m}^{\prime}(t)&=&v_{m}(t),\end{array}\right. \tag{44}\] with the initial conditions \[\left\{\begin{array}{rcl}y_{m}(0)&=&u_{1m},\\ a_{m}(0)&=&u_{0m},\\ v_{m}(0)&=&\delta_{1m},\\ b_{m}(0)&=&\delta_{0m},\end{array}\right. \tag{45}\] where \[y_{m}(t)=\left(y_{1m}(t),\cdots,y_{mm}(t)\right), a_{m}(t)=\left(a_{1m}(t),\cdots,a_{mm}(t)\right),\] \[v_{m}(t)=\left(v_{1m}(t),\cdots,v_{mm}(t)\right), b_{m}(t)=\left(b_{1m}(t),\cdots,b_{mm}(t)\right),\] and \[A_{1,m}=\left[\left((w_{j},w_{i})\right)\right]_{i,j=1}^{m}, A_{2,m}=\left[\left(z_{j},\gamma_{0}(w_{i})\right)_{\Gamma_{1}} \right]_{i,j=1}^{m}, A_{3,m}=\left[(fz_{j},z_{i})_{\Gamma_{1}}\right]_{i,j=1}^{m},\] \[A_{4,m}=\left[\left(\gamma_{0}(w_{j}),z_{i}\right)_{\Gamma_{1}} \right]_{i,j=1}^{m}, A_{5,m}=\left[\left(gz_{j},z_{i}\right)_{\Gamma_{1}}\right]_{i,j=1}^{ m}, A_{6,m}=\left[\left(hz_{j},z_{i}\right)_{\Gamma_{1}}\right]_{i,j=1}^{m}.\] We emphasize that the continuity and positivity of \(f\) ensure \((fz_{j},z_{i})_{\Gamma_{1}}>0\) for all \(i,j\in\{1,\ldots,m\}\). This, combined with the orthonormality of \((z_{j})_{j\in\mathbb{N}}\) in \(L^{2}(\Gamma_{1})\), guarantees the invertibility of the matrix \(A_{3,m}\). It is not difficult to notice that the Cauchy problem (44)-(45) satisfies the assumption of Theorem 7. Therefore we can deduce the existence and uniqueness of solution for (44)-(45). 
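As a brief aside on how a system of the form (44)-(45) (more generally, (21)-(22)) can be simulated, the sketch below advances a small decoupled mixed-order test system through the equivalent Volterra form (23), freezing the right-hand side at the left node of each cell and integrating the kernel exactly, and then compares the result with the truncated Mittag-Leffler series \(E_{\alpha_{j}}(-t^{\alpha_{j}})\), which is the exact solution of the test problem. The test system, the orders, the step count, and the first-order scheme are illustrative assumptions introduced only for this demonstration; the paper itself is purely analytic.

```python
import numpy as np
from math import gamma

def solve_fractional_system(F, alphas, xi, T, n):
    """Explicit product-rectangle ("fractional Euler") scheme for the mixed-order system
    CD^{alpha_j} phi_j = F_j(phi, t), phi(0) = xi, based on the Volterra form, Eq. (23)."""
    d = len(xi)
    t = np.linspace(0.0, T, n + 1)
    phi = np.zeros((n + 1, d)); phi[0] = xi
    fvals = np.zeros((n + 1, d))
    for k in range(1, n + 1):
        fvals[k - 1] = F(phi[k - 1], t[k - 1])
        phi[k] = xi
        for j, a in enumerate(alphas):
            # kernel integrated exactly over each past cell, F frozen at the left node
            w = ((t[k] - t[:k]) ** a - (t[k] - t[1:k + 1]) ** a) / (a * gamma(a))
            phi[k, j] += np.dot(w, fvals[:k, j])
    return t, phi

# Decoupled test system CD^{alpha_j} phi_j = -phi_j, phi_j(0) = 1, whose exact solution
# is the Mittag-Leffler function E_{alpha_j}(-t^{alpha_j}) (illustrative choice).
alphas = [0.4, 0.7, 1.0]
t, phi = solve_fractional_system(lambda p, s: -p, alphas, np.ones(3), T=1.0, n=2000)

def mittag_leffler(a, z, terms=80):
    return sum(z ** k / gamma(a * k + 1.0) for k in range(terms))

exact = np.array([[mittag_leffler(a, -(s ** a)) for a in alphas] for s in t])
print("max error per component:", np.max(np.abs(phi - exact), axis=0))
```

The routine accepts any Lipschitz right-hand side, so the coupled system (44) could be treated by assembling the matrices \(A_{1,m},\ldots,A_{6,m}\) and passing the resulting affine map as \(F\).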
The aforementioned conclusions establish that \(y_{m}\), \({}^{C}\!D_{t}^{\alpha}y_{m}\), \(a_{m}^{\prime}\), \(v_{m}^{\prime}\), \({}^{C}\!D_{t}^{\alpha}v_{m}^{\prime}\) and \(b_{m}^{\prime}\) belong to \(C([0,T];\mathbb{R}^{m})\), and therefore \[u_{m}^{\prime},{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}\in C^{1} \left([0,T];H_{\Gamma_{0}}^{1}(\Omega)\cap H^{2}(\Omega)\right),\] \[{}^{C}\!D_{t}^{\alpha}\delta_{m}^{\prime\prime}\in C\left([0,T];L ^{2}(\Gamma_{1})\right)\text{ and }\delta_{m}\in C^{2}\left([0,T];L^{2}(\Gamma_{1}) \right),\] \[\left({}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t),w\right)+\left((u_ {m}(t),w)\right)-\left(\delta_{m}^{\prime}(t),\gamma_{0}(w)\right)_{\Gamma_{1 }}=0,\text{ for all }w\in W_{m}, \tag{46}\] \[\left(f\delta_{m}^{\prime\prime}(t)+g\delta_{m}^{\prime}(t)+h \delta_{m}(t),z\right)_{\Gamma_{1}}+\left(\gamma_{0}(u_{m}^{\prime}(t)),z \right)_{\Gamma_{1}}=0,\text{ for all }z\in Z_{m}, \tag{47}\] for each \(m\in\mathbb{N}\). **Step 2: A priori estimates. Estimate 1.** Taking \(w=2u_{m}^{\prime}\in W_{m}\) in (46) and \(z=2\delta_{m}^{\prime}\in Z_{m}\) in (47) we deduce \[2\left({}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t),u_{m}^{\prime}(t) \right)+2\left((u_{m}(t),u_{m}^{\prime}(t))\right)+2\left(f\delta_{m}^{\prime \prime}(t),\delta_{m}^{\prime}(t)\right)_{\Gamma_{1}}\\ +2\left|g^{\frac{1}{2}}\delta_{m}^{\prime}(t)\right|_{\Gamma_{1} }^{2}+2\left(h\delta_{m}(t),\delta_{m}^{\prime}(t)\right)_{\Gamma_{1}}=0.\] To address the first term on the left side of the above identity, we observe that \[\left({}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t),u_{m}^{\prime}(t)\right)=\sum_{j =1}^{m}\left[{}^{C}\!D_{t}^{\alpha}a_{jm}^{\prime}(t)\right]a_{jm}^{\prime}(t)\] and \[{}^{C}\!D_{t}^{\alpha}\left|u_{m}^{\prime}(t)\right|^{2}=\sum_{j=1}^{m}{}^{C} \!D_{t}^{\alpha}\left[a_{jm}^{\prime}(t)\right]^{2},\] since \((w_{j})_{j\in\mathbb{N}}\) is an orthonormal basis in \(L^{2}(\Omega)\). Then, supported by Proposition 3, we apply Theorem 2 to establish the inequality \[{}^{C}\!D_{t}^{\alpha}\left|u_{m}^{\prime}(t)\right|^{2}+2\left|g^{\frac{1}{2} }\delta_{m}^{\prime}(t)\right|_{\Gamma_{1}}^{2}+\frac{d}{dt}\left[\left\|u_{m} (t)\right\|^{2}+\left|f^{\frac{1}{2}}\delta_{m}^{\prime}(t)\right|_{\Gamma_{1 }}^{2}+\left|h^{\frac{1}{2}}\delta_{m}(t)\right|_{\Gamma_{1}}^{2}\right]\leq 0.\] Taking into account (12), the convergences (40)-(43) and the continuity of the trace map \(\gamma_{1}\) (i.e. 
\(|\gamma_{1}(u)|_{\Gamma_{1}}\leq c_{1}\left\|u\right\|_{H_{\Gamma_{0}}^{1}( \Omega)\cap H^{2}(\Omega)}\)), when applying the integral operator \(J_{t}^{1}\) to both sides of the last inequality we get \[J_{t}^{1-\alpha}\left|u_{m}^{\prime}(t)\right|^{2}+\left\|u_{m}(t) \right\|^{2}+f_{0}\left|\delta_{m}^{\prime}(t)\right|_{\Gamma_{1}}^{2}\] \[\leq\frac{T^{1-\alpha}}{\Gamma(2-\alpha)}\left|u_{m}^{\prime}(0) \right|^{2}+\left\|u_{m}(0)\right\|^{2}+f_{1}\left|\delta_{m}^{\prime}(0) \right|_{\Gamma_{1}}^{2}+h_{1}\left|\delta_{m}(0)\right|_{\Gamma_{1}}^{2}\] \[=\frac{T^{1-\alpha}}{\Gamma(2-\alpha)}\left|u_{1m}\right|^{2}+ \left\|u_{0m}\right\|^{2}+f_{1}\left|\gamma_{1}(u_{0m})\right|_{\Gamma_{1}}^{2 }+h_{1}\left|\delta_{0m}\right|_{\Gamma_{1}}^{2}\] \[\leq\frac{T^{1-\alpha}}{\Gamma(2-\alpha)}\left\|u_{1m}\right\|^{2 }+(1+f_{1}c_{1}^{2})\left\|u_{0m}\right\|_{H_{\Gamma_{0}}^{1}(\Omega)\cap H^{ 2}(\Omega)}^{2}+h_{1}\left|\delta_{0m}\right|_{\Gamma_{1}}^{2}\] \[\leq\frac{T^{1-\alpha}}{\Gamma(2-\alpha)}\left\|u_{1}\right\|^{2 }+(1+f_{1}c_{1}^{2})\left\|u_{0}\right\|_{H_{\Gamma_{0}}^{1}(\Omega)\cap H^{ 2}(\Omega)}^{2}+h_{1}\left|\delta_{0}\right|_{\Gamma_{1}}^{2}, \tag{48}\] where \[f_{0}=\min_{x\in\overline{\Gamma_{1}}}f(x),\,f_{1}=\max_{x\in\overline{\Gamma _{1}}}f(x),\,\,\text{and}\,\,h_{1}=\max_{x\in\overline{\Gamma_{1}}}h(x).\] Finally, from (16) and (48) we can see that there exists a constant \(K_{1}>0\), that depends on \(\alpha,T,f_{0},f_{1},h_{1}\) and \(c_{1}\), such that \[\left\|{}^{C}\!D_{t}^{\alpha}u_{m}\right\|_{L^{\infty}(0,T;L^{2}( \Omega))}^{2}+\left\|u_{m}\right\|_{L^{\infty}\left(0,T;H_{\Gamma_{0}}^{1}( \Omega)\right)}^{2}+\left\|\delta_{m}^{\prime}\right\|_{L^{\infty}(0,T;L^{2}( \Gamma_{1}))}^{2}\\ \leq K_{1}\left[\left\|u_{0}\right\|_{H_{\Gamma_{0}}^{1}(\Omega) \cap H^{2}(\Omega)}^{2}+\left\|u_{1}\right\|^{2}+\left|\delta_{0}\right|_{ \Gamma_{1}}^{2}\right], \tag{49}\] which completes estimate 1. Additionally, we can estimate \(\delta_{m}\) in \(L^{\infty}(0,T;L^{2}(\Gamma_{1}))\) based on the estimate of \(\|\delta_{m}^{\prime}\|_{L^{\infty}(0,T;L^{2}(\Gamma_{1}))}\). In fact, the Fundamental Theorem of Calculus implies \[\|\delta_{m}\|_{L^{\infty}(0,T;L^{2}(\Gamma_{1}))} = \left\|J_{t}^{1}\left[D_{t}^{1}\delta_{m}\right]+\delta_{m}(0) \right\|_{L^{\infty}(0,T;L^{2}(\Gamma_{1}))}\] \[\leq \|J_{t}^{1}\delta_{m}^{\prime}\|_{L^{\infty}(0,T;L^{2}(\Gamma_{1} ))}+\|\delta_{m}(0)\|_{L^{\infty}(0,T;L^{2}(\Gamma_{1}))}\] \[\leq \frac{T}{\Gamma(2)}\|\delta_{m}^{\prime}\|_{L^{\infty}(0,T;L^{2}( \Gamma_{1}))}+|\delta_{m}(0)|_{\Gamma_{1}}\] \[\leq T\|\delta_{m}^{\prime}\|_{L^{\infty}(0,T;L^{2}(\Gamma_{1}))}+| \delta_{0}|_{\Gamma_{1}}.\] therefore, (49) gives us the desired estimate. **Estimate 2**.: We apply \(D_{t}^{1}=\frac{d}{dt}\) in (46) and \({}^{C}\!D_{t}^{\alpha}\) in (47). In the resulting equations we take \(w=2{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}\in W_{m}\) and \(z=2\delta_{m}^{\prime\prime}\in Z_{m}\), and combine them to obtain \[2\left(\left[{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t)\right]^{ \prime},{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t)\right)+2\left(\left(u_{m}^{ \prime}(t),{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t)\right)\right)+2\left(f\,{ }^{C}\!D_{t}^{\alpha}\delta_{m}^{\prime\prime}(t),\delta_{m}^{\prime\prime}( t)\right)_{\Gamma_{1}}\\ +2\left(g\,{}^{C}\!D_{t}^{\alpha}\delta_{m}^{\prime}(t),\delta_{ m}^{\prime\prime}(t)\right)_{\Gamma_{1}}+2\left(h\,{}^{C}\!D_{t}^{\alpha}\delta_{m}(t), \delta_{m}^{\prime\prime}(t)\right)_{\Gamma_{1}}=0. 
\tag{50}\] Since \(\delta_{m}^{\prime}\in C^{1}([0,T];L^{2}(\Gamma_{1}))\), making use of (10) and (13) we can write \[\left(g\,{}^{C}\!D_{t}^{\alpha}\delta_{m}^{\prime}(t),\delta_{m}^{\prime \prime}(t)\right)_{\Gamma_{1}}=\left(J_{t}^{1-\alpha}g^{\frac{1}{2}}\delta_{m} ^{\prime\prime}(t),{}^{C}\!D_{t}^{1-\alpha}J_{t}^{1-\alpha}g^{\frac{1}{2}} \delta_{m}^{\prime\prime}(t)\right)_{\Gamma_{1}}.\] As done before in Estimate 1, considering Proposition 3 and Theorem 2, from (50), the equality above, and Young's inequality, it follows that for any \(\varepsilon>0\), \[\frac{d}{dt}\left|{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t)\right|^{2 }+{}^{C}\!D_{t}^{\alpha}\left\|u_{m}^{\prime}(t)\right\|^{2}+{}^{C}\!D_{t}^{ \alpha}\left|f^{\frac{1}{2}}\delta_{m}^{\prime\prime}(t)\right|_{\Gamma_{1}}^ {2}+{}^{C}\!D_{t}^{1-\alpha}\left|J_{t}^{1-\alpha}g^{\frac{1}{2}}\delta_{m}^{ \prime\prime}(t)\right|_{\Gamma_{1}}^{2}\] \[\leq 2\left|h^{\,C}\!D_{t}^{\alpha}\delta_{m}(t)\right|_{\Gamma_{1 }}\left|\delta_{m}^{\prime\prime}(t)\right|_{\Gamma_{1}}\leq\frac{1}{ \varepsilon}\left|h^{\,C}\!D_{t}^{\alpha}\delta_{m}(t)\right|_{\Gamma_{1}}^{ 2}+\varepsilon\left|\delta_{m}^{\prime\prime}(t)\right|_{\Gamma_{1}}^{2}\] \[\leq\frac{h_{1}\,T^{1-\alpha}}{\varepsilon\Gamma(2-\alpha)}J_{t} ^{1-\alpha}|\delta_{m}^{\prime}(t)|_{\Gamma_{1}}^{2}+\varepsilon\left|\delta_ {m}^{\prime\prime}(t)\right|_{\Gamma_{1}}^{2},\] where in the last inequality we have used (16). Integrating this inequality over \((0,t)\) and applying (12) it follows that \[\left|{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t)\right|^{2}+J_{t}^{1 -\alpha}\left\|u_{m}^{\prime}(t)\right\|^{2}+J_{t}^{1-\alpha}\left|f^{\frac{1 }{2}}\delta_{m}^{\prime\prime}(t)\right|_{\Gamma_{1}}^{2}+J_{t}^{\alpha}\left| J_{t}^{1-\alpha}g^{\frac{1}{2}}\delta_{m}^{\prime\prime}(t)\right|_{\Gamma_{1}}^{2}\] \[\leq\frac{h_{1}T^{1-\alpha}}{\varepsilon\Gamma(2-\alpha)}J_{t}^{2 -\alpha}|\delta_{m}^{\prime}(t)|_{\Gamma_{1}}^{2}+\varepsilon J_{t}^{1}\left| \delta_{m}^{\prime\prime}(t)\right|_{\Gamma_{1}}^{2}+\left|{}^{C}\!D_{t}^{ \alpha}u_{m}^{\prime}(0)\right|^{2}+\frac{T^{1-\alpha}}{\Gamma(2-\alpha)}\|u_ {m}^{\prime}(0)\|^{2}\] \[+\frac{T^{1-\alpha}}{\Gamma(2-\alpha)}\left|f^{\frac{1}{2}}\delta _{m}^{\prime\prime}(0)\right|_{\Gamma_{1}}^{2}+\frac{T^{\alpha}}{\Gamma(1+ \alpha)}\left|\left(J_{t}^{1-\alpha}g^{\frac{1}{2}}\delta_{m}^{\prime\prime} \right)(0)\right|_{\Gamma_{1}}^{2}. \tag{51}\] Our job now is to estimate the terms, in the right hand side of the above inequality, involving \({}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(0)\), \(\delta_{m}^{\prime\prime}(0)\) and \(\left(J_{t}^{1-\alpha}\delta_{m}^{\prime\prime}\right)(0)\). We start noting that the regularity of \(\delta_{m}^{\prime\prime}\) allows us to affirm that \(\left(J_{t}^{1-\alpha}\delta_{m}^{\prime\prime}\right)(0)=0\). Furthermore, the approximated equation (46) in \(t=0\) with \(w={}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(0)\) gives us \[\left|{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(0)\right|^{2} = -\left(\left(u_{m}(0),{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(0) \right)\right)+\left(\delta_{m}^{\prime}(t),\gamma_{0}({}^{C}\!D_{t}^{\alpha} u_{m}^{\prime}(0))\right)_{\Gamma_{1}}\] \[= \left(\Delta u_{m}(0),{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(0) \right)\leq\frac{1}{2}\left|\Delta u_{0m}\right|^{2}+\frac{1}{2}\left|{}^{C}\! 
D_{t}^{\alpha}u_{m}^{\prime}(0)\right|^{2},\] which gives us the estimate \[\left|{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(0)\right|^{2}\leq\left|\Delta u_{0m}\right|^{2}\leq\left\|u_{0}\right\|_{H_{\Gamma_{0}}^{1}(\Omega)\cap H^{2}(\Omega)}^{2}. \tag{52}\] Taking into account the continuity of \(\gamma_{0}\) (i.e. \(\left|\gamma_{0}(u)\right|_{\Gamma_{1}}\leq c_{0}\left\|u\right\|_{H_{\Gamma_{0}}^{1}(\Omega)}\)) and \(\gamma_{1}\), a similar approach on equation (47) with \(z=f\delta_{m}^{\prime\prime}(0)\) allows us to deduce that \[\left|f\delta_{m}^{\prime\prime}(0)\right|_{\Gamma_{1}}^{2} = \left(f\delta_{m}^{\prime\prime}(0),-g\delta_{m}^{\prime}(0)-h\delta_{m}(0)-\gamma_{0}(u_{m}^{\prime}(0))\right)_{\Gamma_{1}}\] \[\leq \left|f\delta_{m}^{\prime\prime}(0)\right|_{\Gamma_{1}}\left|-g\gamma_{1}(u_{0m})-h\delta_{0m}-\gamma_{0}(u_{1m})\right|_{\Gamma_{1}}\] \[\leq \frac{1}{2}\left|f\delta_{m}^{\prime\prime}(0)\right|_{\Gamma_{1}}^{2}+\left(g_{1}c_{1}\right)^{2}\left\|u_{0m}\right\|^{2}+2h_{1}^{2}\left|\delta_{0m}\right|_{\Gamma_{1}}^{2}+2c_{0}^{2}\left\|u_{1m}\right\|^{2}.\] Therefore we conclude \[\left|\delta_{m}^{\prime\prime}(0)\right|_{\Gamma_{1}}^{2}\leq\frac{2(g_{1}c_{1})^{2}}{f_{0}}\left\|u_{0}\right\|_{H_{\Gamma_{0}}^{1}(\Omega)\cap H^{2}(\Omega)}^{2}+\frac{4c_{0}^{2}}{f_{0}}\left\|u_{1}\right\|^{2}+\frac{4h_{1}^{2}}{f_{0}}\left|\delta_{0}\right|_{\Gamma_{1}}^{2}. \tag{53}\] Thus, with (52) and (53) in (51), inequality (7) guarantees that \[\big{|}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t)\big{|}^{2}+\frac{1}{T^{\alpha}\Gamma(1-\alpha)}\left[J_{t}^{1}\left\|u_{m}^{\prime}(t)\right\|^{2}+f_{0}J_{t}^{1}\left|\delta_{m}^{\prime\prime}(t)\right|_{\Gamma_{1}}^{2}\right]+\frac{g_{0}}{T^{1-\alpha}\Gamma(\alpha)}J_{t}^{1}\left|J_{t}^{1-\alpha}\delta_{m}^{\prime\prime}(t)\right|_{\Gamma_{1}}^{2}\] \[\leq\frac{h_{1}}{\varepsilon}\left[\frac{T^{1-\alpha}}{\Gamma(2-\alpha)}\right]^{2}J_{t}^{1}|\delta_{m}^{\prime}(t)|_{\Gamma_{1}}^{2}+\varepsilon J_{t}^{1}\left|\delta_{m}^{\prime\prime}(t)\right|_{\Gamma_{1}}^{2}+C_{2}\left[\left\|u_{0}\right\|_{H_{\Gamma_{0}}^{1}(\Omega)\cap H^{2}(\Omega)}^{2}+\left\|u_{1}\right\|^{2}+\left|\delta_{0}\right|_{\Gamma_{1}}^{2}\right],\] where \(C_{2}>0\) is a constant. Choosing \(\varepsilon\) sufficiently small, such that \(0<\varepsilon<f_{0}/T^{\alpha}\Gamma(1-\alpha)\), for some \(C_{3}>0\), we get \[\big{|}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t)\big{|}^{2}+J_{t}^{1}\left\|u_{m}^{\prime}(t)\right\|^{2}+J_{t}^{1}\left|\delta_{m}^{\prime\prime}(t)\right|_{\Gamma_{1}}^{2}\\ \leq C_{3}\left[\left\|\delta_{m}^{\prime}\right\|_{L^{2}(0,T;L^{2}(\Gamma_{1}))}^{2}+\left\|u_{0}\right\|_{H_{\Gamma_{0}}^{1}(\Omega)\cap H^{2}(\Omega)}^{2}+\left\|u_{1}\right\|^{2}+\left|\delta_{0}\right|_{\Gamma_{1}}^{2}\right].\] From this inequality and (49) there exists a constant \(K_{2}>0\), which depends on \(\alpha,T,f_{0},g_{1},h_{1},c_{0}\) and \(c_{1}\), such that \[\big{\|}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}\big{\|}_{L^{\infty}(0,T;L^{2}(\Omega))}^{2}+\left\|u_{m}^{\prime}\right\|_{L^{2}(0,T;H_{\Gamma_{0}}^{1}(\Omega))}^{2}+\left\|\delta_{m}^{\prime\prime}\right\|_{L^{2}(0,T;L^{2}(\Gamma_{1}))}^{2}\\ \leq K_{2}\left[\left\|u_{0}\right\|_{H_{\Gamma_{0}}^{1}(\Omega)\cap H^{2}(\Omega)}^{2}+\left\|u_{1}\right\|^{2}+\left|\delta_{0}\right|_{\Gamma_{1}}^{2}\right]. \tag{54}\] Moreover, since there exists \(C_{4}>0\) such that \(|u|\leq C_{4}\|u\|\), using (11) and the continuity of the RL fractional integral of order \(\alpha\) (cf.
[10, Theorem 3.1]) we obtain \[\|u_{m}^{\prime}\|_{L^{\infty}(0,T;L^{2}(\Omega))} = \|J_{t}^{\alpha}{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}+u_{m}^{\prime}(0)\|_{L^{\infty}(0,T;L^{2}(\Omega))} \tag{55}\] \[\leq \|J_{t}^{\alpha}{}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}\|_{L^{\infty}(0,T;L^{2}(\Omega))}+\|u_{m}^{\prime}(0)\|_{L^{\infty}(0,T;L^{2}(\Omega))}\] \[\leq \frac{T^{\alpha}}{\Gamma(\alpha+1)}\|^{C}\!D_{t}^{\alpha}u_{m}^{\prime}\|_{L^{\infty}(0,T;L^{2}(\Omega))}+|u_{m}^{\prime}(0)|\] \[\leq \frac{T^{\alpha}}{\Gamma(\alpha+1)}\|^{C}\!D_{t}^{\alpha}u_{m}^{\prime}\|_{L^{\infty}(0,T;L^{2}(\Omega))}+C_{4}\|u_{1}\|.\] In this manner, (54) enables us to estimate \(u_{m}^{\prime}\) in \(L^{\infty}\left(0,T;L^{2}(\Omega)\right)\), finishing the a priori estimates. **Step 3: Passage to the limit - Existence.** The a priori estimates give us \[\left(u_{m}\right)_{m\in\mathbb{N}}\text{ is bounded in }L^{\infty}\left(0,T;H_{\Gamma_{0}}^{1}(\Omega)\right);\] \[\left(u_{m}^{\prime}\right)_{m\in\mathbb{N}}\text{ is bounded in }L^{\infty}\left(0,T;L^{2}(\Omega)\right)\text{ and in }L^{2}\left(0,T;H_{\Gamma_{0}}^{1}(\Omega)\right);\] \[\left({}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}\right)_{m\in\mathbb{N}}\text{ is bounded in }L^{\infty}\left(0,T;L^{2}(\Omega)\right);\] \[\left(\delta_{m}\right)_{m\in\mathbb{N}}\text{ and }\left(\delta_{m}^{\prime}\right)_{m\in\mathbb{N}}\text{ are bounded in }L^{\infty}\left(0,T;L^{2}(\Gamma_{1})\right);\] \[\left(\delta_{m}^{\prime\prime}\right)_{m\in\mathbb{N}}\text{ is bounded in }L^{2}\left(0,T;L^{2}(\Gamma_{1})\right).\] Therefore, with a standard procedure, we can find subsequences of \(\left(u_{m}\right)_{m\in\mathbb{N}}\) and \(\left(\delta_{m}\right)_{m\in\mathbb{N}}\), for which we still use the same notations, and functions \(u\) and \(\delta\) such that \[u_{m}\overset{\star}{\rightharpoonup}u\text{ in }L^{\infty}\left(0,T;H^{1}_{\Gamma_{0}}(\Omega)\right), \tag{56}\] \[u^{\prime}_{m}\overset{\star}{\rightharpoonup}u^{\prime}\text{ in }L^{\infty}\left(0,T;L^{2}(\Omega)\right)\text{ and }u^{\prime}_{m}\rightharpoonup u^{\prime}\text{ in }L^{2}\left(0,T;H^{1}_{\Gamma_{0}}(\Omega)\right), \tag{57}\] \[\delta_{m}\overset{\star}{\rightharpoonup}\delta\text{ and }\delta^{\prime}_{m}\overset{\star}{\rightharpoonup}\delta^{\prime}\text{ \ in }L^{\infty}\left(0,T;L^{2}(\Gamma_{1})\right), \tag{58}\] \[\delta^{\prime\prime}_{m}\rightharpoonup\delta^{\prime\prime}\text{ in }L^{2}(0,T;L^{2}(\Gamma_{1}))\,. \tag{59}\] Moreover, by the continuity of the map \(\gamma_{0}:H^{1}_{\Gamma_{0}}(\Omega)\to L^{2}(\Gamma_{1})\) we have \[\gamma_{0}(u^{\prime}_{m})\rightharpoonup\gamma_{0}(u^{\prime})\text{ in }L^{2}\left(0,T;L^{2}(\Gamma_{1})\right). \tag{60}\] Besides the convergences in (56)-(60), we must also consider the boundedness of the sequence \(\left({}^{C}\!D_{t}^{\alpha}u^{\prime}_{m}\right)_{m\in\mathbb{N}}\) in \(L^{\infty}\left(0,T;L^{2}(\Omega)\right)\). It is worth noting that, due to the non-standard behavior of the Caputo fractional derivative, this argument is unconventional and requires additional justification. Since \(\left({}^{C}\!D_{t}^{\alpha}u^{\prime}_{m}\right)_{m\in\mathbb{N}}\) is bounded in \(L^{\infty}\left(0,T;L^{2}(\Omega)\right)\), we can extract a subsequence, denoted the same way, and find a function \(v\) such that \[{}^{C}\!D_{t}^{\alpha}u^{\prime}_{m}\overset{\star}{\rightharpoonup}v\text{ in }L^{\infty}\left(0,T;L^{2}(\Omega)\right).
\tag{61}\] The main task now is to prove that we can compute the Caputo fractional derivative of order \(\alpha\) of \(u^{\prime}\) and that \(v={}^{C}\!D_{t}^{\alpha}u^{\prime}\). First, observe that from (61) and the definition of the Caputo fractional derivative, for all \(w\in H^{1}_{\Gamma_{0}}(\Omega)\) and \(\theta\in\mathcal{D}(0,T)\), we have \[\int_{0}^{T}\frac{d}{dt}J^{1-\alpha}_{t}\left(u^{\prime}_{m}(t)-u^{\prime}_{m}(0),w\right)\theta(t)dt\to\int_{0}^{T}\left(v(t),w\right)\theta(t)dt\,.\] On the other hand, the continuity of the RL fractional integral, along with (41) and (57), enables us to deduce that \[\int_{0}^{T}\frac{d}{dt}J^{1-\alpha}_{t}\left(u^{\prime}_{m}(t)-u^{\prime}_{m}(0),w\right)\theta(t)dt\] \[=\int_{0}^{T}J^{1-\alpha}_{t}\left(u^{\prime}_{m}(t)-u_{1m},w\right)\theta^{\prime}(t)\,dt\to\int_{0}^{T}J^{1-\alpha}_{t}\left(u^{\prime}(t)-u_{1},w\right)\theta^{\prime}(t)\,dt.\] Hence, from the uniqueness of the limit, \[\int_{0}^{T}\frac{d}{dt}J^{1-\alpha}_{t}\left(u^{\prime}(t)-u_{1},w\right)\theta(t)\,dt=\int_{0}^{T}\left(v(t),w\right)\theta(t)\,dt\,,\] for all \(w\in H^{1}_{\Gamma_{0}}(\Omega)\) and \(\theta\in\mathcal{D}(0,T).\) This identity shows that \(D_{t}^{\alpha}(u^{\prime}-u_{1})=v\) in \(L^{\infty}(0,T;L^{2}(\Omega))\). Finally, applying Theorem 4 we get that \(u^{\prime}\in C([0,T];L^{2}(\Omega))\) and \(u^{\prime}(0)=u_{1}\) in \(L^{2}(\Omega)\). From this we may conclude that \({}^{C}\!D_{t}^{\alpha}u^{\prime}(t)=v(t),\) for a.e. \(t\in(0,T),\) and therefore we have \[{}^{C}\!D_{t}^{\alpha}u^{\prime}_{m}\stackrel{{\star}}{{\rightharpoonup}}{{}^{C}\!D_{t}^{\alpha}u^{\prime}}\text{ in }L^{\infty}\left(0,T;L^{2}(\Omega)\right), \tag{62}\] as we wanted. Now, we can take the limit in the approximate problem (46)-(47). The approximate equation (46) and the convergences (56), (58), and (62) yield \[\left({}^{C}\!D_{t}^{\alpha}u^{\prime}(t),\varphi\right)+\left((u(t),\varphi)\right)-\left(\delta^{\prime}(t),\gamma_{0}(\varphi)\right)_{\Gamma_{1}}=0, \tag{63}\] for every \(\varphi\in H^{1}_{\Gamma_{0}}(\Omega)\) and a.e. \(t\in(0,T)\). In particular, \[\left\langle-\Delta u(t),\varphi\right\rangle_{\mathcal{D}^{\prime}(\Omega)\times\mathcal{D}(\Omega)}=\left((u(t),\varphi)\right)=-\left({}^{C}\!D_{t}^{\alpha}u^{\prime}(t),\varphi\right),\,\text{for all }\varphi\in\mathcal{D}(\Omega)\,,\] which implies \[-\Delta u(t)=-{}^{C}\!D_{t}^{\alpha}u^{\prime}(t)\;\;\text{in}\,L^{2}(\Omega)\,,\text{ for a.e.}\,t\in(0,T), \tag{64}\] since \({}^{C}\!D_{t}^{\alpha}u^{\prime}(t)\in L^{2}(\Omega)\) for a.e. \(t\in(0,T).\) This proves (32) and shows that \(u(t)\in\mathcal{H}_{\Delta}(\Omega)\) for a.e. \(t\in(0,T).\) Taking into account (47) and the convergences (58)-(60) we conclude \[\left(f\delta^{\prime\prime}(t)+g\delta^{\prime}(t)+h\delta(t)+\gamma_{0}(u^{\prime}(t)),\psi\right)_{\Gamma_{1}}=0,\quad\forall\psi\in L^{2}(\Gamma_{1}),\text{ a.e. in }(0,T).\] Consequently, by the du Bois-Reymond lemma, \[f\delta^{\prime\prime}(t)+g\delta^{\prime}(t)+h\delta(t)+\gamma_{0}(u^{\prime}(t))=0\text{ in }L^{2}(\Gamma_{1}),\text{ for a.e. }t\in(0,T), \tag{65}\] and (33) is proved. In order to prove (34) we multiply (64) by \(\varphi\in H^{1}_{\Gamma_{0}}(\Omega)\) and integrate over \(\Omega\), which leads to \[\left({}^{C}\!D_{t}^{\alpha}u^{\prime}(t),\varphi\right)-\left(\Delta u(t),\varphi\right)=0,\quad\text{for a.e.
}t\in(0,T).\] Therefore, the generalized Green identity gives \[\left({}^{C}\!D_{t}^{\alpha}u^{\prime}(t),\varphi\right)+\left((u(t),\varphi)\right)-\left\langle\gamma_{1}(u(t)),\gamma_{0}(\varphi)\right\rangle_{H^{-\frac{1}{2}}(\Gamma)\times H^{\frac{1}{2}}(\Gamma)}=0, \tag{66}\] for almost every \(t\in(0,T)\) and for every \(\varphi\in H^{1}_{\Gamma_{0}}(\Omega)\). We compare (63) and (66) to conclude (34). To verify that the functions \(u\) and \(\delta\) satisfy the initial conditions (35) and (36) we first observe that we have already proven \(u^{\prime}(0)=u_{1}\) in \(H^{1}_{\Gamma_{0}}(\Omega)\). The regularity of \(u\), given by (30), implies \(u\in C([0,T];H^{1}_{\Gamma_{0}}(\Omega))\) and the convergences (40), (56) and (57) yield \(u(0)=u_{0}.\) Similarly we have \(\delta\in C([0,T];L^{2}(\Gamma_{1}))\) and from the convergences (43), (58) and (59) we obtain \(\delta^{\prime}(0)=\gamma_{1}(u_{0})|_{\Gamma_{1}}\) in \(L^{2}(\Gamma_{1})\). This ends Step 3. **Step 4: Uniqueness.** Let \((u,\delta)\) and \((v,\varrho)\) be two pairs of functions satisfying (30)-(36). Putting \(w=u-v\) and \(\zeta=\delta-\varrho\), we observe that the pair \((w,\zeta)\) has the regularity described in (30), (31) and satisfies \[{}^{C}\!D_{t}^{\alpha}w_{t}(t)-\Delta w(t)=0\,\,\,\text{in}\,\,L^{2}(\Omega)\,,\,\,\text{for a.e.}\,\,t\in(0,T); \tag{67}\] \[f\zeta_{tt}(t)+g\zeta_{t}(t)+h\zeta(t)=-\gamma_{0}(w_{t}(t))\,\,\text{in}\,\,L^{2}(\Gamma_{1}),\,\,\text{for a.e.}\,\,t\in(0,T); \tag{68}\] \[\int_{\Gamma_{1}}\zeta_{t}(t)\gamma_{0}(\varphi)d\Gamma_{1}=\left\langle\gamma_{1}(w(t)),\gamma_{0}(\varphi)\right\rangle_{H^{-\frac{1}{2}}(\Gamma)\times H^{\frac{1}{2}}(\Gamma)},\] \[\text{for all}\,\,\varphi\in H^{1}_{\Gamma_{0}}(\Omega)\,\,\,\text{and a.e.}\,\,\,t\in(0,T); \tag{69}\] \[w(0)=w_{t}(0)=0\,\,\text{in}\,\,L^{2}(\Omega)\,\,\text{and}\,\,\zeta(0)=\zeta_{t}(0)=0\,\,\text{in}\,\,L^{2}(\Gamma_{1}). \tag{70}\] Taking the \(L^{2}(\Omega)\) inner product of both sides of (67) with \(2w^{\prime}(t)\), using the generalized Green's formula and considering (69), we obtain \[2\left({}^{C}\!D_{t}^{\alpha}w^{\prime}(t),w^{\prime}(t)\right)+2\left((w(t),w^{\prime}(t))\right)-2\left(\zeta^{\prime}(t),\gamma_{0}(w^{\prime}(t))\right)_{\Gamma_{1}}=0\,.\] This equality, after taking the \(L^{2}(\Gamma_{1})\) inner product of both sides of (68) with \(2\zeta^{\prime}(t)\), implies that \[2\left({}^{C}\!D_{t}^{\alpha}w^{\prime}(t),w^{\prime}(t)\right)+2\left((w(t),w^{\prime}(t))\right)+2\left(f\zeta^{\prime\prime}(t),\zeta^{\prime}(t)\right)_{\Gamma_{1}}\\ +2\left(g\zeta^{\prime}(t),\zeta^{\prime}(t)\right)_{\Gamma_{1}}+2\left(h\zeta(t),\zeta^{\prime}(t)\right)_{\Gamma_{1}}=0. \tag{71}\] From now on, we proceed as in Estimate 1 and obtain \[J_{t}^{1-\alpha}\left|w^{\prime}(t)\right|^{2}+\left\|w(t)\right\|^{2}+\left|\zeta^{\prime}(t)\right|_{\Gamma_{1}}^{2}+\left|\zeta(t)\right|_{\Gamma_{1}}^{2}\leq 0\,,\text{for a.e.}\,\,t\in(0,T)\,,\] where to arrive at the above inequality, we have considered the null initial conditions (70). Thus we have \(w(t)=0\) in \(H^{1}_{\Gamma_{0}}(\Omega)\) and \(\zeta(t)=0\) in \(L^{2}(\Gamma_{1})\) for a.e. \(t\in(0,T)\), which shows that the pairs of functions \((u,\delta)\) and \((v,\varrho)\) are equal, concluding the proof of uniqueness. **Step 5: Continuous dependence.** We can now say that the unique pair of functions \((u,\delta)\), constructed in the previous steps, constitutes a solution to the problem (1)-(6).
With this step, we conclude the well-posedness of the problem (1)-(6) and demonstrate that the solution \((u,\delta)\) depends continuously on the parameters \(f,\,\,g,\,\,h,\,\,u_{0},\,\,u_{1}\) and \(\delta_{0}\). Initially we observe that from the weakly lower semicontinuity of the norms and estimates (49), (54) and (55) we get \[\left\|{}^{C}\!D_{t}^{\alpha}u^{\prime}\right\|_{L^{\infty}(0,T;L^{2}(\Omega)) }^{2}+\left\|u^{\prime}\right\|_{L^{\infty}(0,T;L^{2}(\Omega))}^{2}+\left\|u^{ \prime}\right\|_{L^{2}\left(0,T;H^{1}_{\Gamma_{0}}(\Omega)\right)}^{2}+\left\| u\right\|_{L^{\infty}\left(0,T;H^{1}_{\Gamma_{0}}(\Omega)\right)}^{2}\] \[+\|f-\tilde{f}\|_{C(\overline{\Gamma}_{1})}+\|g-\tilde{g}\|_{C( \overline{\Gamma}_{1})}+\|h-\tilde{h}\|_{C(\overline{\Gamma}_{1})}\Big{]}.\] ## 5. Closing Remarks We have not yet discussed in this article the equation used to describe the time-fractional wave equation in (1). It is important to note that our approach differs from the conventional one found in established literature (e.g., [30] and related references), which is represented by the equation \[{}^{C}\!D_{t}^{1+\alpha}u(x,t)-\Delta u(x,t)=0,\quad\text{in }\Omega\times(0,T),\] (1') where \(0<\alpha<1\). These two formulations differ fundamentally because, as clarified in item \((v)\) of Proposition 1, the equality \({}^{C}\!D_{t}^{1+\alpha}f(t)={}^{C}\!D_{t}^{\alpha}f^{\prime}(t)\), for a.e. \(t\in[0,T]\), is valid only when the function \(f\) possesses adequate regularity. In our case, for example, we cannot establish that \(J_{t}^{1-\alpha}u(t)\in W^{2,1}(0,T;L^{2}(\Omega))\), which prevents us from applying (15). Nevertheless, the constructions and convergences we have achieved during our study are sufficient to demonstrate that our solution \(u\) indeed satisfies (1'). Let us begin by considering the approximated solutions \(u_{m}\) of our main theorem, \(\varphi\in H_{\Gamma_{0}}^{1}(\Omega)\) and \(\theta\in\mathcal{D}(0,T)\). Then we have \[\int_{0}^{T}\left({}^{C}\!D_{t}^{1+\alpha}u_{m}(t),\varphi\right) \theta(t)dt=\int_{0}^{T}\frac{d^{2}}{dt^{2}}J_{t}^{1-\alpha}\left(u_{m}(t)-u_ {m}(0)-tu_{m}^{\prime}(0),\varphi\right)\theta(t)dt\\ =\int_{0}^{T}J_{t}^{1-\alpha}\left(u_{m}(t)-u_{m}(0)-tu_{m}^{ \prime}(0),\varphi\right)\theta^{\prime\prime}(t)dt.\] Additionally, since \(u_{m}\) is sufficiently regular, by (15) we have \[\int_{0}^{T}\left({}^{C}\!D_{t}^{1+\alpha}u_{m}(t),\varphi\right)\theta(t)dt= \int_{0}^{T}\left({}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t),\varphi\right) \theta(t)dt.\] In this context, we observe from the convergences on Step 3 and the continuity of the RL fractional integral of order \(\alpha\) that \[\int_{0}^{T}\left({}^{C}\!D_{t}^{\alpha}u_{m}^{\prime}(t),\varphi\right) \theta(t)dt\rightarrow\int_{0}^{T}\left({}^{C}\!D_{t}^{\alpha}u^{\prime}(t), \varphi\right)\theta(t)dt\] and \[\int_{0}^{T}J_{t}^{1-\alpha}\left(u_{m}(t)-u_{m}(0)-tu_{m}^{ \prime}(0),\varphi\right)\theta^{\prime\prime}(t)dt\\ \rightarrow\int_{0}^{T}J_{t}^{1-\alpha}\left(u(t)-u(0)-tu^{ \prime}(0),\varphi\right)\theta^{\prime\prime}(t)dt.\] Therefore, the uniqueness of the limit implies that \[{}^{C}\!D_{t}^{\alpha+1}u(x,t)={}^{C}\!D_{t}^{\alpha}u^{\prime}(x,t),\] for almost every \((x,t)\in\Omega\times(0,T)\). As a result, we are further investigating, after establishing the appropriate solution concepts, the existence and uniqueness of a strong solution to problem (2)-(6) while considering (1') in place of (1).
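We close with an elementary consistency check, for smooth functions, of the identity established above (this computation is ours and uses only the standard power rule for Caputo derivatives). Take \(u(t)=t^{2}\), so that \(u(0)=u^{\prime}(0)=0\). Then \[{}^{C}\!D_{t}^{1+\alpha}t^{2}=\frac{\Gamma(3)}{\Gamma(2-\alpha)}\,t^{1-\alpha}=\frac{2}{\Gamma(2-\alpha)}\,t^{1-\alpha}={}^{C}\!D_{t}^{\alpha}(2t),\] so the time-fractional terms of (1) and (1') agree on such regular functions, in agreement with item \((v)\) of Proposition 1.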
2310.00134
Brauer's problem 21 for principal blocks
Problem 21 of Brauer's list of problems from 1963 asks whether for any positive integer k there are finitely many isomorphism classes of groups that occur as the defect group of a block with k irreducible characters. We solve this problem for principal blocks. Another long-standing open problem (from 1982) in this area asks whether the defect group of a block with 3 irreducible characters is necessarily the cyclic group of order 3. In most cases we reduce this problem to a question on simple groups that is closely related to the recent solution of Brauer's height zero conjecture.
Alexander Moretó, Noelia Rizo, A. A. Schaeffer Fry
2023-09-29T20:50:00Z
http://arxiv.org/abs/2310.00134v1
# Brauer's problem 21 for principal blocks ###### Abstract. Problem 21 of Brauer's list of problems from 1963 asks whether for any positive integer \(k\) there are finitely many isomorphism classes of groups that occur as the defect group of a block with \(k\) irreducible characters. We solve this problem for principal blocks. Another long-standing open problem (from 1982) in this area asks whether the defect group of a block with 3 irreducible characters is necessarily the cyclic group of order 3. In most cases we reduce this problem to a question on simple groups that is closely related to the recent solution of Brauer's height zero conjecture. 2010 Mathematics Subject Classification: Primary 20C15, 20C20. The authors thank the Isaac Newton Institute for Mathematical Sciences (INI) in Cambridge and the organizers of the Summer 2022 INI program Groups, Representations, and Applications: New Perspectives, supported by EPSRC grant EP/R014604/1, where part of this work was completed. The second and third-named authors also thank the National Science Foundation Grant No. DMS-1928930, which supported them while they were in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Summer of 2023. The first and second-named authors are supported by Ministerio de Ciencia e Innovacion (Grants PID2019-103854GB-I00 and PID2022-137612NB-I00 funded by MCIN/AEI/10.13039/501100011033 and "ERDF A way of making Europe"). The first-named author also acknowledges support by Generalitat Valenciana CIAICO/2021/163. The second-named author is supported by a CDEIGEN grant CIDEIG/2022/29 funded by Generalitat Valenciana. The third-named author also gratefully acknowledges support from the National Science Foundation, Award No. DMS-2100912, and her former institution, Metropolitan State University of Denver, which holds the award and allows her to serve as PI. She also thanks the first and second authors and the CARGRUPS research team at U. Valencia for a productive stay in March 2023. The authors thank A. Maroti and G. Navarro for many useful conversations on Theorem C. Recall that Landau's theorem asserts that the order of a finite group is bounded from above in terms of the number of conjugacy classes. As pointed out by Brauer [1], Landau's argument provides the bound \(|G|\leqslant 2^{2^{k(G)}}\). Brauer's Problem 3 asks for substantially better bounds. This problem has also generated a large amount of research. L. Pyber [22] found an asymptotically substantially better bound, although it is still not known whether there exists a bound of the form \(|G|\leqslant c^{k(G)}\) for some constant \(c\). We refer the reader to [1] for the best known bound as of the writing of this article. Note that Brauer's Problem 21 asks for a blockwise version of Landau's theorem. As Brauer did with Landau's theorem, it also seems interesting to ask for asymptotically good bounds for the order of a defect group in terms of the number of characters in the block. Our proof of Theorem A provides an explicit bound that surely will be far from best possible. For almost simple groups, we obtain a better bound in Theorem 2.1. Given a Brauer \(p\)-block \(B\) of a finite group \(G\) with defect group \(D\), we will write \(k(B)\) to denote the number of irreducible complex characters in \(B\). R. Brauer himself proved that if \(k(B)=1\) then \(D\) is the trivial group ([26, Theorem 3.18]). More than 40 years later, J. Brandt proved that if \(k(B)=2\) then \(D\) is the cyclic group of order 2.
However, despite a large amount of work in the area in recent years, the conjecture remains open when \(k(B)\geqslant 3\). It has been speculated since Brandt's [1] 1982 paper that if \(k(B)=3\) then the defect group is cyclic of order 3. It seems to have been known to Kulshammer since 1990 that this follows from the Alperin-McKay conjecture [15]. A proof of this fact appeared in [11], where Kulshammer, G. Navarro, B. Sambale and P. H. Tiep formally state Brandt's speculation as a conjecture. We present a condition on quasisimple groups that would imply the Kulshammer-Navarro-Sambale-Tiep conjecture (that is, that \(k(B)=3\) implies that the defect group is of size 3). **Condition B**.: _Let \(p\) be an odd prime and let \(S\) be a non-abelian simple group of order divisible by \(p\). We say that Condition B holds for \((S,p)\) if the following holds: let \(K\) be a quasisimple group of order divisible by \(p\) with center \(Z\), a cyclic \(p^{\prime}\)-group, and \(K/Z=S\). Let \(B\) be a non-principal faithful \(p\)-block of \(K\) with \(|\mathrm{cd}(B)|>1\) and let \(D\) be a defect group of \(B\), not cyclic and elementary abelian. Then there are at least 4 irreducible characters in \(B\) not \(\mathrm{Aut}(K)\)-conjugate._ **Theorem C**.: _Let \(p\) be a prime. If \(p\) is odd, suppose that Condition B holds for \((S,p)\) for all non-abelian composition factors \(S\) of \(G\). Then the Kulshammer-Navarro-Sambale-Tiep conjecture holds for \(G\)._ We remark that this reduction and Condition B have played an influential role in the recent solution of Brauer's height zero conjecture [11]. In fact, the fundamental Theorem B of [11] is a slightly weaker version of Condition B: it shows there always exist 3 irreducible characters in \(B\) not \(\mathrm{Aut}(K)\)-conjugate. Moreover, we will see in Remark 5.7 that this is tight. Although Condition B seems to hold in many situations, we will see an example of a family of simple groups for which Condition B does not hold, for \(p=5\). In Section 2, we prove Brauer's Problem 21 for the principal blocks of almost simple groups, which is used in Section 3 to prove Theorem A. In Section 4, we prove Theorem C. We conclude the paper by discussing Condition B in Section 5. ## 2. BP21 for almost simple groups The following is the main result of this section. **Theorem 2.1**.: _Let \(p\) be a prime. Let \(S\leqslant A\leqslant\operatorname{Aut}(S)\), where \(S\) is a finite nonabelian simple group, and \(p\mid|S|\). Let \(P\in\operatorname{Syl}_{p}(A)\) and let \(k:=k(B_{0}(A))\) be the number of irreducible complex characters in the principal block of \(A\). Then we have:_ 1. \(|P|\leqslant k^{2(k^{2}+2k)}\)_._ 2. \(|P\cap S|\leqslant k^{2k^{2}}\)_._ Note that in the context of Theorem 2.1, any character in \(\operatorname{Irr}(B_{0}(S))\) lies below some character in \(\operatorname{Irr}(B_{0}(A))\) by [22, Theorem (9.4)], so that \(k(B_{0}(A))\geqslant k_{\operatorname{Aut}(S)}(B_{0}(S))\), where we write \(k_{\operatorname{Aut}(S)}(B_{0}(S))\) for the number of distinct \(\operatorname{Aut}(S)\)-orbits intersecting \(\operatorname{Irr}(B_{0}(S))\). _Remark 2.2_.: We further remark that, by the results of [16, 17, 18], we may assume for Theorem 2.1 that \(k(B_{0}(A))\geqslant 7\). The following is the main result of [16], from which we obtain a bound for \(p\) in terms of the number of irreducible characters in a given principal block. **Lemma 2.3** (Hung-Schaeffer Fry).: _Let \(p\) be a prime and let \(G\) be a finite group with \(p\mid|G|\).
Let \(B_{0}\) denote the principal \(p\)-block of \(G\). Then_ \[k(B_{0})^{2}\geqslant 4(p-1).\] _In particular, \(p\leqslant\frac{1}{4}k(B_{0})^{2}+1\leqslant\frac{1}{2}k(B_{0})^{2}\), with the last inequality strict for \(k(B_{0})>2\)._ Next, we consider the case of cyclic Sylow subgroups. **Lemma 2.4**.: _Let \(p\) be a prime and let \(G\) be a finite group with \(p\mid|G|\). Assume that a Sylow \(p\)-subgroup \(P\in\operatorname{Syl}_{p}(G)\) is cyclic, and let \(B_{0}\) denote the principal \(p\)-block of \(G\). Then_ \[|P|<k(B_{0})^{2}.\] Proof.: In this case, by Dade's theory of blocks with cyclic defect group [1, Theorem 5.1.2], we have \(k(B_{0})=e+\frac{|P|-1}{e}\), where \(e=l(B_{0})\) is the number of irreducible \(p\)-Brauer characters in \(B_{0}\). Since \(1\leqslant l(B_{0})<k(B_{0})\) (see [16, Theorem 15.29]), this yields \[|P|=k(B_{0})e-e^{2}+1\leqslant k(B_{0})e<k(B_{0})^{2},\] as claimed. ### Notation and Additional Preliminaries Let \(q\) be a power of a prime. By a group of Lie type, we will mean a finite group obtained as the group \(\mathbf{G}^{F}\) of fixed points of a connected reductive algebraic group \(\mathbf{G}\) over \(\bar{\mathbb{F}}_{q}\) under a Steinberg morphism \(F\colon\mathbf{G}\to\mathbf{G}\) endowing \(\mathbf{G}\) with an \(\mathbb{F}_{q}\)-structure. In our situation of finite simple groups, we will often take \(\mathbf{G}\) to further be simple and simply connected, so that \(\mathbf{G}^{F}\) is, with some exceptions dealt with separately, the full Schur covering group of a simple group \(S=\mathbf{G}^{F}/\mathbf{Z}(\mathbf{G}^{F})\). Writing \(G=\mathbf{G}^{F}\), we let \(G^{*}\) denote the group \((\mathbf{G}^{*})^{F}\), where the pair \((\mathbf{G}^{*},F)\) is dual to \((\mathbf{G},F)\), with respect to some maximally split torus \(\mathbf{T}\) of \(\mathbf{G}\). Given a semisimple element \(s\in G^{*}\) (that is, an element of order relatively prime to \(q\)), we obtain a rational Lusztig series \(\mathcal{E}(G,s)\) of irreducible characters of \(G\) associated to the \(G^{*}\)-conjugacy class of \(s\). When \(s=1\), the set \(\mathcal{E}(G,1)\) consists of the so-called unipotent characters. Each series \(\mathcal{E}(G,s)\) contains so-called semisimple characters, and if \(\mathbf{C_{G^{*}}}(s)\) is connected, there is a unique semisimple character, which we will denote by \(\chi_{s}\). The following lemma will help us obtain many semisimple characters in the principal block. Here, we write \(\mathbf{Z}(G)=\mathbf{Z}(G)_{p}\times\mathbf{Z}(G)_{p^{\prime}}\), where \(\mathbf{Z}(G)_{p}\in\operatorname{Syl}_{p}(\mathbf{Z}(G))\). **Lemma 2.5**.: _Let \(p\) be a prime and let \(G:=\mathbf{G}^{F}\) be a group of Lie type defined over \(\mathbb{F}_{q}\) with \(p\nmid q\) and such that \(\mathbf{Z}(\mathbf{G})\) is connected or such that \(p\) is good for \(\mathbf{G}\) and \(\mathbf{C_{G^{*}}}(s)\) is connected. Let \(s\in G^{*}\) be a semisimple element with order a power of \(p\). Then the corresponding semisimple character \(\chi_{s}\in\operatorname{Irr}(G)\) lies in the principal \(p\)-block \(B_{0}(G)\) of \(G\) and is trivial on \(\mathbf{Z}(G)_{p^{\prime}}\)._ Proof.: The first statement, also noted in [10, Theorem 5.1], is due to Hiss [10, Corollary 3.4], and the second follows from [1, 11.1(d)]. Throughout, for \(q\) an integer and \(p\) a prime not dividing \(q\), we let \(d_{p}(q)\) denote the order of \(q\) modulo \(p\) if \(p\) is odd, and \(d_{2}(q)\) denotes the order of \(q\) modulo \(4\).
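As a purely numerical illustration of the two lemmas above (ours; it is not used anywhere in the arguments), the following Python snippet evaluates Dade's formula \(k(B_{0})=e+\frac{|P|-1}{e}\) for a cyclic defect group of order \(p^{n}\) and inertial index \(e\mid p-1\), and confirms the inequalities of Lemmas 2.3 and 2.4 over an arbitrary sample of small parameters.

```python
def dade_k(p_power, e):
    """k(B_0) for a principal block with cyclic defect group of order p_power and
    inertial index e, via Dade's formula k = e + (p_power - 1)/e."""
    assert (p_power - 1) % e == 0
    return e + (p_power - 1) // e

checked = 0
for p in (3, 5, 7, 11, 13):
    for n in (1, 2, 3):
        for e in (d for d in range(1, p) if (p - 1) % d == 0):
            k = dade_k(p ** n, e)
            assert p ** n < k * k          # Lemma 2.4:  |P| < k(B_0)^2
            assert k * k >= 4 * (p - 1)    # consistent with Lemma 2.3
            checked += 1
print(f"verified both inequalities in {checked} cyclic-defect examples")
```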
For the remainder of Section 2, we will let \(G=\mathbf{G}^{F}\) for \(\mathbf{G}\) a simple, simply connected reductive group and \(F\colon\mathbf{G}\to\mathbf{G}\) a Steinberg endomorphism such that \(G/\mathbf{Z}(G)\) is a simple group of Lie type. Further, we will address the case that \(S\) is a simple group with an exceptional Schur multiplier (see [10, Table 6.1.3] for the list of such \(S\)), sporadic, or alternating separately in the proof of Theorem 2.1 below, and hence until then, we assume further that \(\mathbf{Z}(G)\) is a nonexceptional Schur multiplier for the simple group of Lie type \(S:=G/\mathbf{Z}(G)\). Let \(\widehat{S}\) denote the group of inner-diagonal automorphisms of \(S\). ### Exceptional Groups We first consider the exceptional groups, by which we mean the groups \(S=\operatorname{G}_{2}(q)\), \({}^{2}\!\operatorname{B}_{2}(q^{2})\), \({}^{2}\!\operatorname{G}_{2}(q^{2})\), \(\operatorname{F}_{4}(q)\), \({}^{2}\!\operatorname{F}_{4}(q^{2})\), \({}^{3}\!\operatorname{D}_{4}(q)\), \(\operatorname{E}_{6}(q)\), \({}^{2}\!\operatorname{E}_{6}(q)\), \(\operatorname{E}_{7}(q)\), and \(\operatorname{E}_{8}(q)\), when \(p\) is a prime not dividing \(q\). Let \(P\in\operatorname{Syl}_{p}(S)\). Then either \(P\) may be identified with a Sylow \(p\)-subgroup of \(G\) or \((p,\mathbf{G})\in\{(3,\operatorname{E}_{6}),(2,\operatorname{E}_{7})\}\) and \(|P|=|\hat{P}|/p\) with \(\hat{P}\) a Sylow \(p\)-subgroup of \(G\). If \(G\) is not of Suzuki or Ree type (i.e. \(G\) is not one of \({}^{2}\!\operatorname{B}_{2}(q^{2})\), \({}^{2}\!\operatorname{G}_{2}(q^{2})\), or \({}^{2}\!\operatorname{F}_{4}(q^{2})\)), let \(e:=d_{p}(q)\) and let \(\Phi_{e}:=\Phi_{e}(q)\) denote the \(e\)th cyclotomic polynomial in \(q\). If \(G\) is a Suzuki or Ree group, instead let \(\Phi_{e}:=\Phi^{(p)}\) as in [16, Section 8]. In either case, let \(p^{b}\) be the highest power of \(p\) dividing \(\Phi_{e}\) and let \(m_{e}\) denote the largest positive integer such that \(\Phi_{e}^{\,\,m_{e}}\) divides the order polynomial of \((\mathbf{G},F)\). From [10, Theorem 4.10.2], we see that \(\hat{P}\) contains a normal abelian subgroup \(P_{T}\lhd\hat{P}\) such that \(\hat{P}/P_{T}\) is isomorphic to a subgroup of the Weyl group \(W=\mathbf{N_{G}}(\mathbf{T})/\mathbf{T}\). We also have \(\hat{P}=P_{T}\) if and only if \(\hat{P}\) is abelian (see [16, Proposition 2.2]). Similarly, a Sylow \(p\)-subgroup of the dual group \(G^{*}\) contains a group isomorphic to \(P_{T}\). **Proposition 2.6**.: _Let \(p=2\) and let \(S\) be an exceptional group of Lie type as above with \(2\nmid q\). Let \(P\in\operatorname{Syl}_{2}(S)\) and write \(k_{0}:=k_{\operatorname{Aut}(S)}(B_{0}(S))\). Then \(|P|\leqslant 2^{14+8k_{0}}\). (In particular, Theorem 2.1(b) holds in this case.)_ Proof.: First consider the case \(S={}^{2}\!\operatorname{G}_{2}(q^{2})\) with \(q^{2}=3^{2n+1}>3\). Then we have \(|P|=8\), so the statement is clear. Hence we assume that \(S\) is not of Suzuki or Ree type. Let \(H=G^{*}=(\mathbf{G^{*}})^{F}\) and notice that \(S=[H,H]\) and \(\mathbf{Z}(\mathbf{G^{*}})\) is connected. Then notice that the semisimple characters \(\chi_{s}\in\operatorname{Irr}(H)\) of \(H\) for \(s\in H^{*}=G\) of \(2\)-power order lie in \(B_{0}(H)\) by Lemma 2.5. Let \(2^{b+1}\mid\mid(q^{2}-1)\) and note that \(G\) contains an element of order \(2^{b}\).
For \(1\leqslant i\leqslant b\), let \(s_{i}\in G\) be of order \(2^{i}\), so that the semisimple characters \(\chi_{s_{i}}\) of \(H\) for \(1\leqslant i\leqslant b\) lie in \(B_{0}(H)\). Further, since the \(|s_{i}|\) are distinct, these lie in distinct \(\operatorname{Aut}(S)\)-conjugacy classes, using [13, Corollary 2.5]. Then choosing an irreducible constituent \(\chi_{s_{i}}^{\prime}\) on \(S\) for each \(i\), we obtain \(b\) characters in \(B_{0}(S)\) in distinct \(\operatorname{Aut}(S)\)-classes. Considering in addition the trivial character, we obtain \(k_{0}\geqslant b+1\). On the other hand, letting \(r\) be the rank of \(\mathbf{G}\), we have \(r\leqslant 8\) and \(|P_{T}|\leqslant(2^{b+1})^{r}\leqslant(2^{b+1})^{8}\) by the description of \(P_{T}\) in [11, Theorem 4.10.2]. Further, \(|\hat{P}/P_{T}|\leqslant|W|_{2}\leqslant 2^{14}\). Hence \(|P|\leqslant|\hat{P}|\leqslant 2^{14}\cdot 2^{8k_{0}}\), as stated. Recalling that \(k\geqslant 7\) (see Remark 2.2), in the situation of Theorem 2.1(b) we have \(|P|\leqslant 7^{5}\cdot 7^{3k}\leqslant k^{5+3k}\). Now, when \(p\) is odd, a similar argument can be used. However, we aim for a better bound. In this case, [11, Theorem 4.10.2] further tells us that \(P_{T}\) has a complement \(P_{W}\) in \(\hat{P}\) and we have \(P_{T}\cong C_{p^{b}}^{m_{e}}\) unless \((p,G)=(3,{}^{3}\mathrm{D}_{4}(q))\), in which case \(P_{T}\cong C_{3^{a}}\times C_{3^{a+1}}\). In the following, let \(W(\mathrm{E}_{8})\) denote the Weyl group \(W\) obtained in the case that \(\mathbf{G}=\mathrm{E}_{8}\). **Proposition 2.7**.: _Let \(S\) be an exceptional group of Lie type as above, and let \(P\in\operatorname{Syl}_{p}(S)\) with \(p\) an odd prime not dividing \(q\). Let \(k_{0}:=k_{\operatorname{Aut}(S)}(B_{0}(S))\). Then if \(P\) is cyclic, we have \(|P|<p^{k_{0}}\). Otherwise, we have_ \[|P|\leqslant C_{ex}\cdot k_{0}^{2}\] _for some constant \(C_{ex}\leqslant 36|W(\mathrm{E}_{8})|^{2}\). In particular, when \(S\leqslant A\leqslant\operatorname{Aut}(S)\) with \(k(B_{0}(A))\geqslant 5\), this yields \(|P|\leqslant k(B_{0}(A))^{k(B_{0}(A))^{2}}\) in either case._ It should be noted, however, that in the last statement, \(P\in\operatorname{Syl}_{p}(S)\), rather than \(\operatorname{Syl}_{p}(A)\). Proof.: Keep the notation above. Suppose first that a Sylow \(p\)-subgroup of \(G\) is abelian. If \(P\) is cyclic, then \(P=\hat{P}=P_{T}=C_{p^{b}}\). Here we may argue similarly to Proposition 2.6 to obtain \(b<k_{0}\), and hence \(|P|<p^{k_{0}}\). Lemma 2.4 further yields the last statement in this case. Hence, we may assume that \(P\) is not cyclic, so that \(m_{e}\geqslant 2\). By the discussion preceding [12, Theorem 5.4], we have \[k_{0}\geqslant\frac{p^{bm_{e}}}{gdp^{b}|W_{e}|}, \tag{1}\] where \(g\) is the size of the subgroup of \(\operatorname{Out}(S)\) of graph automorphisms, \(d:=[\widetilde{S}:S]\) is the size of the group of diagonal automorphisms, and \(|W_{e}|\) is the order of the so-called relative Weyl group for a Sylow \(\Phi_{e}\)-torus of \(G\). Since \(d\leqslant 3\), \(g\leqslant 2\), and \(|W_{e}|\) is bounded by the size of the largest Weyl group for the types under consideration, \(|W(\mathrm{E}_{8})|\), we have \[k_{0}\geqslant\frac{p^{bm_{e}}}{6|W(E_{8})|p^{b}}.\] Notice that \(p^{b(m_{e}-1)}\geqslant p^{bm_{e}/2}\) for \(m_{e}\geqslant 2\). Then we have \(\sqrt{|P|}\leqslant\sqrt{|\hat{P}|}\leqslant 6|W(E_{8})|k_{0}\), and hence the statement holds. We now assume that \(\hat{P}\) is nonabelian.
By considering only the semisimple characters of \(G\) corresponding to elements of \(G^{*}\) found in a copy of \(P_{T}\), the exact same arguments as in [11, Section 5] yield that the bound (1) still holds in this case. By considering the degree polynomials, we see that in each case, we have \(\sqrt{|P|}\leq p^{b(m_{e}-1)}\) again, except possibly if \(G=\mathrm{G}_{2}(q)\), \(p=3\), \(m_{e}=2\), and \(|P|=p^{2b+1}\). Then \(\sqrt{|P|}=\sqrt{3}\cdot 3^{b}=\sqrt{3}\cdot p^{b(m_{e}-1)}\leqslant\sqrt{3}|W_{e}|k_ {0}\), where the last inequality is because \(d=1=g\) in this case. In all cases, then, we see that the statement holds. ### Classical Groups We now turn to the case of classical groups. In this section, let \(G=\mathbf{G}^{F}\) be a group of Lie type defined over \(\mathbb{F}_{q}\), where \(q\) is a power of a prime \(q_{0}\) and \(\mathbf{G}\) is a simple, simply connected reductive group of type \(\mathrm{A}_{n-1}\) with \(n\geqslant 2\), \(\mathrm{C}_{n}\) with \(n\geqslant 2\), \(\mathrm{B}_{n}\) with \(n\geqslant 3\), or type \(\mathrm{D}_{n}\) with \(n\geqslant 4\) but \(G\neq{}^{3}\mathrm{D}_{4}(q)\), and such that \(G\) is a nonexceptional Schur covering group for the simple group \(S:=G/\mathbf{Z}(G)\). That is, \(G=\mathrm{SL}_{n}^{\epsilon}(q)\), \(\mathrm{Sp}_{2n}(q)\), \(\mathrm{Spin}_{2n+1}(q)\), or \(\mathrm{Spin}_{2n}^{\pm}(q)\), and \(S=\mathrm{PSL}_{n}^{\epsilon}(q)\), \(\mathrm{PSp}_{2n}(q)\), \(\mathrm{P}\Omega_{2n+1}(q)\), or \(\mathrm{P}\Omega_{2n}^{\pm}(q)\), respectively, for the corresponding values of \(n\). Let \(H\) be the related groups \(H:=\mathrm{GL}_{n}^{\epsilon}(q),\mathrm{Sp}_{2n}(q),\mathrm{SO}_{2n+1}(q)\), respectively \(\mathrm{SO}_{2n}^{\pm}(q)\). We remark that, taking \(\Omega:=\mathrm{O}^{\prime\!\!\!0}_{0}(H)\), we have \(\Omega\) is perfect and \(S=\Omega/\mathbf{Z}(\Omega)=G/\mathbf{Z}(G)\). We also have \(\mathbf{Z}(\Omega)\leqslant\mathbf{Z}(H)\) and further \(H/\Omega\) and \(\mathbf{Z}(H)\) are both \(2\)-groups if \(H\neq\mathrm{GL}_{n}^{\epsilon}(q)\). Note that the dual group of \(H\) is \(H^{*}=\mathrm{GL}_{n}^{\epsilon}(q)\), \(\mathrm{SO}_{2n+1}(q)\), \(\mathrm{Sp}_{2n}(q)\), and \(\mathrm{SO}_{2n}^{\pm}(q)\), respectively. Let \(p\neq q_{0}\) be a prime and write \(\widetilde{P}\) for a Sylow \(p\)-subgroup of \(H^{*}\). We remark that if \(X\in\{G,S\}\), then \(|P|\leqslant|\widetilde{P}|\) for \(P\in\mathrm{Syl}_{p}(X)\). #### 2.3.1. Sylow \(p\)-Subgroups of Symmetric Groups Since the Sylow \(p\)-subgroups of classical groups are closely related to those of symmetric groups, we begin with a discussion of the latter. Let \(w\) be a positive integer with \(p\)-adic expansion \[w=a_{0}+a_{1}p+a_{2}p^{2}+\cdots+a_{t}p^{t}, \tag{2}\] where \(0\leqslant a_{i}<p\) for \(0\leqslant i\leqslant t-1\) and \(0<a_{t}<p\). Let \(Q\in\mathrm{Syl}_{p}(\mathfrak{S}_{w})\). We have \(Q=\prod_{i=0}^{t}Q_{i}^{a_{i}}\),, where \(Q_{i}\) is a Sylow \(p\)-subgroup of the symmetric group \(\mathfrak{S}_{p^{i}}\). Moreover \(|Q_{i}|=p^{p^{i-1}+p^{i-2}+\cdots+p+1}\leqslant p^{p^{i}}\) for each \(1\leqslant i\leqslant t\). Then with this, we see \[|Q|=(w!)_{p}\leqslant p^{w}. \tag{3}\] #### 2.3.2. Unipotent Characters Recall that \(\mathcal{E}(G,1)\) is the set of unipotent characters of \(G\). Since unipotent characters are trivial on \(\mathbf{Z}(G)\), we may say that \(\chi\in\mathrm{Irr}(S)\) is a unipotent character of \(S\) if it is the deflation of some unipotent character of \(G\). 
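Returning for a moment to 2.3.1: the order formula for \(Q_{i}\) and the bound (3) are easy to check mechanically. The short Python snippet below (ours, purely illustrative) computes the \(p\)-part of \(w!\) via Legendre's formula and verifies both statements for a sample of small parameters.

```python
def vp_factorial(w, p):
    """p-adic valuation of w!, so that (w!)_p = p ** vp_factorial(w, p) (Legendre's formula)."""
    v, q = 0, p
    while q <= w:
        v += w // q
        q *= p
    return v

for p in (2, 3, 5, 7):
    # the bound (3):  (w!)_p <= p^w
    assert all(vp_factorial(w, p) <= w for w in range(1, 200))
    # |Q_i| = p^{p^{i-1} + ... + p + 1} = p^{(p^i - 1)/(p - 1)}
    assert all(vp_factorial(p ** i, p) == (p ** i - 1) // (p - 1) for i in range(1, 6))
print("both statements check out for the sampled parameters")
```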
The following observation will be useful in the cases of defining characteristic and when \(p=2\). **Lemma 2.8**.: _Let \(S\) be one of the groups \(S=\mathrm{PSL}_{n}^{\epsilon}(q)\) with \(n\geqslant 2\), \(\mathrm{PSp}_{2n}(q)\) with \(n\geqslant 2\), \(\mathrm{P}\Omega_{2n+1}(q)\) with \(n\geqslant 3\), \(\mathrm{P}\Omega_{2n}^{+}(q)\) with \(n\geqslant 4\), or \(\mathrm{P}\Omega_{2n}^{-}(q)\) with \(n\geqslant 4\). Then there are at least \(n\) non-\(\mathrm{Aut}(S)\)-conjugate unipotent characters of \(S\)._ Proof.: The unipotent characters of \(G\) are described in [13, Section 13.8]. From this, we have the number of unipotent characters in the case \(\mathrm{PSL}_{n}^{\epsilon}(q)\) is the number of partitions \(\pi(n)\) of \(n\). In the remaining cases, the unipotent characters of \(G\) lying in the principal series are in bijection with the characters of the Weyl group \(W(\mathrm{C}_{n})\), \(W(\mathrm{B}_{n})\), \(W(\mathrm{D}_{n})\), or \(W(\mathrm{B}_{n-1})\), respectively, each of which contains a quotient group isomorphic to a symmetric group \(\mathfrak{S}_{n}\) (resp. \(\mathfrak{S}_{n-1}\) in the case of \(W(B_{n-1})\)). In each of these cases, there are also non-principal series unipotent characters. Then the number of unipotent characters is more than \(\pi(n)\) (resp. \(\pi(n-1)\)) in these cases. Note that \(\pi(n)\geq n\), with strict inequality for \(n\geq 4\), and that further \(\pi(n)\geq 2n\) for \(n\geq 7\). With the exception of \(\operatorname{PSp}_{4}(q)\) with \(q\) even and \(\operatorname{P}\Omega_{2n}^{+}(q)\), all unipotent characters of the groups under consideration are \(\operatorname{Aut}(S)\)-invariant (see [12, Theorem 2.5]), and we see that there are at least \(n\) such characters in each case. For \(\operatorname{PSp}_{4}(q)\) with \(q\) even, there are six unipotent characters, with two of them interchanged by the exceptional graph automorphism. For \(\operatorname{P}\Omega_{2n}^{+}(q)\) with \(n\geq 5\), we see there are at least \(2n\) unipotent characters (explicitly for \(n=5,6\), and since \(\pi(n)\geq 2n\) for \(n\geq 7\)), and the \(\operatorname{Aut}(S)\)-orbits have size at most \(2\). The group \(\operatorname{P}\Omega_{8}^{+}(q)\) has \(14\) unipotent characters and the \(\operatorname{Aut}(S)\)-orbits have size at most \(3\). In all cases, then, we see that there are at least \(n\)\(\operatorname{Aut}(S)\)-orbits of unipotent characters. #### 2.3.3. Bounds in the Case of Classical Groups for Defining Characteristic or \(p=2\) **Corollary 2.9**.: _Let \(S\) be one of the groups as in Lemma 2.8. Assume that \(p\mid q\) or that \(p=2\) and \(q\) is odd. Then \(k_{\operatorname{Aut}(S)}(B_{0}(S))\geq n\)._ Proof.: In defining characteristic, we have \(\operatorname{Irr}(B_{0}(S))=\operatorname{Irr}(S)\backslash\{\operatorname{ St}_{S}\}\), where \(\operatorname{St}_{S}\) is the Steinberg character (see [11, Theorem 6.18]). In the case \(p=2\) and \(q\) is odd, we have \(B_{0}(S)\) is the unique block containing unipotent characters, by [11, Theorem 21.14]. Then the statement follows from Lemma 2.8 and the fact that \(\operatorname{Irr}(S)\) contains non-unipotent characters. **Lemma 2.10**.: _Let \(S\) be as in Lemma 2.8 with \(q\) odd, and let \(2^{b+1}\) be the largest power of \(2\) dividing \(q^{2}-1\). Let \(B_{0}(S)\) be the principal \(2\)-block of \(S\). Then \(b+1\leq k_{\operatorname{Aut}(S)}(B_{0}(S))\)._ Proof.: As before, let \(S=G/\mathbf{Z}(G)\) with \(G=\mathbf{G}^{F}\) of simply connected type. 
Recall that \(B_{0}(G)\) is the unique unipotent block of \(G\) by [11, Theorem 21.14], and hence \(B_{0}(G)\) is exactly the union of rational Lusztig series \(\mathcal{E}(G,s)\) with \(s\in G^{*}\) having order a power of \(2\). From the structure of the Sylow \(2\)-subgroup of \(G^{*}\) described in [10, Theorem 4.10.2] (see also [11]), we can see that the group \(\mathbf{O}^{q_{0}^{\prime}}(G^{*})\) contains an element of order \(2^{b-1}\). For \(1\leq i\leq b-1\), let \(s_{i}\in\mathbf{O}^{q_{0}^{\prime}}(G^{*})\) be of order \(2^{i}\). Then the semisimple characters \(\chi_{s_{i}}\) for \(1\leq i\leq b-1\) lie in \(B_{0}(G)\) and are trivial on \(\mathbf{Z}(G)\) by the dual version of [12, Proposition 11.4.12 and Remark 11.4.14]. Further, these lie in distinct \(\operatorname{Aut}(S)\)-conjugacy classes, using [13, Corollary 2.5]. Combining with Lemma 2.8, we see \(k_{\operatorname{Aut}(S)}(B_{0}(S))\geq b-1+n\geq b+1\). #### 2.3.4. Bounds in the Case of Classical Groups for Nondefining Characteristic with \(p\) Odd Now let \(p\) be an odd prime not dividing \(q\) and let \(d:=d_{p}(q)\). If \(G=\operatorname{SL}_{n}^{\epsilon}(q)\), let \(e:=d_{p}(\epsilon q)\), and otherwise let \(e:=d_{p}(q^{2})\). Further, let \(b\geq 1\) be the largest integer such that \(p^{b}\) divides \((\epsilon q)^{e}-1\), respectively \(q^{2e}-1\). We begin by discussing the Sylow \(p\)-subgroups \(\widetilde{P}\) of \(H^{*}\), which have been described by Weir [10]. First, consider the case \(H=\operatorname{GL}_{n}^{\epsilon}(q)\). Let \(n=ew+r\), where \(r,w\) are positive integers with \(0\leq r<e\) and \(w\) is written with \(p\)-adic expansion as in (2). A Sylow \(p\)-subgroup of \(H=\operatorname{GL}_{n}^{\epsilon}(q)\) is then of the form \(\widetilde{P}=\prod_{i=0}^{t}P_{i}^{a_{i}}\), where \(P_{i}\in\operatorname{Syl}_{p}(\operatorname{GL}_{ep^{i}}^{\epsilon}(q))\) is of the form
In this case, letting \(m\in\{n,n-1\}\) so that \(\widetilde{P}\) is a Sylow subgroup of \(\operatorname{SO}_{2m+1}(q)\) and now writing \(m=ew+r\) with \(w\) again written as in (2), \(\widetilde{P}\) can again be written \(\widetilde{P}\cong P_{0}^{a_{0}}\times\cdots\times P_{t}^{a_{t}}\) with each \(P_{i}\) a Sylow subgroup of \(\operatorname{GL}_{dp^{i}}(q)\). In all cases, we remark that \(p^{t}\leqslant w\leqslant p^{t+1}\) and that \(t=0\) corresponds to the case that a Sylow \(p\)-subgroup of \(H\) is abelian. Further, \(\widetilde{P}\) contains a subgroup of the form \(\bar{P}\cong C_{p^{b}}^{w}\). **Lemma 2.11**.: _With the notation above, we have \(b+1\leqslant k_{\operatorname{Aut}(S)}(B_{0}(S))\)._ Proof.: We will show that there are at least \(b\) characters in \(\operatorname{Irr}(B_{0}(S))\backslash\{1_{S}\}\) lying in distinct \(\operatorname{Aut}(S)\)-orbits. First, let \(G=\operatorname{SL}_{n}^{\epsilon}(q)\) and let \(\widetilde{G}:=H=\operatorname{GL}_{n}^{\epsilon}(q)\), and note \(\widetilde{G}^{*}\cong\widetilde{G}\). We have \(\operatorname{Aut}(S)=\widetilde{S}\rtimes\mathcal{D}\), where \(\mathcal{D}\) is an appropriate group of graph and field automorphisms and \(\widetilde{S}:=\widetilde{G}/\mathbf{Z}(\widetilde{G})\). Recall that a Sylow \(p\)-subgroup \(\widetilde{P}\) of \(\widetilde{G}\) contains a subgroup of the form \(C_{p^{b}}^{w}\). Assume for the moment that \(e>1\), so that \(p\nmid|\mathbf{Z}(\widetilde{G})|\) and \(p\nmid[\widetilde{G}:G]\). Hence, for \(1\leqslant j\leqslant b\), we may let \(s_{j}\in\widetilde{G}^{*}\cong\widetilde{G}\) be an element of order \(p^{j}\). The corresponding semisimple character \(\chi_{s_{j}}\) of \(\widetilde{G}\) is trivial on \(\mathbf{Z}(\widetilde{G})\) and lies in \(B_{0}(\widetilde{G})\), using Lemma 2.5. Hence, each \(\chi_{s_{j}}\) can be viewed as a character of \(B_{0}(\widetilde{S})\). Further, note that since \(p\nmid|\mathbf{Z}(\widetilde{G})|\), \(s_{i}\) and \(s_{j}^{\alpha}z\) cannot be \(\widetilde{G}\)-conjugate for any \(i\neq j\) and any \(\alpha\in\mathcal{D}\) and \(z\in\mathbf{Z}(\widetilde{G})\). If instead \(e=1\), we have \(n=w\geqslant 2\). For \(1\leqslant j\leqslant b\), let \(\lambda_{j}\in C_{p^{b}}\leqslant\mathbb{F}_{q^{2}}^{\times}\) with \(|\lambda_{j}|=p^{j}\), and let \(s_{j}\) be an element of \(C_{p^{b}}^{n}\leqslant\widetilde{P}\) of the form \(\operatorname{diag}(\lambda_{j},\lambda_{j}^{-1},1,\ldots 1)\), where \(1\) appears as an eigenvalue with multiplicity \(n-2\). Then again \(\chi_{s_{j}}\in\operatorname{Irr}(B_{0}(\widetilde{G}))\) by Lemma 2.5 and is trivial on \(\mathbf{Z}(\widetilde{G})\) by the dual version of [1, Proposition 11.4.12, and Remark 11.4.14], since \(s_{j}\in[\widetilde{G},\widetilde{G}]=G\). Further, we again see that \(s_{j}\) is not \(\widetilde{G}\)-conjugate to \(s_{i}^{\alpha}z\) for any \(i\neq j\), \(\alpha\in\mathcal{D}\), and \(z\in\mathbf{Z}(\widetilde{G})\), by considering the eigenvalues. In either case, we let \(\chi_{i}\) for \(1\leqslant i\leqslant b\) be a constituent of \(\chi_{s_{i}}\) restricted to \(S\). Then \(\chi_{i}\) cannot be \(\operatorname{Aut}(S)\)-conjugate to \(\chi_{j}\) for \(i\neq j\), using [13, Corollary 2.5] along with [14, Proposition 11.4.12 and Remark 11.4.14]. Hence, we see at least \(b\) distinct \(\operatorname{Aut}(S)\)-orbits represented in \(\operatorname{Irr}(B_{0}(S))\backslash\{1_{S}\}\). Now let \(G\) be one of the remaining groups as in the beginning of the section. 
Then \(|\mathbf{Z}(G)|\) is a power of \(2\). In each case, a Sylow \(p\)-subgroup of \(G\) (or, equivalently, of \(S\)) and of \(G^{*}\) contains a subgroup of the form \(C_{p^{b}}\). Here, we may again, for each \(1\leqslant j\leqslant b\), let \(s_{j}\in G^{*}\) be a semisimple element of order \(p^{j}\). Then since each \((|s_{j}|,|\mathbf{Z}(G)|)=1\), we have by [14, Exercise 20.16] that \(\mathbf{C}_{\mathbf{G}^{*}}(s_{j})\) is connected since \(p\geqslant 3\) is good for \(\mathbf{G}\) and hence the corresponding semisimple character \(\chi_{s_{j}}\) of \(G\) lies in \(B_{0}(G)\) and is trivial on \(\mathbf{Z}(G)\) by Lemma 2.5. That is, we may again view \(\chi_{s_{j}}\) as a character in \(\operatorname{Irr}(B_{0}(S))\backslash\{1_{G}\}\). Since \(s_{i}\) cannot be \(\operatorname{Aut}(G^{*})\)-conjugate to \(s_{j}\) for any \(i\neq j\), we see \(\chi_{s_{i}}\) and \(\chi_{s_{j}}\) cannot be \(\operatorname{Aut}(S)\)-conjugate as before and we again have \(k_{\operatorname{Aut}(S)}(B_{0}(S))\geq b+1\). **Lemma 2.12**.: _With the above notation, we have at least \(w\) unipotent characters in \(B_{0}(S)\) that are not \(\operatorname{Aut}(S)\)-conjugate, and hence \(w\leq k_{\operatorname{Aut}(S)}(B_{0}(S))\). If \(t\geq 1\), this yields \(p\leq p^{t}\leq k_{\operatorname{Aut}(S)}(B_{0}(S))\)._ Proof.: We remark first that the unipotent characters of \(H\) are irreducible on restriction to \(\Omega\) and are trivial on \(\mathbf{Z}(H)\). (See, e.g. [1, Proposition 2.3.15].) In the case \(H=\operatorname{GL}_{n}^{\epsilon}(q)\), we have the number of unipotent characters in \(B_{0}(H)\) is \(k(e,w)\), by [13, Proposition (2.3)], where \(k(e,w)\) can be computed as in [12, Lemma 1]. This yields at least \(k(e,w)\geq w\) unipotent characters in \(B_{0}(S)\), which are all \(\operatorname{Aut}(S)\)-invariant (see [13, Theorem 2.5]). If \(H=\operatorname{Sp}_{2n}(q)\) or \(\operatorname{SO}_{2n+1}(q)\), we see from [13, Section 5.2] that the number of unipotent characters in \(B_{0}(H)\) is \(k(2e,w)>2w\), which again are \(\operatorname{Aut}(S)\)-invariant by [13, Theorem 2.5] unless \(H=\operatorname{Sp}_{4}(q)\) with \(q\) even. In the latter case, the unipotent characters are at worst permuted in pairs by \(\operatorname{Aut}(S)\), and hence again there are at least \(w\) non-\(\operatorname{Aut}(S)\)-conjugate such characters. If \(H=\operatorname{SO}_{2n}^{\pm}(q)\), then \(B_{0}(H)\) contains either at least \(k(2e,w)\) unipotent characters or at least \((k(2e,w)+3k(e,w/2))/2\) when \(w\) is even, using [13, Section 5.3 and Lemma 5.6]. One can see that these numbers are again at least \(2w\), and by [13, Theorem 2.5], again the unipotent characters are at worst permuted in pairs by \(\operatorname{Aut}(S)\) unless \(H=\operatorname{SO}_{8}^{+}(q)\). In the latter case, [11, Lemma 3.10] gives the claim. ### The Proof of Theorem 2.1 The following will be useful in the proof of Theorem 2.1, as well as in the proof of Theorem 1 below. Here for a group \(G\), we write \(k_{p}(G)\) to denote the number of conjugacy classes of \(p\)-elements of \(G\). **Lemma 2.13**.: _Let \(G\) be a finite group. Then \(k_{p}(G)\leq k(B_{0}(G))\). In particular, the number of chief factors of \(G\) of order divisible by \(p\) is at most \(k(B_{0}(G))\)._ Proof.: Let \(\{x_{1},\ldots,x_{t}\}\) be a set of representatives of the non-central conjugacy classes of \(p\)-elements of \(G\). By [14, Theorem 4.14], \(B_{0}(\mathbf{C}_{G}(x_{i}))^{G}\) is defined for every \(i=1,\ldots,t\). 
By [14, Theorem 5.12] and Brauer's third main theorem ([14, Theorem 6.7]), we have that \[k(B_{0}(G))=l(B_{0}(G))|\mathbf{Z}(G)|_{p}+\sum_{i=1}^{t}l(B_{0}(\mathbf{C}_{G}(x_{i})))\geq|\mathbf{Z}(G)|_{p}+t=k_{p}(G),\] as wanted.

Finally, we can prove Theorem 2.1.

Proof of Theorem 2.1.: Recall from Remark 2.2 that we may assume that \(k:=k(B_{0}(A))\geq 7\). If \(S\) is a sporadic group, the Tits group, a group of Lie type with exceptional Schur multiplier, or an alternating group \(\mathfrak{A}_{n}\) with \(n\leq 7\), then the result is readily checked using GAP and its Character Table Library [GAP]. We therefore assume that \(S\) is not one of these groups. Throughout, let \(P\in\operatorname{Syl}_{p}(A)\) and \(P_{0}\in\operatorname{Syl}_{p}(S)\) such that \(P_{0}=P\cap S\).

(I) If \(S\) is an alternating group with \(n\geq 8\), then \(A\in\{\mathfrak{A}_{n},\mathfrak{S}_{n}\}\). Note that \(2k\geq k(B_{0}(\mathfrak{S}_{n}))\). Let \(n=pw+r\) with \(0\leq r<p\). Then we have \(|P|\leq(2,p)\cdot|P_{0}|=(n!)_{p}=((pw)!)_{p}\leq p^{pw}\) by (3). Further, by [14, Theorem 1.10] and [15, Lemma 1 and p. 44], we have \[k(B_{0}(\mathfrak{S}_{n}))=k(B_{0}(\mathfrak{S}_{pw}))=k(p,w)>\pi(w)p\geq wp,\] where \(\pi(w)\) denotes the number of partitions of \(w\). Then \[|P_{0}|\leq|P|\leq p^{pw}<p^{2k}\leq\left(\frac{k^{4}}{4}\right)^{k}\] when combined with Lemma 2.3, yielding a bound stronger than (a)-(c) in this case.

From now on, we assume \(S\) is a simple group of Lie type. Let \(S=G/\mathbf{Z}(G)\), where \(G=\mathbf{G}^{F}\) for a simple, simply connected reductive group \(\mathbf{G}\) and a Steinberg endomorphism \(F\colon\mathbf{G}\to\mathbf{G}\), so that \(G\) is the full, nonexceptional Schur covering group of \(S\). Write \(k_{0}:=k_{\operatorname{Aut}(S)}(B_{0}(S))\) so that \(k_{0}\leq k\).

(II) We will first show (b). First, assume \(S\) is defined in characteristic \(p\), so that \(|P_{0}|=q^{|\Phi^{+}|}\), where \(\Phi^{+}\) is the set of positive roots of \(\mathbf{G}\) (see [13, Proposition 24.3]). We have \(\operatorname{Irr}(B_{0}(S))=\operatorname{Irr}(S)\backslash\{\operatorname{St}_{S}\}\), where \(\operatorname{St}_{S}\) is the Steinberg character (see [1, Theorem 6.18]), and \(k_{0}\geq\frac{q^{r}}{|\mathbf{Z}(G)|\cdot|\operatorname{Out}(S)|}\), as in [11, Section 2D]. Let \(f\) be the integer (or half-integer, in the case of Suzuki and Ree groups \({}^{2}\mathrm{G}_{2}(q^{2})\), \({}^{2}\mathrm{F}_{4}(q^{2})\), \({}^{2}\mathrm{B}_{2}(q^{2})\)) such that \(q=p^{f}\), and note that \(\sqrt{q^{r}}=p^{rf/2}\leq p^{rf}/f=q^{r}/f\), unless \(q=8\) and \(r=1\). In the latter case, \(S=\operatorname{PSL}_{2}(8)\) and \(|P_{0}|=8\), so the statement holds. So, we assume \((q,r)\neq(8,1)\). Here, we include the full argument for the groups \(\operatorname{PSL}_{n}^{\epsilon}(q)\) (\(n\geq 2\)), which correspond to \(\mathbf{G}\) of type \(\mathrm{A}_{n-1}\). Table 1 gives relevant values for various groups of Lie type, and from this information, the arguments in the other cases are similar. So, let \(S=\operatorname{PSL}_{n}^{\epsilon}(q)\). Then \(|P_{0}|=q^{n(n-1)/2}\); \(r=n-1\); \(|\operatorname{Out}(S)|\leq 2f\cdot(n,q-\epsilon)\leq 2fn\); and \(|\mathbf{Z}(G)|=(n,q-\epsilon)\leq n\). By Corollary 2.9, we have \(k_{0}\geq n\).
Together, this gives \[k_{0}\geq\frac{q^{n-1}}{2n^{2}f}\geq\frac{q^{(n-1)/2}}{2n^{2}}\geq\frac{q^{(n -1)/2}}{2k_{0}^{2}}.\] Then \(q^{(n-1)/2}\leq 2k_{0}^{3}\leq k_{0}^{4}\), so \(|P_{0}|=q^{n(n-1)/2}\leq k_{0}^{4n}\leq k_{0}^{4k_{0}}\leq k_{0}^{2k_{0}^{2}}\). Finally, we may assume \(S\) is a group of Lie type defined in characteristic different than \(p\). If \(S\) is of exceptional type, then Propositions 2.6 and 2.7 yield (b). Hence, we may assume \(S\) is of classical type, and we let \(H,\widetilde{P}\), and \(\bar{P}\) be as in Section 2.3. Recall that we have \(|P_{0}|\leq|\widetilde{P}|\). If \(p=2\), we further have \(|\widetilde{P}|\leq|\operatorname{GL}_{n}(q^{2})|_{2}\leq 2^{(b+1)n}(n!)_{2}\leq 2 ^{(b+2)n}\), where \(2^{b+1}\) is the largest power of \(2\) dividing \(q^{2}-1\) and the last inequality is from (3). In particular using Lemma 2.10 and Corollary 2.9, in this case \(|P_{0}|\leq 2k_{0}^{2}+k_{0}<k_{0}^{2k_{0}^{2}}\). Now we assume \(p\) is odd. If \(\widetilde{P}\) is abelian, note that \(\widetilde{P}=\bar{P}\) in the notation before. Then \(|P_{0}|\leq p^{bw}<k^{2k_{0}^{2}}\), from Lemmas 2.3, 2.11, and 2.12, and the statement holds. We are left with the case that \(S\) is classical and \(t\geq 1\). Then by Lemmas 2.11 and 2.12, along with (3), we see that \[|P_{0}|\leq p^{bw}\cdot(w!)_{p}\leq p^{bw}\cdot p^{w}=p^{(b+1)w}\leq k_{0}^{k_ {0}^{2}},\] which completes the proof of (b). (III) We now complete the proof of (a). Let \(G\) be defined over \(\mathbb{F}_{q}\), where \(q=q_{0}^{f}\) for some prime \(q_{0}\) and integer \(f\). (In the case of Suzuki and Ree groups, we instead let \(q^{2}:=q_{0}^{f}\) with \(f\) an odd integer.) Further, write \(f:=p^{f^{\prime}}\cdot m\) with \((m,p)=1\). From part (b), recall that \(|P_{0}|\leq k^{2k^{2}}\). Note that \(|P/P_{0}|=|A/S|_{p}\) and this number is at most \(p^{f^{\prime}+1}\) unless \(S=\mathrm{D}_{n}(q)\) or \({}^{2}\mathrm{D}_{n}(q)\) with \(p=2\) and \(|A/S|_{2}\leq 2^{f^{\prime}+3}\) or \(S=\mathrm{PSL}_{n}^{\,\epsilon}(q)\) with \(n\geq 3\) and \(p\mid(n,q-\epsilon)\), in which case \(|A/S|_{p}\) divides \(2p^{b+f^{\prime}}\) with \(p^{b}\mid\mid(q-\epsilon)\). Recall that \(\mathrm{Aut}(S)=\widetilde{S}\rtimes\mathcal{D}\) with \(\mathcal{D}\) a group of field and graph automorphisms as before. A Sylow \(p\)-subgroup of \(\widetilde{S}A\cap\mathcal{D}\) contains a cyclic group of size \(p^{f^{\prime\prime}}\), where \(f^{\prime\prime}\leq f^{\prime}\) and \(|\widetilde{S}A\cap\mathcal{D}|_{p}\leq p^{f^{\prime\prime}+1}\). Then \(A\) must also contain an element of order \(p^{f^{\prime\prime}}\), and hence elements of orders \(p^{i}\) for \(1\leq i\leq f^{\prime\prime}\). Then \(k_{p}(A)\geq f^{\prime\prime}\), and hence \(k\geq f^{\prime\prime}\) by Lemma 2.13. Now, if \(S\) is not one of the exceptions mentioned above, we have \(|A|_{p}\leq|\widetilde{S}A|_{p}=|\widetilde{S}|_{p^{\prime}}\cdot|\widetilde{ S}A\cap\mathcal{D}|_{p}\leq|P_{0}|\cdot p^{f^{\prime\prime}+1}\). If \(S=\mathrm{D}_{n}(q)\) or \({}^{2}\mathrm{D}_{n}(q)\) with \(p=2\), we have \(|\widetilde{S}A|_{2}\leq|P_{0}|\cdot 2^{f^{\prime\prime}+3}\). If \(S=\mathrm{PSL}_{n}^{\,\epsilon}(q)\) with \(n\geq 3\) and \(p\mid(n,q-\epsilon)\), we have \(|\widetilde{S}A|_{p}\leq|P_{0}|\cdot p^{b+f^{\prime\prime}+1}\), where \(p^{b}\mid\mid(q-\epsilon)\). 
Then using (b) and Lemmas 2.10 and 2.11, we have in each case that \[|P|=|A|_{p}\leq k^{2k^{2}}\cdot p^{f^{\prime\prime}+k}.\] Combining the above with Lemma 2.3, we obtain \(|P|\leq k^{2k^{2}}\cdot p^{2k}<k^{2k^{2}}\cdot k^{4k}\), completing the proof.

## 3. Proof of Theorem A

In this section we complete the proof of Theorem A. We begin with some additional general observations that will be useful in the proof.

**Lemma 3.1**.: _Let \(G\) be a finite group and let \(N\trianglelefteq G\). If \(b\in\mathrm{Bl}(N)\) is covered by \(B\in\mathrm{Bl}(G)\) then \(k(b)\leq|G:N|k(B)\)._

Proof.: This is a direct consequence of [11, Theorem 9.4].

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Type of **G** & Size of \(\Phi^{+}\) & Rank \(r\) & upper bound for \(|\mathbf{Z}(G)|\) & upper bound for \(|\operatorname{Out}(S)|\) \\ \hline \hline A\({}_{n-1}\), \(n\geq 2\) & \(n(n-1)/2\) & \(n-1\) & \(n\) & \(2fn\) \\ \hline B\({}_{n}\) or C\({}_{n}\), \(n\geq 3\) & \(n^{2}\) & \(n\) & \(2\) & \(2f\) \\ \hline B\({}_{2}\) & 4 & 2 & 2 & \(2f\) \\ \hline D\({}_{n}\), \(n\geq 5\) & \(n(n-1)\) & \(n\) & 4 & \(8f\) \\ D\({}_{4}\) & 12 & 4 & 4 & \(24f\) \\ \hline F\({}_{4}\) & 24 & 4 & 1 & \(2f\) \\ \hline G\({}_{2}\) & 6 & 2 & 1 & \(2f\) \\ \hline E\({}_{6}\) & 36 & 6 & 3 & \(6f\) \\ \hline E\({}_{7}\) & 63 & 7 & 2 & \(2f\) \\ \hline E\({}_{8}\) & 120 & 8 & 1 & \(f\) \\ \hline \end{tabular} \end{table} Table 1. Relevant Data for Bounding \(|P_{0}|\) in Defining Characteristic

**Lemma 3.2**.: _Let \(G\) be a finite group and suppose that \(N=S_{1}\times\cdots\times S_{n}\) is a normal subgroup, where \(S_{i}\) is simple nonabelian and \(p\) divides \(|S_{i}|\) for all \(i\). Then \(n\leqslant k(B_{0}(G))\)._

Proof.: Let \(1\neq x_{i}\in S_{i}\) be a \(p\)-element for every \(i\). Note that \(G\) acts on \(\{S_{1},\ldots,S_{n}\}\) by conjugation. Therefore, the elements \[(x_{1},1,\ldots,1),(x_{1},x_{2},1,\ldots,1),\ldots,(x_{1},\ldots,x_{n})\] are representatives of \(n\) different conjugacy classes of \(p\)-elements of \(G\). By Lemma 2.13, \(n\leqslant k(B_{0}(G))\).

**Lemma 3.3**.: _Suppose that \(S_{1},\ldots,S_{n}\) are nonabelian simple groups of order divisible by a prime number \(p\) and let \(S_{1}\times\cdots\times S_{n}\leqslant G\leqslant\operatorname{Aut}(S_{1})\times\cdots\times\operatorname{Aut}(S_{n})\). Let \(k=k(B_{0}(G))\), where \(B_{0}(G)\) is the principal \(p\)-block of \(G\). Then_ \[|G|_{p}\leqslant k^{4k^{3}}.\]

Proof.: Write \(A_{i}=\operatorname{Aut}(S_{i})\) and \(A=A_{1}\times\cdots\times A_{n}\). Let \(\pi_{i}\) be the restriction to \(G\) of the projection from \(A\) onto \(A_{i}\) for every \(i\). Set \(K_{i}=\operatorname{Ker}\pi_{i}\). Notice that \(G/K_{i}\) is isomorphic to an almost simple group \(G_{i}\) with socle \(S_{i}\). Furthermore, the intersection of the \(K_{i}\)'s is trivial, so \(G\) embeds into the direct product of the groups \(G/K_{i}\). Furthermore, \(B_{0}(G/K_{i})\subseteq B_{0}(G)\) for every \(i\). By Theorem 2.1, we have that \[|G/K_{i}|_{p}\leqslant k^{4k^{2}}\] for every \(i\). Since \(S_{i}\) is normal in \(G\) for all \(i\), by Lemma 3.2 we have that \(n\leqslant k\), and hence \[|G|_{p}\leqslant\prod_{i=1}^{n}|G/K_{i}|_{p}\leqslant k^{4k^{3}},\] as desired.

We define the function \[f(k)=k^{k}\cdot(k!k)^{4(k!k)^{3}}.\]

**Theorem 3.4**.: _Let \(G\) be a finite group and let \(R\) be the \(p\)-solvable radical of \(G\). Then \(|G:R|_{p}\leqslant f(k)\), where \(k=k(B_{0}(G))\)._

Proof.: Without loss of generality, we may assume that \(R=1\).
Let \(F=\mathbf{F}^{*}(G)\) be the generalized Fitting subgroup, which in this case is a direct product of non-abelian simple groups of order divisible by \(p\). Write \(F=S_{1}\times\cdots\times S_{n}\). By Lemma 3.2, we obtain that \(n\leqslant k\). Since \(\mathbf{C}_{G}(F)\leqslant\mathbf{Z}(F)=1\), it follows that \(G\) embeds into \(\Gamma=\operatorname{Aut}(F)\). Note that \(A=\operatorname{Aut}(S_{1})\times\cdots\times\operatorname{Aut}(S_{n})\) is a normal subgroup of \(\Gamma\) and \(\Gamma/A\) is isomorphic to a subgroup of \(\mathsf{S}_{n}\). In particular, \[|\Gamma/A|\leqslant n!\leqslant k!.\] Put \(N=G\cap A\) and note that \(|G:N|\leqslant k!\). By the well-known Legendre's inequality, we have that \((k!)_{p}\leqslant p^{k}\), so \((k!)_{p}\leqslant k^{k}\). Write \(k^{\prime}=k(B_{0}(N))\). It follows from Lemma 3.3 that \[|G|_{p}=|G:N|_{p}|N|_{p}\leqslant k^{k}\cdot k^{\prime 4k^{\prime 3}}.\] Now, Lemma 3.1 implies that \[|G|_{p}\leqslant k^{k}\cdot(k!k)^{4(k!k)^{3}},\] as wanted. Recall that the socle \(\operatorname{Soc}(G)\) of a finite group \(G\) is the product of the minimal normal subgroups of \(G\). We can write \(\operatorname{Soc}(G)=A(G)\times T(G)\), where \(A(G)\) is the product of the abelian minimal normal subgroups of \(G\) and \(T(G)\) is the product of the non-abelian minimal normal subgroups of \(G\). Note that \(T(G)\) is a direct product of non-abelian simple groups. Finally, we set \(g(k)=2^{2^{k}}f(k)^{k}\). The following completes the proof of Theorem A. **Theorem 3.5**.: _Let \(G\) be a finite group. Let \(k=k(B_{0}(G))\). Then \(|G|_{p}\leqslant g(k)\)._ Proof.: Let \(O_{1}=\mathbf{O}_{p^{\prime}}(G)\). Set \(E_{1}/O_{1}=T(G/O_{1})\), where \(T(G/O_{1})\) is the non-abelian part of the socle of \(G/O_{1}\). For \(i>1\), we define \(O_{i}/E_{i-1}=\mathbf{O}_{p^{\prime}}(G/E_{i-1})\) and \(E_{i}/O_{i}=T(G/O_{i})\), so that we have a normal series \(1\leqslant O_{1}\leqslant E_{1}\leqslant O_{2}\leqslant E_{2}\leqslant\cdots\). Note that if \(O_{i}<E_{i}\) then \(E_{i}/O_{i}\) is a direct product of simple groups of order divisible by \(p\). By Lemma 2.13 we conclude that \(O_{k+1}=E_{k+1}=O_{k+2}=\cdots\). Set \(O=O_{k+1}\). Note that \(F/O=\mathbf{F}^{*}(G/O)\) is a \(p\)-group. Since \(\mathbf{C}_{G/O}(F/O)\leqslant F/O\), [11, Corollary V.3.11] implies that \(B_{0}(G/O)\) is the unique \(p\)-block of \(G/O\). Since \(B_{0}(G/O)\subseteq B_{0}(G)\), Landau's theorem implies that \[|G/O|\leqslant 2^{2^{k}}.\] Now, for \(i\leqslant k\), let \(C_{i}/O_{i}=\mathbf{C}_{G/O_{i}}(E_{i}/O_{i})\), so that \(G/C_{i}\) is isomorphic to a subgroup of \(\operatorname{Aut}(E_{i}/O_{i})\) that contains \(E_{i}C_{i}/C_{i}\cong E_{i}/O_{i}\). Notice that the \(p\)-solvable radical of \(G/C_{i}\) is trivial, so by Theorem 3.4 applied to \(G/C_{i}\), we have that \[|E_{i}/O_{i}|_{p}\leqslant|G/C_{i}|_{p}\leqslant f(k).\] It follows that \[|G|_{p}=|G:O|_{p}\prod_{i=1}^{k}|E_{i}/O_{i}|_{p}\leqslant 2^{2^{k}}f(k)^{k},\] as wanted. **Remark 3.6**.: Arguing in a similar way, we can see that if a finite group \(G\) does not have simple groups of Lie type in characteristic different from \(p\) as composition factors, then \(|G:\mathbf{O}_{p^{\prime}}(G)|\) can be bounded above in terms of \(k(B_{0}(G))\). We sketch the proof. First, we know that if \(p\) is a prime and \(S\) is a simple group of Lie type in characteristic \(p\) or an alternating group, then \(|S|\) is bounded from above in terms of \(|S|_{p}\). 
Therefore, the same happens for almost simple groups with socle of Lie type in characteristic \(p\) or alternating. Now, let \(R\) be the \(p\)-solvable radical of a finite group \(G\). We can argue as in the proof of Theorem 3.4 to see that \(|G:R|\) is bounded from above in terms of \(k(B_{0}(G))\). Using Lemma 3.1, we see that \(k(B_{0}(R))\) is bounded from above in terms of \(k(B_{0}(G))\). Since \(R\) is \(p\)-solvable, \(\operatorname{Irr}(B_{0}(R))=\operatorname{Irr}(R/\mathbf{O}_{p^{\prime}}(R))\). Using that \(\mathbf{O}_{p^{\prime}}(R)=\mathbf{O}_{p^{\prime}}(G)\) and Landau's theorem, we deduce that \(|R:\mathbf{O}_{p^{\prime}}(G)|\) is bounded from above in terms of \(k(B_{0}(R))\). The result follows.

**Remark 3.7**.: We have already mentioned that the case \(p=2\) of Brauer's Problem 21 was already known by [10] and [12]. However, this relies on Zelmanov's solution of the restricted Burnside problem. As discussed in [11], the bounds that are attainable in this problem are of a magnitude that is incomprehensibly large. The bound that we have obtained for principal blocks, although surely far from best possible, is much better than any bound that relies on the restricted Burnside problem.

Recently, there has been a large interest in studying relations among (principal) blocks for different primes. For instance, what can we say about the set of irreducible characters that belong to some principal block? The groups with the property that all irreducible characters belong to some principal block were determined in [1]. As a consequence of Brauer's Problem 21 for principal blocks, we see that for any integer \(k\) there are finitely many groups with at most \(k\) irreducible characters in some principal block. Note that this is a strong form of Landau's theorem. In this corollary, given a prime \(p\), we write \(B_{p}(G)\) to denote the principal \(p\)-block of \(G\).

**Corollary 3.8**.: _The order of a finite group is bounded from above in terms of \(|\bigcup_{p}\operatorname{Irr}(B_{p}(G))|\)._

Proof.: By Theorem A, we know that for any prime \(p\), \(|G|_{p}\) is bounded from above in terms of \(k(B_{p}(G))\). It follows that \(|G|_{p}\) is bounded from above in terms of \(|\bigcup_{p}\operatorname{Irr}(B_{p}(G))|\). In particular, if \(p\) is a prime divisor of \(|G|\), then \(p\) is bounded from above in terms of \(|\bigcup_{p}\operatorname{Irr}(B_{p}(G))|\). The result follows.

## 4. Blocks with three irreducible characters

In this section, we prove Theorem C. As usual, if \(B\) is a \(p\)-block of a finite group \(G\), \(l(B)\) is the number of irreducible \(p\)-Brauer characters in \(B\). By [13], we know that if \(k(B)=3\) and \(l(B)=1\), then the defect group is cyclic of order \(3\). So we are left with the case \(l(B)=2\).

**Lemma 4.1**.: _Let \(N\lhd G\) and let \(B\) be a \(p\)-block of \(G\) with defect group \(D\). Suppose that \(B\) covers a \(G\)-invariant block \(b\) of \(N\) such that \(D\) is a defect group of \(b\). If \(b\) is nilpotent then \(k(B)=k(B^{\prime})\), where \(B^{\prime}\in\operatorname{Bl}(\mathbf{N}_{G}(D))\) is the Brauer first main correspondent of \(B\)._

Proof.: Since \(D\) is a defect group of \(b\), we have that the Harris-Knörr correspondent of \(B\) (see [14, Theorem 9.28]) with respect to \(b\) is \(B^{\prime}\), the Brauer first main correspondent of \(B\).
By the work in [10] (see the explanation at the beginning of [11, Section 3], for instance) we have that \(B\) and \(B^{\prime}\) are Morita equivalent, and hence, they have the same number of irreducible characters. We write \(\operatorname{cd}(B)\) to denote the set of degrees of the irreducible (ordinary) characters in \(B\). We write \(k_{0}(B)\) to denote the number of irreducible (ordinary) characters of height zero in \(B\). The following is Theorem C. **Theorem 4.2**.: _Let \(G\) be a finite group and let \(B\) be a \(p\)-block of \(G\). Suppose that Condition B holds for \((S,p)\) for all simple non-abelian composition factors \(S\) of \(G\). Let \(D\) be a defect group of \(B\). If \(k(B)=3\), then \(|D|=3\)._ Proof.: We proceed by induction on \(|G|\). Notice that we may assume \(D>1\) is elementary abelian by Brauer theorem (see [14, Theorem 3.18], for instance) and [1, Corollary 7.2] and that \(l(B)=2\) by [13]. _Step 0. We may assume that \(p\) is odd._ Suppose that \(p=2\). By [12, Corollary 1.3(i)], if \(|D|>2\) we have that \(4\) divides \(k_{0}(B)\leq k(B)=3\), which is absurd. Hence we have that \(|D|=2\). But in this case we know that \(k(B)=2\), by [1]. This is a contradiction, so \(p\) is odd. _Step 1. We may assume \(\mathbf{O}_{p}(G)=1\)._ Let \(M=\mathbf{O}_{p}(G)\), and let \(\bar{B}\in\operatorname{Bl}(G/M)\) dominated by \(B\) with defect group \(D/M\) ([20, Theorem 9.9(b)]). Since \(M\) is a \(p\)-group, it has just one \(p\)-block, the principal one, so \(B\) covers \(B_{0}(M)\). By [20, Theorem 9.4] if \(1_{M}\neq\theta\in\operatorname{Irr}(M)\), there is \(\chi\in\operatorname{Irr}(B)\) over \(\theta\), hence \(\chi\) does not lie in \(\bar{B}\). Then \(k(\bar{B})\leq 2\). If \(k(\bar{B})=1\), then \(D=M\) and \(D\) is normal in \(G\). In this case \(|D|=3\) by [20, Theorem 4.1]. If \(k(\bar{B})=2\), then by [1], we have \(p=2\). This contradicts Step 0. _Step 2. If \(N\) is a normal subgroup of \(G\), and \(b\) is a \(p\)-block of \(N\) covered by \(B\), we may assume that \(b\) is \(G\)-invariant._ Let \(G_{b}\) be the stabilizer of \(b\) in \(G\). By the Fong-Reynolds correspondence ([20, Theorem 9.14]), if \(c\) is the block of \(G_{b}\) covering \(b\) such that \(c^{G}=B\), we have that \(k(c)=k(B)=3\) and if \(E\) is a defect group of \(c\), then \(E\) is a defect group of \(B\). If \(G_{b}<G\), by induction we are done. _Step 3. We may assume that if \(N\) is a normal subgroup of \(G\) and \(b\) is a \(p\)-block of defect zero of \(N\) covered by \(B\), then \(N\) is central and cyclic. In particular, we may assume that \(\mathbf{Z}(G)=\mathbf{O}_{p^{\prime}}(G)\) is cyclic._ Write \(b=\{\theta\}\). Since \(\theta\) is of defect zero, we have that \((G,N,\theta)\) is an ordinary-modular character triple and there exists \((G^{*},N^{*},\theta^{*})\) an isomorphic ordinary-modular character triple with \(N^{*}\) a \(p^{\prime}\)-group central in \(G^{*}\) and cyclic (see [20, Problems 8.10 and 8.13]). Notice also that since \(G^{*}/N^{*}\cong G/N\), the set of non-abelian composition factors of \(G^{*}\) is contained in the set of non-abelian composition factors of \(G\), so Condition B holds for all non-abelian composition factors of \(G^{*}\). If \[*:\operatorname{Irr}(G|\theta)\to\operatorname{Irr}(G^{*}|\theta^{*})\] is the bijection given by the isomorphism of character triples and \(B=\{\chi_{1},\chi_{2},\chi_{3}\}\), we have that \(B^{*}=\{\chi_{1}^{*},\chi_{2}^{*},\chi_{3}^{*}\}\) is a \(p\)-block of \(G^{*}\). 
Now, if \(D^{*}\) is a defect group of \(B^{*}\) and \(|D^{*}|=3\), we claim that \(|D|=3\) (notice that in this case \(p=3\), so we just need to prove that \(|D|=p\)). Indeed, let \(\chi\in\operatorname{Irr}(B)\) of height zero. Since isomorphism of character triples preserves ratios of character degrees and all the characters in \(B^{*}\) are of height zero (because \(D^{*}\) has prime order), we have \[\frac{|G:D|_{p}}{|N|_{p}}=\left(\frac{\chi(1)}{\theta(1)}\right)_{p}=\chi^{*}( 1)_{p}=|G^{*}:D^{*}|_{p}=\frac{|G^{*}:N^{*}|_{p}}{p}=\frac{|G:N|_{p}}{p}.\] Since \(b\) is \(G\)-invariant, we have that \(D\cap N\) is a defect group of \(b\) by [20, Theorem 9.26], so \(D\cap N=1\) because \(b\) has defect zero. Now, we have \[|D|=\frac{|G|_{p}}{|G:D|_{p}}=\frac{|G:N|_{p}|N|_{p}}{|G:D|_{p}}=p,\] as claimed. Hence we may assume that \(N\) is central and cyclic. In particular, by Step 1 we have that \(\mathbf{Z}(G)=\mathbf{O}_{p^{\prime}}(G)\) is cyclic. _Step 4. There is a unique \(G\)-conjugacy class of non-trivial elements in \(D\)._ Let \(b\) be the \(p\)-block of \(N\) covered by \(B\). Since \(b\) is \(G\)-invariant, we have that \(D\cap N\) is a defect group of \(b\) by [23, Theorem 9.26]. By a theorem of Brauer ([23, Theorem 5.12]) we have that \[k(B)=l(B)|\mathbf{Z}(G)|_{p}+\sum_{i=1}^{k}\sum_{\begin{subarray}{c}b\in \operatorname{Bl}(\mathbf{C}_{G}(x_{i}))\\ b^{G}=B\end{subarray}}l(b),\] where \(\{x_{1},x_{2},\ldots,x_{k}\}\) are the representatives of the non-central \(G\)-conjugacy classes of \(p\)-elements of \(G\). Since \(|\mathbf{Z}(G)|_{p}=1\) by Step 1, we have \[k(B)=l(B)+\sum_{i=1}^{k}\sum_{\begin{subarray}{c}b\in\operatorname{Bl}( \mathbf{C}_{G}(x_{i}))\\ b^{G}=B\end{subarray}}l(b).\] By [23, Theorem 4.14], if \(x_{i}\in D\), then there is \(b\in\operatorname{Bl}(\mathbf{C}_{G}(x_{i}))\) such that \(b^{G}=B\). Since \(l(B)=2\) and \(k(B)=3\), we have that there is just one \(G\)-conjugacy class of non-trivial elements in \(D\). _Step 5. If \(N\) is a non-central normal subgroup of \(G\), then \(D\leq N\). In particular, if \(b\) is the only block of \(N\) covered by \(B\), then \(D\) is a defect group of \(b\)._ Let \(b\) be the \(p\)-block of \(N\) covered by \(B\). Again, since \(b\) is \(G\)-invariant, we have that \(D\cap N\) is a defect group of \(b\) (by [23, Theorem 9.26]). Since \(D\cap N>1\) (otherwise \(b\) is of defect zero and \(N\) is central by Step 3), we have that there is an element \(1\neq x\in D\cap N\). If \(1\neq y\in D\), \(y\) is \(G\)-conjugate to \(x\) by Step 4 and thus \(y\in N\), as wanted. _Step 6. If \(N\) is a normal subgroup of \(G\), \(b\) is the block of \(N\) covered by \(B\) and all the irreducible characters in \(b\) have the same degree, then \(N\) is central._ Suppose that \(N\) is not central. By Step 5 we have that \(D\) is a defect group of \(b\). By [13, Proposition 1 and Theorem 3] we have that \(D\) is abelian and has inertial index \(1\). By [1, 1.ex.3], we know that \(b\) is nilpotent. Hence by Lemma 4.1 we have that \(k(B^{\prime})=k(B)=3\), where \(B^{\prime}\) is the Brauer first main correspondent of \(B\) in \(\mathbf{N}_{G}(D)\). If \(\mathbf{N}_{G}(D)<G\), by induction we are done. Hence we may assume that \(D\lhd G\), but this is a contradiction with Step 1. Therefore \(N\) is central. _Step 7. If \(N\) is a normal subgroup of \(G\), and \(b\) is the unique block of \(N\) covered by \(B\), then \(\operatorname{Irr}(b)\) has at most three \(G/\mathbf{C}_{G}(N)\)-orbits_. 
Suppose that there are more than three \(G/\mathbf{C}_{G}(N)\)-orbits in \(\operatorname{Irr}(b)\), and let \(\theta_{i}\in\operatorname{Irr}(b)\) be a representative for these orbits (so there are at least four of them). By [23, Theorem 9.4] we can take \(\chi_{i}\in\operatorname{Irr}(B)\) lying over \(\theta_{i}\). By Clifford's theorem, the \(\chi_{i}\) are all different. But this is a contradiction since \(k(B)=3\). _Step 8. We may assume that \(D\) is not cyclic_. Otherwise, by Dade's theory of blocks with cyclic defect [1], we have that \(k(B)=k_{0}(B)=k_{0}(B^{\prime})=k(B^{\prime})\) where \(B^{\prime}\in\operatorname{Bl}(\mathbf{N}_{G}(D)|D)\) is the Brauer correspondent of \(B\), and hence we may assume that \(D\) is normal in \(G\). In this case we are done by Step 1. _Step 9. Write \(Z=\mathbf{Z}(G)\) and \(\overline{G}=G/Z\). Then \(\overline{G}\) has a unique minimal normal subgroup \(\overline{K}=K/Z\), which is simple._ Let \(K/Z\) be a minimal normal subgroup of \(G/Z\). Since \(Z=\mathbf{O}_{p^{\prime}}(G)\), we have that \(K/Z\) is not a \(p^{\prime}\)-group. Since \(\mathbf{O}_{p}(G)=1\), \(K/Z\) is not a \(p\)-group. Hence \(K/Z\) is semisimple. Notice that \(K/Z\) is the unique minimal normal subgroup of \(G/Z\). Indeed, if \(K_{1}/Z,K_{2}/Z\) are minimal normal subgroups of \(G/Z\), then by Step 5, \(D\subseteq K_{1}\cap K_{2}=Z=\mathbf{O}_{p^{\prime}}(G)\) and hence \(D=1\), a contradiction. Write \(\overline{K}=K/Z\). Then \(\overline{K}=\overline{S_{1}}\times\cdots\times\overline{S_{t}}\), where \(\overline{S_{i}}\) is non-abelian simple and \(\overline{S_{i}}=\overline{S_{1}}^{g_{i}}\) for some \(g_{i}\in G\). Write \(\overline{S_{i}}=S_{i}/Z\) and notice that \(S_{i}=S_{1}^{g_{i}}\). Notice that since \(S_{i}/\mathbf{Z}(S_{i})=S_{i}/Z\) is simple, we have that \(S_{i}^{\prime}\) is a component of \(G\) and hence \([S_{i}^{\prime},S_{j}^{\prime}]=1\) whenever \(i\neq j\) ([18, Theorem 9.4]). Furthermore, \(S_{i}=S_{i}^{\prime}Z\), so \([S_{i},S_{j}]=1\) whenever \(i\neq j\). We want to show that \(t=1\). By Step 5 we have that \(D\) is a defect group of \(b\), the only block in \(K\) covered by \(B\). If \(D\cap S_{i}=1\) for all \(i=1,\ldots,t\), then \(D=1\), a contradiction. Hence there is \(i\) such that \(D\cap S_{i}>1\). Without loss of generality we may assume that \(D\cap S_{1}>1\). Let \(b_{1}\) be the only block of \(S_{1}\) covered by \(b\) and notice that, since \(b_{1}\) is \(K\)-invariant, \(D\cap S_{1}\) is a defect group of \(b_{1}\). We claim that \(D\nleq S_{1}\). Suppose otherwise. Notice that \(D^{g_{i}}\) is a defect group of \(b^{g_{i}}=b\) and hence \(D^{g_{i}}=D^{k}\) for some \(k\in K\). Now, \(D^{g_{i}}=D^{k}\subseteq S_{1}^{g_{i}}\cap S_{1}^{k}=S_{i}\cap S_{1}=Z\), which is a \(p^{\prime}\)-group. This is a contradiction, so \(D\nleq S_{1}\). Let \(1\neq x\in D\cap S_{1}\). If \(D\cap S_{i}=1\) for all \(i\neq 1\), we have that \(D=D\cap S_{1}\) which is a contradiction by the previous paragraph. Hence there is \(i\neq 1\) such that \(D\cap S_{i}\neq 1\). Let \(1\neq x_{i}\in D\cap S_{i}\). Now \(xx_{i},x\in D\) and by Step 4 we have that \(x\) and \(xx_{i}\) are \(G\)-conjugate, which is not possible. Hence \(t=1\), as wanted. _Step 10. Final step._ Now \(K^{\prime}\) is a quasi simple group with center a cyclic \(p^{\prime}\)-group. If \(b\) is the unique block of \(K^{\prime}\) covered by \(B\), we have that \(D\) is a defect group of \(b\) by Step 5 and hence is not cyclic elementary abelian by Step 8. 
We claim that \(b\) is faithful. Let \(X=\ker(b)\). By [20, Theorem 6.10], we have that \(X\leq Z\cap K^{\prime}\). Now, let \(\psi\in\operatorname{Irr}(b)\); then \(\psi\) lies over \(1_{X}\) and hence there is \(\chi\in\operatorname{Irr}(B)\) lying over \(1_{X}\). Now, by [20, Theorem 9.9 (c)] we have that \(k(\bar{B})=k(B)=3\), where \(\bar{B}\) is the block of \(G/X\) containing \(\chi\). If \(X>1\), by induction we obtain that \(|D|=|DX/X|=3\), and we are done. Hence we may assume that \(X=1\). By Condition B, there are at least four \(\operatorname{Aut}(K^{\prime})\)-conjugacy classes of irreducible characters in \(b\), which is a contradiction by Step 7.

## 5. On Condition B

We end the paper with a discussion on Condition B. In [21, Theorem B], a statement similar to Condition B but requiring only 3 distinct orbits is proven. Unfortunately, for groups of Lie type in non-defining characteristic, the strategy used there is not quite sufficient to obtain 4 orbits. In fact, we will see that this is not always attainable. However, here we address several situations in which we do obtain Condition B.

**Proposition 5.1**.: _Let \(p\geq 3\) be prime. Let \(K\) be a quasisimple group with \(\mathbf{Z}(K)\) a cyclic \(p^{\prime}\)-group and socle \(K/\mathbf{Z}(K)\) a simple sporadic group, the Tits group \({}^{2}\mathrm{F}_{4}(2)^{\prime}\), \(\mathrm{G}_{2}(2)^{\prime}\), \({}^{2}\mathrm{G}_{2}(3)^{\prime}=\mathrm{A}_{1}(8)\), a simple group of Lie type with exceptional Schur multiplier, or an alternating group \(\mathfrak{A}_{n}\) with \(5\leq n\leq 13\). Let \(B\) be a \(p\)-block for \(K\) with noncyclic, positive defect. Then \(|\mathrm{cd}(B)|\geq 4\), with the following exceptions when \(p=3\):_

* \(K=2.\mathfrak{A}_{7}\); \(B\) is Block 3 in GAP; \(|\mathrm{cd}(B)|=3\); and \(k_{\mathrm{Aut}(K)}(B)=4\)
* \(K=2.\mathfrak{A}_{8}\); \(B\) is Block 5 in GAP; \(|\mathrm{cd}(B)|=3\); and \(k_{\mathrm{Aut}(K)}(B)=4\)
* \(K=2.\mathfrak{A}_{11}\); \(B\) is Block 5 in GAP; \(|\mathrm{cd}(B)|=3\); and \(k_{\mathrm{Aut}(K)}(B)=4\)
* \(K=2.\mathfrak{A}_{13}\); \(B\) is Block 5 in GAP; \(|\mathrm{cd}(B)|=3\); and \(k_{\mathrm{Aut}(K)}(B)=4\)
* \(K={}^{2}\mathrm{G}_{2}(3)^{\prime}=\mathrm{A}_{1}(8)\); \(B=B_{0}(K)\); \(|\mathrm{cd}(B)|=3\), and \(k_{\mathrm{Aut}(K)}(B)=4\).

_In particular, Condition B is true for \(K\)._

Proof.: This can be seen using the GAP Character Table Library. We note that the groups with exceptional Schur multipliers are listed in [12, Table 6.1.3].

**Theorem 5.2**.: _Let \(p\geq 3\) be prime. Let \(K\) be a quasisimple group with \(K/\mathbf{Z}(K)\cong\mathfrak{A}_{n}\), an alternating group with \(n>11\). Let \(B\) be a \(p\)-block for \(K\) with noncyclic, positive defect. Then \(k_{\mathrm{Aut}(K)}(B)\geq 4\). In particular, Condition B is true if \(K\) is a covering group for \(S\cong\mathfrak{A}_{n}\) for \(n>11\)._

Proof.: The proof here is essentially the same as that of [14, Proposition 3.4]. Let \(\hat{\mathfrak{A}}_{n}\) and \(\hat{\mathfrak{S}}_{n}\) denote the double covers, respectively, of \(\mathfrak{A}_{n}\) and \(\mathfrak{S}_{n}\). Recall that \(\operatorname{Aut}(S)=\mathfrak{S}_{n}\) and \(\operatorname{Aut}(\hat{\mathfrak{A}}_{n})=\hat{\mathfrak{S}}_{n}\).
Following [16], a \(p\)-block of \(\mathfrak{S}_{n}\) has \(k(p,w)\) ordinary irreducible characters, and a \(p\)-block of \(\hat{\mathfrak{S}}_{n}\) lying over the nontrivial character of \(\mathbf{Z}(\hat{\mathfrak{S}}_{n})\) (a "spin block") has \(k^{\pm}(\bar{p},w)\) ordinary irreducible characters, where \(w\) is the so-called "weight" of the block. We remark that our assumption that a defect group is noncyclic forces \(w\geq 2\). From [16, (3.11) and Section 13], we see that these numbers are larger than \(6\) (and hence there are strictly more than \(3\)\(\mathrm{Aut}(K)\)-orbits represented in a given block \(B\) of \(K\)) if \(p\geq 3\) and \(w\geq 2\), except for the case \((p,w)=(3,2)\) and \(B\) is a spin block, in which case \(k^{\pm}(\bar{3},2)=6\). In this case, [16, Proposition 13.19] forces at least one of the characters in the block of \(\hat{\mathfrak{S}}_{n}\) to restrict to the sum of two characters of \(\hat{\mathfrak{A}}_{n}\), and hence our block again contains characters from strictly more than \(3\)\(\mathrm{Aut}(K)\)-orbits. **Proposition 5.3**.: _Condition B holds for \(K\) a quasisimple group with \(S:=K/\mathbf{Z}(K)\) of Lie type defined in characteristic \(p\) with a non-exceptional Schur multiplier._ Proof.: We may assume that \(K\) is not an exceptional cover of \(S:=K/\mathbf{Z}(K)\), as the latter have been discussed in Proposition 5.1. Now, every \(p\)-block of \(K\) is either maximal defect or defect zero, by [14, Theorem.]. Hence the defect groups of \(B\) are Sylow \(p\)-subgroups of \(K\). Now, the condition that a Sylow \(p\)-subgroup is abelian and non-cyclic forces \(S=\mathrm{PSL}_{2}(p^{a})\) for some integer \(a\geq 2\), so we may assume that \(K=\mathrm{SL}_{2}(p^{a})\) is the Schur covering group of \(S\). In this situation, the blocks of maximal defect are in bijection with the characters of \(\mathbf{Z}(K)\). Namely, we have \(B_{0}(K)\), which contains all members of \(\mathrm{Irr}(K|1_{\mathbf{Z}(K)})\backslash\{\mathrm{St}\}\) and a second block of maximal defect containing all characters of \(K\) that are nontrivial on \(\mathbf{Z}(K)\). (See [14, Section 5].) By inspection (see [13, Tab. 2.6]) there are four degrees for characters in \(B_{0}(K)\), and three in the second block of maximal defect. Hence, it suffices to show that there are two semisimple characters \(\chi_{s}\) of the same degree \(q\pm 1\) that are not \(\mathrm{Aut}(K)\)-conjugate and nontrivial on \(\mathbf{Z}(K)\). (The latter is equivalent to \(s\notin[K^{*},K^{*}]\) using [15, Proposition 11.4.12 and Remark 11.4.14]). Since \(a\geq 2\), \(p^{a}-1\) must have at least two distinct divisors, so we consider \(x_{1},x_{2}\in C_{p^{a}-1}\) with these orders. Let \(s_{i}:=\mathrm{diag}(x_{i},1)\in\widehat{K}^{*}:=\mathrm{GL}_{2}(p^{a})\) for \(i=1,2\). Note that \(s_{i}\notin[\widetilde{K},\widetilde{K}]=\mathrm{SL}_{2}(p^{a})\). Further, \(s_{1}^{\alpha}\) cannot be conjugate to \(s_{2}z\) for any \(z\in\mathbf{Z}(\widetilde{K})\) and \(\alpha\in\mathrm{Aut}(K)\). Hence the two semisimple characters \(\chi_{s_{i}}\) of \(\widetilde{K}\) for \(i=1,2\) cannot be \(\mathrm{Aut}(K)\)-conjugate and restrict to distinct characters of \(K\). Hence constituents of these restrictions are not \(\mathrm{Aut}(K)\)-conjugate. This leaves us to consider groups \(S\) of Lie type in non-defining characteristic. Recall that by Proposition 5.1, we may assume that \(S\) does not have an exceptional Schur multiplier. 
Hence the Schur covering group of \(S\) is of the form \(G=\mathbf{G}^{F}\), where \(\mathbf{G}\) is a simple, simply connected algebraic group and \(F\colon\mathbf{G}\to\mathbf{G}\) is a Frobenius endomorphism endowing \(\mathbf{G}\) with an \(\mathbb{F}_{q}\)-rational structure, where \(p\nmid q\). Given a semisimple \(s\in G^{*}\) of \(p^{\prime}\)-order, a fundamental result of Broue-Michel shows that the set \(\mathcal{E}_{p}(G,s)\) is a union of \(p\)-blocks of \(G\), where \(\mathcal{E}_{p}(G,s)\) is obtained as the union of series \(\mathcal{E}(G,st)\) as \(t\) runs over elements of \(p\)-power order in \(\mathbf{C}_{G^{*}}(s)\). (See [1, Theorem 9.12].) We first dispense of the Suzuki and Ree groups. **Proposition 5.4**.: _Condition B holds when \(S=K/Z\) a Suzuki, Ree, or triality group \({}^{2}\mathrm{B}_{2}(q)\), \({}^{2}\mathrm{G}_{2}(q)\), \({}^{2}\mathrm{F}_{4}(q)\), or \({}^{3}\mathrm{D}_{4}(q)\) with \(p\geqslant 3\) a prime dividing \(|S|\) and not dividing \(q\)._ Proof.: Note that the Schur multiplier for \(S\) is trivial or \(S\) was considered already in Proposition 5.1. Hence, we let \(K=S\). Further, \(\mathrm{Aut}(S)/S\) is cyclic, generated by field automorphisms. For \(p\geqslant 3\) a prime not dividing \(q^{2}\), the Sylow \(p\)-subgroups of \(S={}^{2}\mathrm{B}_{2}(q^{2})\) and \(S={}^{2}\mathrm{G}_{2}(q^{2})\) are cyclic. So, first let \(K={}^{2}\mathrm{F}_{4}(q^{2})\) with \(q^{2}=2^{2n+1}\). Note that \(K^{*}=K\) is self-dual. In this case, the semisimple classes, centralizers, and maximal tori are given in [15], and the blocks are studied in [14]. First, suppose that \(p\mid(q^{2}-1)\). Then \(K\) has a unique unipotent block (namely, \(B_{0}(K)\)) with noncyclic defect group, which contains more than \(3\) characters of distinct degree. Similarly, there is a unique noncyclic block of positive defect in each series \(\mathcal{E}(K,s)\) for \(s\in\{t_{1},t_{2},t_{3}\}\), with \(t_{i}\) as in [15], using [14, Bem. 1]. The remaining blocks of positive defect are cyclic. If \(s\) is one of the classes of the form \(t_{1}\) or \(t_{2}\), then this noncyclic block contains two characters from \(\mathcal{E}(K,s)\) with distinct degrees. The centralizers \(\mathbf{C}_{K}(s)\) contain the maximal torus \(\mathbb{Z}_{q^{2}-1}^{2}\), from which we may obtain \(t,t^{\prime}\in\mathbf{C}_{K}(s)_{p}\) that are not \(\mathrm{Aut}(K)\)-conjugate (taking, for example, \(p\)-elements from classes \(t_{1}\) and \(t_{2}\)). This yields four characters in the block that are not \(\mathrm{Aut}(K)\)-conjugate, as desired. For \(s\) of the form \(t_{3}\), we have \(\mathbf{C}_{K}(s)\) is the full maximal torus \(\mathbb{Z}_{q^{2}-1}^{2}\), and for any \(t\in\mathbf{C}_{K}(s)_{p}\), we have \(\mathbf{C}_{K}(st)=\mathbf{C}_{K}(s)\). Hence we see that every irreducible character in this block has the same degree. When \(p\nmid(q^{2}-1)\), each \(\mathcal{E}_{p}(K,s)\) contains at most one block of positive defect (see [14, Bem. 1]). First, assume \(p\mid(q^{2}+1)\). Here, the noncyclic blocks correspond to \(s\in\{t_{4},t_{5},t_{14}\}\). The set \(\mathcal{E}(K,s)\) contains \(3\), \(2\), \(1\) distinct character degrees, respectively, in these cases, and each \(\mathbf{C}_{K}(s)\) contains the maximal torus \(\mathbb{Z}_{q^{2}+1}^{2}\). As before, there is only one character degree in the block in the latter case. In the other cases, we argue analogously to the previous paragraph to obtain four characters in the block that are not \(\mathrm{Aut}(K)\)-conjugate. 
If instead \(p\mid(q^{4}+1)\), then there are three distinct character degrees in \(\mathcal{E}(K,s)\) with \(s\in\{t_{7},t_{9}\}\). Then considering any character in \(\mathcal{E}(K,st)\) with \(t\in\mathbf{C}_{K}(s)_{p}\), we obtain a fourth character in the block that is not \(\operatorname{Aut}(K)\)-conjugate to these three. If instead \(s\in\{t_{12},t_{13}\}\), we obtain as before that every character in the block has the same degree. The remaining blocks in this case have cyclic defect groups.

Now, let \(K={}^{3}\mathrm{D}_{4}(q)\). In this case, the blocks have been studied in [1]. Using the results there, we may argue analogously to the situation above.

In the remaining cases, we would hope to appeal to the strategy employed in [15, Section 3]. Namely, with the above results, the results of loc. cit. largely reduce the problem of proving Condition B to the following:

**Condition 5.5**.: _Let \(\mathbf{H}\) be a simple, simply connected reductive group, let \(F\colon\mathbf{H}\to\mathbf{H}\) be a Frobenius morphism, and let \(H=\mathbf{H}^{F}\) be the corresponding finite group of Lie type. Let \(B\) be a quasi-isolated \(p\)-block of \(H\) with an elementary abelian defect group \(D\). Then_ \[k_{\operatorname{Aut}(H)}(B)\geqslant\left\{\begin{array}{ll}4&\text{ if $D$ is not cyclic}\\ 3&\text{ if $D$ is cyclic}\end{array}\right.\]

Indeed, from our above results, we may assume that \(S\) is a group of Lie type defined in characteristic distinct from \(p\) and that the Schur covering group for \(S\) is \(G=\mathbf{G}^{F}\), where \(\mathbf{G}\) is a simple, simply connected group with \(F\) a Frobenius endomorphism. Note then that \(K\) is a quotient of \(G\) by some central subgroup and that, from our assumption that \(p\nmid Z\) in Condition B, [15, Theorem 9.9(c)] tells us that it suffices to prove Condition B when \(K=G/\mathbf{Z}(G)_{p}\), where \(\mathbf{Z}(G)_{p}\) is the Sylow \(p\)-subgroup of \(\mathbf{Z}(G)\). Our assumption that \(p\geqslant 3\) then means that we may assume that \(K=G\) unless \(S=\operatorname{PSL}_{n}^{\epsilon}(q)\) with \(p\mid(q-\epsilon)\) or \(S=\operatorname{E}_{6}^{\epsilon}(q)\) with \(p=3\mid(q-\epsilon)\). Then indeed, when \(K=G\), Condition 5.5 implies Condition B by [15, Proposition 3.9 and Lemma 3.12] (see also Remarks 3.10 and 3.11 of loc. cit.) and the fact that the Bonnafe-Rouquier correspondence preserves isomorphism types of defect groups when the defect is abelian, by [1, Theorem 7.16]. Of course, in the cases of \(S=\operatorname{PSL}_{n}^{\epsilon}(q)\) with \(p\mid(q-\epsilon)\) or \(S=\operatorname{E}_{6}^{\epsilon}(q)\) with \(p=3\mid(q-\epsilon)\), some additional work is needed, as was the case in [15, Prop. 3.16(b) and Theorem 3.21].

However, this method is not quite sufficient for completing the proof of Condition B. Unfortunately, although we see by following the proofs in [15, Sections 3.4-3.6] that Condition 5.5 holds in many situations, it turns out that there are indeed cyclic quasi-isolated blocks with \(k_{\operatorname{Aut}(G)}(B)=2\). (This is already pointed out in [15] after the statement of Theorem 3.1.) We list some additional situations in the following:

_Example 5.6_.: Let \(G=\operatorname{SL}_{n}^{\epsilon}(q)\) and let \(\widetilde{G}=\operatorname{GL}_{n}^{\epsilon}(q)\). Let \(p\) be an odd prime not dividing \(q\) and let \(B\) be a \(p\)-block of \(G\) with positive defect. The following situations lead to exceptions to Condition 5.5:
(I) If \(B\) is noncyclic:

* \(n=3\), \(p=5\mid\mid(q-\epsilon)\), and \(B\) lies under a block \(\widetilde{B}\) of \(\widetilde{G}=\operatorname{GL}_{3}^{\epsilon}(q)\) indexed by a semisimple \(5^{\prime}\)-element \(\widetilde{s}\in\widetilde{G}\) with \(\mathbf{C}_{\widetilde{G}}(\widetilde{s})\cong C_{q-\epsilon}^{3}\). In this case, \(k_{\operatorname{Aut}(G)}(B)\geqslant 3\) (and equality can occur) and \(|\mathrm{cd}(\widetilde{B})|=1\). Note: this includes the quasi-isolated block of \(\operatorname{SL}_{3}(q)\) under the block indexed by the semisimple element \(\operatorname{diag}(1,\zeta_{3},\zeta_{3}^{-1})\), where \(|\zeta_{3}|=3\). (See also Remark 5.7.)
* \(n=3\), \(p=3\mid\mid(q-\epsilon)\), and \(B\) lies under a block \(\widetilde{B}\) of \(\widetilde{G}=\operatorname{GL}_{3}^{\epsilon}(q)\) indexed by a semisimple \(3^{\prime}\)-element \(\widetilde{s}\in\widetilde{G}\) with \(\mathbf{C}_{\widetilde{G}}(\widetilde{s})\cong C_{q-\epsilon}^{3}\). In this case, \(k_{\operatorname{Aut}(G)}(B)\geq 3\) (and equality can occur), \(|\mathbf{Z}(G)|=3\), and the block of \(S\) contained in \(B\) is cyclic.
* \(p=3\mid\mid(q+\epsilon)\) and \(B\) lies under a block \(\widetilde{B}\) of \(\widetilde{G}=\operatorname{GL}_{n}^{\epsilon}(q)\) with defect group \(C_{3}^{2}\leq C_{q+\epsilon}^{2}\).

(II) If \(B\) is cyclic:

* \(n=2\), \(p\mid\mid(q-\epsilon)\), and \(B\) lies under a block \(\widetilde{B}\) of \(\widetilde{G}=\operatorname{GL}_{2}^{\epsilon}(q)\) indexed by a semisimple \(p^{\prime}\)-element \(\widetilde{s}\in\widetilde{G}\) with \(\mathbf{C}_{\widetilde{G}}(\widetilde{s})\cong C_{q-\epsilon}^{2}\). Here \(k_{\operatorname{Aut}(G)}(B)\geq 2\) (and equality can occur). Note: this includes the quasi-isolated block of \(\operatorname{SL}_{2}(q)\) under the block indexed by the semisimple element \(\operatorname{diag}(-1,1)\).
* \(B\) lies under a block \(\widetilde{B}\) of \(\widetilde{G}\) indexed by a semisimple \(p^{\prime}\)-element with \(\mathbf{C}_{\widetilde{G}}(\widetilde{s})\cong C_{q^{\delta}-\eta}\) where \(\delta=n\) and \(p\mid\mid(q^{\delta}-\eta)\). Here \(k_{\operatorname{Aut}(G)}(B)\geq 2\) (and equality can occur).

_Remark 5.7_.: We remark that some of the exceptions given in Example 5.6 mean that Condition 5.5 is not always feasible. Further, the first exception of (I) yields examples of \(5\)-blocks with \(k_{\operatorname{Aut}(S)}(B)=3\), meaning that Condition B will also not always hold. For example, let \(K=\operatorname{SL}_{3}(q)\) where \(q=q_{0}^{4}\), \(q_{0}\equiv 2\pmod{3}\), \(q_{0}\equiv 3\pmod{5}\), and let \(p=5\mid\mid(q-1)\). Let \(\zeta\in\mathbb{F}_{q}^{\times}\) with \(|\zeta|=3\). Then a block \(B\) of \(K\) lying below the (unique) block \(\widetilde{B}\) of \(\operatorname{GL}_{3}(q)\) in \(\mathcal{E}_{5}(\operatorname{GL}_{3}(q),s)\) with \(s=\operatorname{diag}(\zeta,\zeta^{-1},1)\) satisfies \(k_{\operatorname{Aut}(S)}(B)=3\) and \(D\cong C_{5}\times C_{5}\). Further, \(|\mathrm{cd}(B)|=2\) although \(|\mathrm{cd}(\widetilde{B})|=1\).

We also remark that it is still not known whether there are \(3\)-blocks with \(3\) irreducible characters with defect group \(\mathsf{C}_{3}\times\mathsf{C}_{3}\). This problem appeared first in [10] and has come up often. See, for instance, [1, p. 677]. Our proof of Theorem C shows that if one could prove Condition B with the additional hypothesis that \(D=\mathsf{C}_{3}\times\mathsf{C}_{3}\), then this problem would have a negative answer.
On the other hand, a proof of Condition B when \(|D|>C\) for some universal constant \(C\) would settle the case \(k(B)=3\) of Brauer's Problem 21 for arbitrary groups.
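To get a concrete sense of the size of the explicit bound in Theorem 3.5, the following short Python sketch (purely illustrative; the helper names are ours) evaluates \(\log_{10}g(k)\) for a few small values of \(k\), directly from the definitions of \(f\) and \(g\) in Section 3. Working with logarithms is unavoidable, since \(g(k)\) itself is far too large to write down.

```python
from math import factorial, log10

def log10_f(k):
    # f(k) = k^k * (k! * k)^(4 * (k! * k)^3), as defined before Theorem 3.4
    m = factorial(k) * k
    return k * log10(k) + 4 * m**3 * log10(m)

def log10_g(k):
    # g(k) = 2^(2^k) * f(k)^k, as defined before Theorem 3.5
    return 2**k * log10(2) + k * log10_f(k)

# log10 g(k) is roughly the number of decimal digits of the bound on |G|_p
for k in range(7, 11):
    print(f"k(B_0(G)) = {k}:  log10 g(k) ~ {log10_g(k):.3e}")
```

Even for \(k=7\), the smallest value not excluded by Remark 2.2, the bound has on the order of \(10^{15}\) decimal digits, which is consistent with the discussion in Remark 3.7.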
2309.13995
Influence of density and viscosity on deformation, breakage, and coalescence of bubbles in turbulence
We investigate the effect of density and viscosity differences on a swarm of large and deformable bubbles dispersed in a turbulent channel flow. For a given shear Reynolds number, Re=300, and a constant bubble volume fraction, Phi=5.4%, we perform a campaign of direct numerical simulations of turbulence coupled with a phase-field method accounting for interfacial phenomena. For each simulation, we vary the Weber number (We, ratio of inertial to surface tension forces), the density ratio (r, ratio of bubble density to carrier flow density) and the viscosity ratio (e, ratio of bubble viscosity to carrier flow viscosity). Specifically, we consider two Weber numbers, We=1.50 and We=3.00, four density ratios, from r=1 down to r=0.001, and five viscosity ratios, from e=0.01 up to e=100. Our results show that density differences have a negligible effect on breakage and coalescence phenomena, while a much stronger effect is observed when changing the viscosity of the two phases. Increasing the bubble viscosity with respect to the carrier fluid viscosity damps turbulence fluctuations, makes the bubble more rigid, and strongly prevents large deformations, thus reducing the number of breakage events. Local deformations of the interface, on the contrary, depend on both density and viscosity ratios: as the bubble density is increased, a larger number of small-scale deformations appear on the interface of the bubble, while the opposite effect is observed for increasing bubble viscosities. We report that these effects are mostly visible for larger Weber numbers, where surface forces are weaker. Finally, we characterize the flow inside the bubbles; as the bubble density is increased, we observe, as expected, an increase in the turbulent kinetic energy (TKE) inside the bubble, while as the bubble viscosity is increased, we observe a mild reduction of the TKE inside the bubble and a strong suppression of turbulence.
Francesca Mangani, Giovanni Soligo, Alessio Roccon, Alfredo Soldati
2023-09-25T09:54:40Z
http://arxiv.org/abs/2309.13995v1
# Influence of density and viscosity on deformation, breakage, and coalescence of bubbles in turbulence

###### Abstract

We numerically investigate the effect of density and viscosity differences on a swarm of large and deformable bubbles dispersed in a turbulent channel flow. For a given shear Reynolds number, \(Re_{\tau}=300\), and a constant bubble volume fraction, \(\Phi\simeq 5.4\%\), we perform a campaign of direct numerical simulations (DNS) of turbulence coupled with a phase-field method (PFM) accounting for interfacial phenomena. For each simulation, we vary the Weber number (\(We\), ratio of inertial to surface tension forces), the density ratio (\(\rho_{r}\), ratio of bubble density to carrier flow density) and the viscosity ratio (\(\eta_{r}\), ratio of bubble viscosity to carrier flow viscosity). Specifically, we consider two Weber numbers, \(We=1.50\) and \(We=3.00\), four density ratios, from \(\rho_{r}=1\) down to \(\rho_{r}=0.001\), and five viscosity ratios, from \(\eta_{r}=0.01\) up to \(\eta_{r}=100\). Our results show that density differences have a negligible effect on breakage and coalescence phenomena, while a much stronger effect is observed when changing the viscosity of the two phases. Increasing the bubble viscosity with respect to the carrier fluid viscosity damps turbulence fluctuations, makes the bubble more rigid and strongly prevents large deformations, thus reducing the number of breakage events. Local deformations of the interface, on the contrary, depend on both density and viscosity ratios: as the bubble density is increased, a larger number of small-scale deformations, small dimples and bumps, appear on the interface of the bubble. The opposite effect is observed for increasing bubble viscosities: the interface of the bubbles becomes smoother. We report that these effects are mostly visible for larger Weber numbers, where surface forces are weaker. Finally, we characterize the flow inside the bubbles; as the bubble density is increased, we observe, as expected, an increase in the turbulent kinetic energy (TKE) inside the bubble, while as the bubble viscosity is increased, we observe a mild reduction of the TKE inside the bubble and a strong suppression of turbulence.

+ Footnote †: Now at Complex Fluids and Flows Unit, OIST, 904-0495 Okinawa, Japan

## I. Introduction

Interactions among turbulence and deformable interfaces are common in many physical instances, from ocean wave formation [1; 2] to atomization processes [3], as well as drops and bubbles entrained in a turbulent flow [4; 5; 6]. The outcome of these interactions is of fundamental importance as it controls the exchanges of heat, mass, and momentum across the interface and thus between the two phases. The study of turbulence-interface interactions, however, is a non-trivial task as these interactions are governed by a physics acting at very different spatio-temporal scales: from the largest problem scale, down to the Kolmogorov scale of turbulence and further down to the molecular scale of the interface. This multi-scale nature makes the investigation of multiphase turbulence very challenging. In particular, experimental investigations using optical techniques are usually limited to small volume fractions due to the difficulty of accessing phases with heterogeneous optical properties [7; 8; 9] and the limited range of length scales that can be possibly measured.
In this scenario, despite some limitations, numerical simulations represent an essential tool to investigate multiphase flows as they allow to access detailed space- and time-resolved information on the flow field and dispersed phase. Specifically, direct numerical simulation (DNS), in which all relevant scales of turbulence are resolved, proved to be a tool of paramount importance for a deeper understanding of single-phase [10; 11] and multiphase turbulence [4; 5; 6]. In this work we focus on the interactions of a swarm of large and deformable bubbles or drops (bubbles hereinafter without any loss of generality) with wall-bounded turbulence (turbulent channel flow). This setup has been widely used in the past to investigate different aspects of bubbly flows, from bubbles shape, deformation and clustering to the flow modifications produced by the bubbles themselves. In the pioneering works of Lu & Tryggvason [12; 13], the effects of the bubble size and deformability were investigated: they observed that as bubbles become more deformable, they move towards the middle of the channel and have a relatively small effect on the flow-rate. Scarbolo _et al._[14; 15], considering a matched density and viscosity system, investigated the effect of the surface tension, observing that surface tension forces play a key role in determining the dispersed phase topology. Roccon _et al._[16] studied the effect of the bubble viscosity, finding that for small surface tension values, larger internal viscosities reduce the drop deformability. Recently, Soligo _et al._[17; 18], considering also the presence of a soluble surfactant, investigated the surfactant effects on drop morphology [17] and flow behavior [18]. Finally, Hassberger _et al._[19] analyzed the coherent structures obtained in a bubble-laden turbulent channel flow while Cannon _et al._[20] investigated the role played by droplets coalescence on drag in turbulent channel flows. The foremost goal of this paper is to improve the fundamental understanding of bubble-bubble and bubble-turbulence interactions. Indeed, bubbles transported by a turbulent flow are characterized by complex dynamics, as they collide, coalesce and break apart. This behavior is governed by the forces generated by the surrounding continuous phase, acting on the surface of the bubbles with shear and normal stresses, and by the response of bubbles, which depends on their surface tension and their density and viscosity. The ultimate competition among these forces determines the number, shape, and deformation of the bubbles. In this work, we want to extend our previous works [14; 16] and provide a comprehensive analysis on the effects of density ratio (ratio between the density of the bubble phase over the dispersed phase), viscosity ratio (ratio between the viscosity of the bubble phase over the dispersed phase) and surface tension (controlled by the Weber number, ratio of inertial over surface tension contributions) on the multiphase system. Specifically, the first objective is to investigate the effects of these parameters on the dispersed phase topology and its topological modifications (coalescence and breakage events), and to characterize the shape and deformation of the bubbles. The second objective of this work is to characterize the global and local flow modifications produced by bubbles on the turbulent channel flow behavior. 
To this aim, we build and analyze a database of direct numerical simulations of turbulent channel flows laden with deformable bubbles, considering different values of density ratios, viscosity ratios, and surface tension. The numerical framework of the simulations relies on a direct solution of the Navier-Stokes equations coupled with a phase-field method. Direct solutions of the Navier-Stokes equations are used to accurately resolve all the relevant turbulence scales, while the phase-field method [21; 22] - an interface capturing approach that relies on an order parameter to define the local concentration of each phase - is used to describe in a thermodynamically consistent manner the motion of the deformable interface and its topological modifications (i.e. coalescence and breakage events). The paper is organized as follows: in section II, we introduce the numerical method, the simulation setup, and the parameters of the simulations. Then, in section III, we present the results obtained from the analysis of the simulation database. First, we focus on the effects of density and viscosity ratios and surface tension values on the topology of the dispersed phase and its topological changes (breakage and coalescence). Secondly, we evaluate the effects of these parameters on the overall interfacial area and curvature of the bubble interface. Thirdly, we study the effects of density and viscosity ratios and Weber number on the mean velocity profiles and on the turbulent kinetic energy (TKE) of the bubbles. Finally, we summarize the results and draw our conclusions in section IV.

## II Methodology

We consider the case of a swarm of bubbles injected in a turbulent channel flow with a rectangular cross-section. The dispersed and carrier phases are characterized by density \(\rho_{d}\) and \(\rho_{c}\), and viscosity \(\eta_{d}\) and \(\eta_{c}\), where the subscripts \(d\) and \(c\) identify the dispersed and carrier phase, respectively. We define the density ratio and viscosity ratio as \(\rho_{r}=\rho_{d}/\rho_{c}\) and \(\eta_{r}=\eta_{d}/\eta_{c}\), respectively. The interface that separates the two phases is characterized by a constant and uniform value of the surface tension, \(\sigma\). To describe the dynamics of the system, direct numerical simulations (DNS) of the Navier-Stokes equations, used to describe the flow field, are coupled with a phase-field method (PFM), used to describe interfacial phenomena [21; 22].

### Modeling of interfacial phenomena

The phase-field method uses an order parameter, the phase field \(\phi\), to identify the two phases: the order parameter is uniform in the bulk of each phase (\(\phi=\pm 1\)) and undergoes a smooth transition across the interface. Indeed, the sharp interface is replaced by a thin transition layer. The transport of the phase field \(\phi\) is described by the Cahn-Hilliard equation, which in dimensionless form reads as: \[\frac{\partial\phi}{\partial t}+\mathbf{u}\cdot\nabla\phi=\frac{1}{Pe}\nabla^{2}\mu+f_{p}\,, \tag{1}\] where \(\mathbf{u}=(u,v,w)\) is the velocity vector, \(Pe\) is the Peclet number, \(\mu\) is the chemical potential and \(f_{p}\) is the penalty flux introduced with the profile-corrected formulation of the phase-field method [23; 24; 25].
The Peclet number is defined as follows: \[Pe=\frac{u_{\tau}h}{\mathcal{M}\beta}\,, \tag{2}\] where \(u_{\tau}=\sqrt{\tau_{w}/\rho_{c}}\) is the friction velocity (being \(\tau_{w}\) the shear stress at the wall and \(\rho_{c}\) the carrier phase density), \(h\) is the channel half-height, \(\mathcal{M}\) is the mobility parameter and \(\beta\) is a positive constant introduced to make the chemical potential dimensionless. The Peclet number identifies the ratio between the diffusive time-scale, \(h^{2}/\mathcal{M}\beta\), and the convective time-scale, \(h/u_{\tau}\), of the interface. The chemical potential \(\mu\) is defined as the variational derivative of a Ginzburg-Landau free-energy functional, the expression of which is selected to represent an immiscible binary mixture of isothermal fluids [26; 17; 25]. The functional is composed by the sum of two different contributions: the first contribution, \(f_{0}\), accounts for the tendency of the system to separate into the two pure stable phases, while the second contribution, \(f_{mix}\), is a mixing term accounting for the energy stored at the interface. The mathematical expression of the functional is: \[\mathcal{F}[\phi,\nabla\phi]=\int_{\Omega}\bigg{(}\underbrace{\frac{(\phi^{ 2}-1)^{2}}{4}}_{f_{0}}+\underbrace{\frac{Ch^{2}}{2}\left|\nabla\phi\right|^{2} }_{f_{mix}}\bigg{)}\mathrm{d}\Omega\,, \tag{3}\] where \(\Omega\) is the domain considered and \(Ch\) is the Cahn number, which represents the dimensionless thickness of the thin interfacial layer between the two fluids (\(\xi\) is the physical thickness of the interface). \[Ch=\frac{\xi}{h} \tag{4}\] From equation (3), the expression of the chemical potential can be derived as the functional derivative with respect to the order parameter: \[\mu=\frac{\delta\mathcal{F}[\phi,\nabla\phi]}{\delta\phi}=\phi^{3}-\phi-Ch^{2} \nabla^{2}\phi\,. \tag{5}\] At the equilibrium, the chemical potential will be constant throughout all the domain. The equilibrium profile for a flat interface can thus be obtained solving \(\nabla\mu=0\), hence obtaining: \[\phi_{eq}=\tanh\bigg{(}\frac{s}{\sqrt{2}Ch}\bigg{)} \tag{6}\] where \(s\) is a coordinate normal to the interface. Finally, \(f_{p}\) is the penalty-flux employed in the profile-corrected formulation of the phase-field method. This formulation is an improvement to the standard phase-field formulation: it allows to better maintain the equilibrium interfacial profile and it overcomes the drawbacks of the method (e.g. mass leakages among the phases and misrepresentation of the interfacial profile [23; 27]). This penalty flux is defined as: \[f_{p}=\frac{\lambda}{Pe}\left[\nabla^{2}\phi-\frac{1}{\sqrt{2}Ch}\nabla\cdot \left((1-\phi^{2})\frac{\nabla\phi}{|\nabla\phi|}\right)\right]\,, \tag{7}\] where the numerical parameter \(\lambda\) can be set via the scaling \(\lambda=0.0625/Ch\)[25]. Before proceeding, it is worth to briefly discuss the main capabilities and limitations of interface-resolved simulations in describing topological modifications of the interface [6; 17; 28]. The numerical description of breakages and coalescences is indeed one of the most challenging aspects of interface-resolved simulation methods. A fully-resolved simulation of topological changes would require resolving all the scales, from the molecular scale of the interface [29] up to the largest scales of the flow. This type of simulation, however, is way beyond the capabilities of any existing supercomputing facility. 
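As a concrete illustration of equations (5) and (6), the minimal sketch below evaluates the chemical potential on a one-dimensional grid and checks that the hyperbolic-tangent equilibrium profile yields a (numerically) constant chemical potential. The grid size and Cahn number are illustrative values; this is only a sketch, not part of the solver described in this section.

```python
import numpy as np

# Minimal 1D check of equations (5)-(6): the equilibrium profile
# phi_eq = tanh(s / (sqrt(2) Ch)) should give a spatially constant (here, zero)
# chemical potential mu = phi^3 - phi - Ch^2 * d2phi/ds2.
Ch = 0.02                              # Cahn number (illustrative)
s = np.linspace(-0.5, 0.5, 2001)       # coordinate normal to a flat interface
ds = s[1] - s[0]

phi_eq = np.tanh(s / (np.sqrt(2.0) * Ch))

# Second derivative with a standard central difference (interior points only).
d2phi = np.zeros_like(phi_eq)
d2phi[1:-1] = (phi_eq[2:] - 2.0 * phi_eq[1:-1] + phi_eq[:-2]) / ds**2

mu = phi_eq**3 - phi_eq - Ch**2 * d2phi

# Away from the domain ends, mu should vanish to discretization accuracy.
print("max |mu| in the interior:", np.abs(mu[1:-1]).max())
```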
The common choice is to avoid resolving the small interfacial scales and to find a way to approximate their dynamics on a much larger scale. Here, following a similar approach, we limit the resolved range to the scales of turbulence: from the Kolmogorov length scale up to the problem size. Thus, phenomena occurring at scales smaller than Kolmogorov are smeared out on the smallest resolved scale. This choice, however, influences the description of coalescence and breakage events. For coalescences, a part of the physics involved in the coalescence process [30] (i.e. film drainage and rupture) cannot be directly resolved. As a result, regardless of the approach employed to describe coalescence (models for interface tracking methods [31; 32] or implicit description for interface capturing methods [6; 33]), numerical simulations struggle to predict physical coalescence, with this inaccuracy referred to as numerical coalescence. For breakages, the picture is different and their numerical description is less troublesome. Indeed, since breakage is a very quick phenomenon, it can be well approximated without resolving the dynamics at the molecular scale, and there is evidence that the Navier-Stokes equations alone provide an adequate description of a breakage event [34]. Besides, the small time scale of a breakage limits the impact of the approximation on the overall flow dynamics [35; 32]. Therefore, the description of breakages on turbulence-resolved grids is considered to be rather accurate, although in the pinch-off region the smallest interfacial features, characterized by high curvature, may not be perfectly resolved.

### Hydrodynamics

To describe the hydrodynamics of the multiphase system, the Cahn-Hilliard equation is coupled with the Navier-Stokes equations. The presence of a deformable interface (and of the corresponding surface tension forces) is accounted for by introducing an interfacial term in the Navier-Stokes equations. Recalling that in the present study we consider two fluids having different densities and viscosities, we use here the formulation of the continuity and Navier-Stokes equations proposed by Dong & Shen [36]. The resulting governing equations for the hydrodynamics read as follows: \[\nabla\cdot\mathbf{u}=0\,, \tag{8}\] \[\rho(\phi)\left(\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}\right)=-\nabla p+\frac{1}{Re_{\tau}}\nabla\cdot\left[\eta(\phi)(\nabla\mathbf{u}+\nabla\mathbf{u}^{T})\right]+\frac{3}{\sqrt{8}}\frac{Ch}{We}\nabla\cdot\mathsf{T_{c}}\,, \tag{9}\] where \(\mathbf{u}=(u,v,w)\) is the velocity vector, \(p\) is the pressure field, \(\mathsf{T_{c}}\) is the Korteweg tensor and \(\rho(\phi)\) and \(\eta(\phi)\) are the density and viscosity fields, respectively. The density and viscosity fields are dimensionless scalar functions that account for the local value of density and viscosity respectively [37; 38; 39]; the carrier phase properties are used to make these fields dimensionless. The local density and viscosity are assumed to be linear functions of the phase field: \[\rho(\phi)=1+(\rho_{r}-1)\frac{\phi+1}{2}\,, \tag{10}\] \[\eta(\phi)=1+(\eta_{r}-1)\frac{\phi+1}{2}\,, \tag{11}\] where \(\rho_{r}\) and \(\eta_{r}\) are the density and viscosity ratios, respectively. The Korteweg tensor [40], used to account for the surface tension forces, is defined as follows: \[\mathsf{T_{c}}=|\nabla\phi|^{2}\mathbf{I}-\nabla\phi\otimes\nabla\phi\,, \tag{12}\] where \(\mathbf{I}\) is the identity tensor.
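A minimal sketch of how equations (10)-(12) can be evaluated on a discrete field is reported below. It uses simple centred finite differences on a doubly periodic two-dimensional grid instead of the spectral operators of the actual solver, and the phase field, resolution and property ratios are illustrative assumptions.

```python
import numpy as np

# Sketch of equations (10)-(12): local density/viscosity fields and the
# surface tension force (divergence of the Korteweg tensor) on a periodic 2D grid.
n, L = 128, 2.0 * np.pi
dx = L / n
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
Ch, We, rho_r, eta_r = 0.02, 3.0, 0.1, 0.1            # illustrative values

# Illustrative phase field: a circular "bubble" with a tanh profile.
r = np.sqrt((X - L / 2.0) ** 2 + (Y - L / 2.0) ** 2)
phi = np.tanh((0.8 - r) / (np.sqrt(2.0) * Ch * L))

rho = 1.0 + (rho_r - 1.0) * (phi + 1.0) / 2.0          # equation (10)
eta = 1.0 + (eta_r - 1.0) * (phi + 1.0) / 2.0          # equation (11)

def ddx(f, axis):
    # Centred difference with periodic wrapping.
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * dx)

px, py = ddx(phi, 0), ddx(phi, 1)
# Korteweg tensor, equation (12): T_c = |grad phi|^2 I - grad phi (x) grad phi.
Txx, Tyy, Txy = py**2, px**2, -px * py
# Surface tension force entering equation (9): (3/sqrt(8)) (Ch/We) div(T_c).
fx = 3.0 / np.sqrt(8.0) * Ch / We * (ddx(Txx, 0) + ddx(Txy, 1))
fy = 3.0 / np.sqrt(8.0) * Ch / We * (ddx(Txy, 0) + ddx(Tyy, 1))

print("density field range:", rho.min(), rho.max())
print("viscosity field range:", eta.min(), eta.max())
print("max |surface tension force| on the grid:", np.hypot(fx, fy).max())
```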
The dimensionless groups appearing in the Navier-Stokes equations are the shear Reynolds number, \(Re_{\tau}\), and the Weber number, \(We\), which are defined as: \[Re_{\tau}=\frac{\rho_{c}u_{\tau}h}{\eta_{c}}\,,\qquad We=\frac{\rho_{c}u_{\tau}^{2}h}{\sigma}\,. \tag{13}\] The Reynolds number represents the ratio between inertial and viscous forces, while the Weber number is the ratio between inertial and surface tension forces. Both Reynolds and Weber numbers are defined using the carrier phase properties (\(\rho_{c}\) and \(\eta_{c}\)).

### Numerical method

The governing equations (1), (8) and (9) are solved using a pseudo-spectral method, which uses Fourier series along the periodic directions (streamwise and spanwise) and Chebyshev polynomials along the wall-normal direction. The Navier-Stokes and continuity equations are solved using the velocity-vorticity formulation: equation (9) is rewritten as a \(4^{th}\) order equation for the wall-normal component of the velocity \(u_{z}\) and a \(2^{nd}\) order equation for the wall-normal component of the vorticity \(\omega_{z}\) [11; 41]. Equation (1) is also split into two \(2^{nd}\) order equations [22]; this way the governing equations are recast as a coupled system of Helmholtz equations, which can be readily solved. The governing equations are time advanced using an implicit-explicit scheme. For the Navier-Stokes equations, the non-linear term is first rewritten as the sum of a linear and a non-linear contribution [42]. Then, the linear part is integrated using a Crank-Nicolson implicit scheme, while the non-linear part is integrated using an Adams-Bashforth explicit scheme. Likewise, for the Cahn-Hilliard equation, the linear term is integrated using an implicit Euler scheme, while the non-linear term is integrated in time using an Adams-Bashforth scheme. The adoption of the implicit Euler scheme helps damp unphysical high-frequency oscillations that could arise from the steep gradients of \(\phi\).

### Boundary conditions

The resulting set of governing equations is complemented by suitable boundary conditions. For the Navier-Stokes equations, no-slip boundary conditions are enforced at the top and bottom wall (\(z/h=\pm 1\)): \[\mathbf{u}(z/h=\pm 1)=0\,. \tag{14}\] For the Cahn-Hilliard equation, no-flux boundary conditions are applied at the two walls, yielding the following boundary conditions: \[\frac{\partial\phi}{\partial z}(z/h=\pm 1)=0\,,\qquad\frac{\partial^{3}\phi}{\partial z^{3}}(z/h=\pm 1)=0\,. \tag{15}\] Along the streamwise and spanwise directions (\(x\) and \(y\)), periodic boundary conditions are imposed for all variables (Fourier discretization). The adoption of these boundary conditions leads to the conservation of the phase field over time: \[\frac{\partial}{\partial t}\int_{\Omega}\phi\,\mathrm{d}\Omega=0\,. \tag{16}\] This enforces mass conservation of the entire system but does not guarantee the conservation of the mass of each phase [25; 27], as some leakages between the phases may occur. This drawback is rooted in the phase-field method and is here mitigated with the adoption of the profile-corrected formulation. In the present cases, mass leakages are limited to at most 8% of the dispersed phase mass and occur only in the initial transient phase; once the statistically-stationary condition is reached, the mass of each phase remains constant.

### Simulation set-up

We consider a turbulent channel flow at a shear Reynolds number \(Re_{\tau}=300\) for all the cases.
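Before detailing the domain and grid, the implicit-explicit time advancement described in the numerical-method subsection can be illustrated on a model one-dimensional problem: the linear (diffusive) part is advanced with Crank-Nicolson and the non-linear part with Adams-Bashforth, so that each step reduces to a Helmholtz-like solve that is diagonal in Fourier space. The sketch below applies this splitting to the viscous Burgers equation with made-up parameters; it is only an illustration of the splitting, not the channel-flow solver itself.

```python
import numpy as np

# Model problem: du/dt + u du/dx = nu d2u/dx2 on a periodic domain, advanced
# with Crank-Nicolson (linear term) + Adams-Bashforth (non-linear term).
n, L, nu, dt, nsteps = 256, 2.0 * np.pi, 0.05, 1.0e-3, 2000
x = np.linspace(0.0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2.0 * np.pi          # wavenumbers
u = np.sin(x)                                          # illustrative initial condition

def nonlinear(u):
    # -u du/dx evaluated pseudo-spectrally (derivative taken in Fourier space).
    ux = np.fft.ifft(1j * k * np.fft.fft(u)).real
    return -u * ux

N_old = nonlinear(u)                  # first step effectively uses explicit Euler
for _ in range(nsteps):
    N_new = nonlinear(u)
    rhs = np.fft.fft(u + dt * (1.5 * N_new - 0.5 * N_old)) \
        + 0.5 * dt * nu * (-k**2) * np.fft.fft(u)
    # Helmholtz-like solve: (1 + 0.5 dt nu k^2) u_hat^{n+1} = rhs.
    u = np.fft.ifft(rhs / (1.0 + 0.5 * dt * nu * k**2)).real
    N_old = N_new

print("kinetic energy after integration:", 0.5 * np.mean(u**2))
```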
The computational domain has dimensions \(L_{x}\times L_{y}\times L_{z}=4\pi h\times 2\pi h\times 2h\), which corresponds to \(L_{x}^{+}\times L_{y}^{+}\times L_{z}^{+}=3770\times 1885\times 600\) wall units. The domain is discretized with \(N_{x}\times N_{y}\times N_{z}=512\times 256\times 513\) grid points; the computational grid has uniform spacing in the homogeneous directions, while Chebyshev-Gauss-Lobatto points are used in the wall-normal direction. The flow is driven by an imposed constant pressure gradient in the streamwise direction. We consider two surface tension values, which are set via the Weber number: \(We=1.50\) (higher surface tension) and \(We=3.00\) (lower surface tension). The selected values are characteristic of air/water mixtures [43]. For each surface tension value (i.e. for each Weber number), we first keep a unitary density ratio and we analyze the effect of different viscosity ratios: from \(\eta_{r}=0.01\) (less viscous bubbles) up to \(\eta_{r}=100\) (more viscous bubbles). Then, we keep a unitary viscosity ratio and we consider different density ratios: from \(\rho_{r}=1\) (matched density bubbles) down to \(\rho_{r}=0.001\) (lighter bubbles). Finally, to evaluate the combined effect of density and viscosity differences, we consider a case in which both bubble density and viscosity are smaller than those of the carrier fluid: \(\rho_{r}=0.1\) and \(\eta_{r}=0.1\). In addition, we perform a single-phase flow simulation as a reference case and to provide initial velocity fields for the multiphase simulations. It is worthwhile noting that when different properties (i.e. density and viscosity) are considered, the local value of the Reynolds number changes as well as the range of spatiotemporal scales that needs to be resolved to fulfill the DNS requirements. These modifications can be appreciated from table 2 in which we show an estimate of the turbulence length scale inside the dispersed phase (computed from the definition of the Kolmogorov length scale), \(\eta_{k,d}^{+}\), the grid resolution, the final average bubble size, \(\langle d_{eq}^{+}\rangle\), and its root mean square value, \(\text{RMS}(d_{eq}^{+})\), for all the different combinations of density and viscosity ratios considered as well as for the reference single-phase case. The bubble size has been characterized using the equivalent diameter, \(d_{eq}^{+}\), i.e. the diameter of an equivalent spherical bubble with the same volume as the bubble considered [17]: \[d_{eq}^{+}=\left(\frac{6V^{+}}{\pi}\right)^{1/3} \tag{17}\] where \(V^{+}\) is the volume of the bubble. All dimensions are reported in wall units (based on the carrier flow shear Reynolds number) and refer to the channel centre, where most bubbles are located. The Kolmogorov scale, which is used here to provide an estimate of the smallest length scale inside the bubbles, has been computed as follows: \[\eta_{k,d}^{+}=\left(\frac{\eta_{r}^{2}Re_{\tau}^{2}}{\epsilon^{+}}\right)^{1/4} \tag{18}\] where \(\epsilon^{+}\) is the dissipation at the channel center evaluated in the region characterized by \(\phi\geq 0\) (i.e. inside the bubbles), \(\eta_{r}\) is the viscosity ratio and \(Re_{\tau}\) is the shear Reynolds number. We can observe that for almost all the cases presented here, the estimated Kolmogorov scale is of the order of the grid spacing, thus ensuring a correct resolution of all the relevant flow scales.
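The two estimates above are straightforward to evaluate; a short sketch is reported below, where the input volume and dissipation are illustrative values rather than data from the simulation database.

```python
import numpy as np

# Equations (17)-(18): equivalent diameter of a bubble and an estimate of the
# Kolmogorov scale inside the dispersed phase (all quantities in wall units).
def equivalent_diameter(volume_plus):
    return (6.0 * volume_plus / np.pi) ** (1.0 / 3.0)

def kolmogorov_dispersed(eta_r, re_tau, eps_plus):
    return (eta_r**2 * re_tau**2 / eps_plus) ** 0.25

# Illustrative inputs (not taken from the simulation database).
V_plus = 4.0 / 3.0 * np.pi * 60.0**3      # a spherical bubble of radius 60 wall units
print("d_eq^+   =", equivalent_diameter(V_plus))     # recovers 120 wall units
print("eta_k,d^+ =", kolmogorov_dispersed(eta_r=1.0, re_tau=300.0, eps_plus=290.0))
```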
Only for the cases with \(\eta_{r}\leq 0.1\) (most critical cases due to the largest local Reynolds number increase), the smallest flow scales (which are found inside the bubbles) cannot be completely resolved. From table 2, we can also observe that the average bubble size is always at least one order of magnitude larger than the grid spacing. For the phase field, the Cahn number is set to \(Ch=0.02\). This value is selected based on the grid resolution: at least three grid points are required across the interface to accurately describe the steep gradients present [26]. The phase field Peclet number has been set according to the scaling \(Pe=1/Ch=50\), to achieve convergence to the sharp interface limit [27; 44]. More refined grids make it possible to reduce the thickness of the interface and to adopt smaller Cahn numbers. However, the resulting computational cost is much larger: grid resolution needs to be refined along all three directions, as the orientation of the interfacial layer is arbitrary, and the time step has to be reduced as well to satisfy the Courant-Friedrichs-Lewy condition. Overall, the computational cost of a simulation with a halved Cahn number is roughly 16 times larger: grid refinement makes the simulation eight times more expensive and the time step limitation makes the simulation twice as expensive. At the beginning of each simulation, a regular array of 256 spherical droplets with diameter \(d=0.4h\) (corresponding to \(d^{+}=120\) wall units) is initialized in a fully-developed single-phase turbulent channel flow. The total volume fraction of the dispersed phase is \(\Phi=V_{d}/(V_{c}+V_{d})\simeq 5.4\%\), where \(V_{d}\) and \(V_{c}\) are the volumes of the dispersed and carrier phase, respectively. As the array of spherical bubbles is suddenly released in a single-phase turbulent flow, turbulent velocity fluctuations strongly perturb the interfacial profile; during this initial coupling phase, mass leakages among the phases may occur [25; 27]. After this initial transient, the mass of each phase remains constant over time. While the initial condition chosen for the dispersed phase may seem unphysical, after a short transient, memory of the initial condition is completely lost and the results are not affected by the initial condition selected [17]. Different initial conditions have been tested (e.g., the injection of a thin liquid sheet at the channel center) and the same statistically-stationary results were obtained. We selected the current initial configuration as it reduces the time required to reach statistically-stationary conditions.

## III Results

We present here the results obtained from the analysis of the simulation database, starting from the effects of the density ratio, viscosity ratio, and Weber number on the topology of the dispersed phase (number of bubbles) and on its topological changes (coalescence and breakage rates). Then we evaluate the effects of these parameters on the shape and deformation of the bubbles by studying the local curvature of the interface and the time evolution of the interfacial area. Finally, we investigate the flow modifications produced by the bubbles by analyzing the mean velocity profiles and the turbulent kinetic energy inside the bubbles. All the results will be presented according to the following color code: a red-colors scale is used to show the density ratio variations and a blue-colors scale to show the viscosity ratio variations.
The case with both non-matched density and viscosity is represented in green, while the reference case (matched density and matched viscosity) is shown in black. ### Bubbles: number and topological modifications #### iii.1.1 Number of bubbles The topology of the dispersed phase is the direct consequence of the ultimate competition between breakage and coalescence events. To obtain a first qualitative insight of the effects of density ratio, viscosity ratio and Weber number on the statistically-stationary number of bubbles (i.e. once the effect of the initial condition is completely dissipated), we can consider figure 1. Panel (_a_) refers to \(We=1.5\), while panel (_b_) to \(We=3.0\). In each panel of figure 1, four snapshots of the multiphase system at statistically-stationary are arranged in a plot according to the values of density (horizontal axis) and viscosity (vertical axis) ratio of each case. The surface of the bubbles, identified as the \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline System & \(We\) & \(\eta_{r}\) & \(\rho_{r}\) & \(\Delta x^{+}\) & \(\Delta y^{+}\) & \(\Delta z^{+}\) & \(\eta_{h,d}^{+}\) & \(\langle d_{eq}^{+}\rangle\) & RMS(\(d_{eq}^{+}\)) \\ \hline \hline Single-phase & - & - & - & 7.36 & 7.36 & 1.84 & 4.19 & - & - \\ \hline Bubbles-laden & 1.50 & 0.01 & 1.000 & 7.36 & 7.36 & 1.84 & 0.20 & 195.13 & 176.97 \\ Bubbles-laden & 1.50 & 0.10 & 1.000 & 7.36 & 7.36 & 1.84 & 1.04 & 191.16 & 134.04 \\ Bubbles-laden & 1.50 & 1.00 & 1.000 & 7.36 & 7.36 & 1.84 & 5.27 & 226.72 & 123.55 \\ Bubbles-laden & 1.50 & 10.0 & 1.000 & 7.36 & 7.36 & 1.84 & 26.72 & 229.84 & 127.99 \\ Bubbles-laden & 1.50 & 100. & 1.000 & 7.36 & 7.36 & 1.84 & 145.50 & 245.04 & 104.51 \\ Bubbles-laden & 1.50 & 1.00 & 0.001 & 7.36 & 7.36 & 1.84 & 887.50 & 208.15 & 150.61 \\ Bubbles-laden & 1.50 & 1.00 & 0.010 & 7.36 & 7.36 & 1.84 & 185.80 & 230.31 & 142.16 \\ Bubbles-laden & 1.50 & 1.00 & 0.100 & 7.36 & 7.36 & 1.84 & 30.66 & 180.60 & 142.73 \\ Bubbles-laden & 1.50 & 0.10 & 0.100 & 7.36 & 7.36 & 1.84 & 5.86 & 186.00 & 138.08 \\ \hline Bubbles-laden & 3.00 & 0.01 & 1.000 & 7.36 & 7.36 & 1.84 & 0.19 & 81.37 & 74.77 \\ Bubbles-laden & 3.00 & 0.10 & 1.000 & 7.36 & 7.36 & 1.84 & 0.94 & 84.06 & 76.15 \\ Bubbles-laden & 3.00 & 1.00 & 1.000 & 7.36 & 7.36 & 1.84 & 4.87 & 87.56 & 79.55 \\ Bubbles-laden & 3.00 & 10.0 & 1.000 & 7.36 & 7.36 & 1.84 & 24.96 & 89.70 & 77.94 \\ Bubbles-laden & 3.00 & 100. & 1.000 & 7.36 & 7.36 & 1.84 & 140.3 & 203.62 & 111.09 \\ Bubbles-laden & 3.00 & 1.00 & 0.001 & 7.36 & 7.36 & 1.84 & 818.2 & 87.58 & 77.74 \\ Bubbles-laden & 3.00 & 1.00 & 0.010 & 7.36 & 7.36 & 1.84 & 142.0 & 86.54 & 76.25 \\ Bubbles-laden & 3.00 & 1.00 & 0.100 & 7.36 & 7.36 & 1.84 & 27.45 & 91.16 & 81.28 \\ Bubbles-laden & 3.00 & 0.10 & 0.100 & 7.36 & 7.36 & 1.84 & 4.63 & 83.62 & 75.41 \\ \hline \hline \end{tabular} \end{table} Table 1: Grid resolution, \(\Delta x^{+}\), \(\Delta y^{+}\) and \(\Delta z^{+}_{c}\), Kolmogorov scale at the channel centre in the dispersed phase, \(\eta^{+}_{k,d}\), average equivalent diameter of the bubbles, \(\langle d_{eq}^{+}\rangle\), and root mean square of the bubble equivalent diameter, RMS(\(d_{eq}^{+}\)), for all the different simulations performed. All dimensions are reported in wall units; Kolmogorov scale is measured at the channel centre. Single-phase flow values at the channel centre have been also reported as a reference. 
Figure 1: Top view of four statistically-stationary configurations (\(t^{+}=4000\)) for different combinations of density ratios (\(\rho_{r}=0.001\) and \(1\)) and viscosity ratios (\(\eta_{r}=0.01,1\) and \(100\)). Panel (_a_) refers to \(We=1.5\), while panel (_b_) to \(We=3.0\). The sub-panels are arranged in a plot using \(\rho_{r}\) as \(x\)-coordinate and \(\eta_{r}\) as \(y\)-coordinate. The effect of density can be appreciated in the sequence of panels on the middle row, while that of viscosity in the right column. The background of the plot shows the turbulent kinetic energy, TKE= \((\rho/\rho_{c})(u^{2}+v^{\prime 2}+w^{\prime 2})/2\) (white-low; black-high), computed on the central \(x^{+}-y^{+}\) plane of the channel. iso-contour \(\phi=0\), is reported at the time instant \(t^{+}=4000\) (statistically-stationary conditions); in the background the contour map of the turbulent kinetic energy, TKE\(=(\rho/\rho_{c})(u^{\prime 2}+v^{\prime 2}+w^{\prime 2})/2\) (where \(\rho\) identifies the local density value, \(\rho_{d}\) in the bubbles and \(\rho_{c}\) in the carrier phase), on a \(x^{+}-y^{+}\) plane located at the channel centre is shown. Among all cases, we select those with the extreme values of the density (\(\rho_{r}=0.001\) - \(\eta_{r}=1\)) and viscosity ratio (\(\rho_{r}=1\) - \(\eta_{r}=100\) and \(\rho_{r}=1\) - \(\eta_{r}=0.01\)). As a reference, also the matched density and viscosity case (\(\rho_{r}=1\) - \(\eta_{r}=1\)) is shown. We can observe that for \(We=1.5\) (figure 1_a_), the number of bubbles remains almost unchanged when both density and viscosity contrasts are introduced in the system. For \(We=3.0\) (figure 1_b_), the number of bubbles is higher in all the cases, compared to \(We=1.5\). If we then look along the density axis (namely to the pictures in the central row) of figure 1\(b\), we see that the number of bubbles is quite similar in the two cases, suggesting a negligible effect of density for the range of values considered here. By opposite, looking along the viscosity axis (thus to the pictures on the right column), we notice that viscosity does play an important role, as the number of bubbles significantly reduces from \(\eta_{r}=0.01\) to \(\eta_{r}=100\), with a more marked difference between \(\eta_{r}=1\) and \(\eta_{r}=100\), than between \(\eta_{r}=0.01\) and \(\eta_{r}=1\), thus hinting that the viscosity difference among the phases may actually be the relevant factor, rather than the viscosity ratio. To evaluate these results more quantitatively, we compute at each time the number of bubbles, \(N(t^{+})\), normalized by the initial bubbles number, \(N_{0}\). Figure 2 shows the results obtained for all the combination of density and viscosity ratios considered, and for the two Weber numbers as well. Left column refers to \(We=1.5\) (figure 2\(a\),_c_,_e_), while the right column to \(We=3.0\) (figure 2\(b\),_d_,_f_). The top, middle and bottom rows show, in order, the effects of the density ratio, viscosity ratio and of their combination. We start by analyzing the effect of Weber number solely and we consider the matched density and viscosity case (black lines in figure 2_a_-_d_). For \(We=1.5\), the number of bubbles decreases monotonically: coalescence events dominate the initial transient phase (up to \(t^{+}=2000\)). Then a balance between breakage and coalescence events is attained and the number of bubbles settles on a stationary value, \(N(t^{+})/N_{0}\simeq 0.1\). 
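The number of bubbles \(N(t^{+})\) can be extracted from a stored phase-field snapshot by counting the connected regions where \(\phi\geq 0\); a simplified post-processing sketch based on connected-component labelling, with labels merged across the periodic streamwise and spanwise boundaries, is shown below. The snapshot used here is a made-up field, and the counting strategy is only an assumption about how such post-processing might be implemented, not a description of the exact tool used for the paper.

```python
import numpy as np
from scipy import ndimage

def count_bubbles(phi):
    """Count connected phi >= 0 regions, merging labels across periodic x and y."""
    labels, n = ndimage.label(phi >= 0.0)
    parent = {i: i for i in range(1, n + 1)}
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    for axis in (0, 1):                     # x and y are periodic, z is wall-bounded
        lo = np.take(labels, 0, axis=axis).ravel()
        hi = np.take(labels, -1, axis=axis).ravel()
        for a, b in zip(lo, hi):
            if a > 0 and b > 0 and find(a) != find(b):
                parent[find(a)] = find(b)
    return len({find(i) for i in range(1, n + 1)})

# Illustrative usage on a made-up snapshot containing two spherical bubbles.
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
X, Y, Z = np.meshgrid(x, x, np.linspace(-1.0, 1.0, 33), indexing="ij")
phi = np.maximum(0.6 - np.sqrt((X - 2.0) ** 2 + (Y - 2.0) ** 2 + Z**2),
                 0.6 - np.sqrt((X - 5.0) ** 2 + (Y - 4.0) ** 2 + Z**2))
print("N =", count_bubbles(np.tanh(phi / 0.05)))    # expected output: N = 2
```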
Likewise, for \(We=3.0\), an initial transient mainly characterized by coalescence events can be also observed. However, this phase ends at an earlier time (about \(t^{+}=500\)) and is followed by a statistically-stationary condition where breakups and coalescences alternately prevail on each other. Comparing simultaneously the plots at \(We=1.5\) (figure 2\(a\),_c_,_e_), we can observe that the effects of both density and viscosity ratios (and of their combination) are very small. This behavior can be traced back to the dominant role played by surface tension forces. The Weber number quantifies the relative importance of surface tension forces with respect \begin{table} \begin{tabular}{c c c c c c c} System & \(Re_{r}\) & \(We\) & \(\eta_{r}\) & \(\rho_{r}\) & \(Ch\) & \(Pe\) \\ \hline Single-phase & 300 & - & - & - & - & - \\ \hline Bubbles-laden & 300 & 1.50 & 0.01 & 1.000 & 0.02 & 50 \\ Bubbles-laden & 300 & 1.50 & 0.10 & 1.000 & 0.02 & 50 \\ Bubbles-laden & 300 & 1.50 & 1.00 & 1.000 & 0.02 & 50 \\ Bubbles-laden & 300 & 1.50 & 10.0 & 1.000 & 0.02 & 50 \\ Bubbles-laden & 300 & 1.50 & 100. & 1.000 & 0.02 & 50 \\ Bubbles-laden & 300 & 1.50 & 100. & 1.000 & 0.02 & 50 \\ Bubbles-laden & 300 & 1.50 & 1.00 & 0.010 & 0.02 & 50 \\ Bubbles-laden & 300 & 1.50 & 1.00 & 0.100 & 0.02 & 50 \\ Bubbles-laden & 300 & 1.50 & 0.10 & 0.100 & 0.02 & 50 \\ Bubbles-laden & 300 & 1.50 & 0.10 & 0.100 & 0.02 & 50 \\ \hline Bubbles-laden & 300 & 3.00 & 0.01 & 1.000 & 0.02 & 50 \\ Bubbles-laden & 300 & 3.00 & 0.10 & 1.000 & 0.02 & 50 \\ Bubbles-laden & 300 & 3.00 & 100. & 0.010 & 0.02 & 50 \\ Bubbles-laden & 300 & 3.00 & 1.00 & 0.010 & 0.02 & 50 \\ Bubbles-laden & 300 & 3.00 & 1.00 & 0.100 & 0.02 & 50 \\ Bubbles-laden & 300 & 3.00 & 0.10 & 0.100 & 0.02 & 50 \\ \end{tabular} \end{table} Table 2: Overview of simulations parameters. Wa analyze two Weber numbers: \(We=1.50\) and \(We=3.00\). For each Weber number, we consider four density ratios: from \(\rho_{r}=0.001\) up to \(\rho_{r}=1.000\); five viscosity ratios: from \(\eta_{r}=0.01\) up to \(\eta_{r}=100\) and a combined case \(\rho_{r}=0.1\) and \(\eta_{r}=0.1\). In addition, a single-phase flow simulation has been also conducted. Figure 2: Time evolution of the number of bubbles, \(N(t^{+})\), normalized by its initial value \(N_{0}\). Left column refers to \(We=1.5\), while the right column to \(We=3.0\). Top row: effect of density ratio, for \(\rho_{r}=0.001,0.01,0.1\) and \(1\) (with \(\eta_{r}=1\)); Middle row: effect of viscosity ratio, for \(\eta_{r}=0.01,0.1,1,10\) and \(100\) (with \(\rho_{r}=1\)); Bottom row: combined effect of density and viscosity, for the case with \(\rho_{r}=0.1\), the cases \(\rho_{r}=0.1\), \(\eta_{r}=1\) and \(\rho_{r}=1\), \(\eta_{r}=0.1\) are reported for reference. On each line the left plot also includes the color code and a sketch with the definition of the property ratio considered (\(\rho_{r}\), \(\eta_{r}\) or both ratios). to inertial forces: the lower is the Weber number, the stronger is the action of surface tension in controlling bubbles dynamics. Thus, for \(We=1.5\), the surface tension forces are dominant and are those determining the topology of the dispersed phase (i.e. number of bubbles). For the higher Weber, surface tension forces are weaker in comparison, and density and viscosity ratios effects become more significant. In particular, for \(We=3.0\) (figure 2\(b\),_d_f_), the statistically-stationary value obtained for the number of bubbles shows a marked dependence on the viscosity ratio. 
As the dispersed phase dynamics for the cases at \(We=1.5\) are dominated by surface tension forces, we focus on the cases at \(We=3.0\) to investigate the effects of density and/or viscosity ratios. First, we consider the effects of the density ratio solely. Figure 2\(b\) shows the time evolution of the number of bubbles for different density ratios (from \(\rho_{r}=1.0\) down to \(\rho_{r}=0.001\)) and a fixed unitary viscosity ratio. We notice that the influence of the density ratio on the number of bubbles is small: the red-colors lines do not depart in average from the black reference line, nor from each other. Hence, no significant modifications are introduced in the topology of the dispersed phase when density contrasts are present between the phases (with respect to a two-phase system with uniform density). This behavior suggests that, for the range of density ratios considered, the external inertial forcing is the main factor that determines the bubble size and thus the dispersed phase topology. In contrast, the density (and thus the inertia) of the bubble plays a negligible role in determining the dispersed phase topology. On the other hand a marked effect of the viscosity ratio alone can be observed, figure 2\(d\). We observe in this case a much clearer trend: after the initial transient the curves depart from each other and set on different equilibrium values once statistically-stationary conditions are reached. In particular, as the viscosity ratio is increased, the statistically-stationary number of bubbles is reduced. For high viscosity ratio (\(\eta_{r}>1\)) fragmentation is prevented, coalescence dominates and only a few bubbles are present in the channel. By opposite, for low viscosity ratio (\(\eta_{r}<1\)) breakups are favored, the average bubble size decreases, and the resulting number of bubbles is slightly larger when smaller viscosity ratios are considered. Hence, it is evident that viscosity acts as a stabilizing factor, in a similar way as surface tension does. Indeed, it is interesting to observe that the behavior of the number of bubbles for \(\eta_{r}=100\) at \(We=3.0\) (high viscosity) resembles those of the cases at \(We=1.5\) (high surface tension, figure 2_c_). This suggests that a very high viscosity ratio can compensate a low surface tension and produce similar results in terms of topology. A physical argument that can explain the action of viscosity is related to the deformations that the external turbulent flow is able to induce on the bubble. When the internal viscosity is larger than the external one, the larger internal viscous dissipation damps all the turbulent fluctuations produced by the external flow. This hinders large deformations of the bubble surface and, as a consequence, it reduces the possibility of bubble breakage. Finally, we analyze the combined effects of density and viscosity ratios. In figure 2\(f\), we report the results obtained from the case \(\rho_{r}=0.1\) and \(\eta_{r}=0.1\) and from two cases with one matched property and one non-matched property, \(\rho_{r}=0.1\) and \(\eta_{r}=1\) (red line) and \(\rho_{r}=1\) and \(\eta_{r}=0.1\) (blue line). We can first note that these two latter cases, where only one property is non-matched, exhibit a very similar behavior for the entire duration of the simulation. This is consistent with our previous observation: the influence of the density ratio is almost negligible (figure 2_b_) and the effects of the viscosity ratio are relatively small for \(\eta_{r}=0.1\) (figure 2_d_). 
Then, we observe that the combined case (green line) does not deviate largely from the other two cases. This indicates that a simultaneous reduction of the density and viscosity ratios does not remarkably modify the general picture for the range of density and viscosity ratios here tested. Nevertheless, it is interesting to observe that the green line lies above the red and blue lines for a longer timespan, indicating that the number of bubbles for the combined case is slightly higher than in the other two cases.

#### iii.2.2 Breakage and coalescence rates

The evolution of the number of bubbles provides useful insights into the time behavior of the dispersed phase topology, although it only shows the net outcome of the competition between breakage and coalescence events. To evaluate whether density and viscosity differences among the phases affect breakage and coalescence dynamics, we compute the instantaneous number of breakage and coalescence events. Evaluating these effects is not only crucial to better understand the involved physics, but is also extremely important for the development of accurate coalescence and breakage kernels [45]. The time behavior of breakage and coalescence is directly linked to the number of bubbles present in the channel, as hinted by the population balance equation [46]: \[\frac{dN(t^{+})}{dt^{+}}=\dot{N}_{b}(t^{+})-\dot{N}_{c}(t^{+})\;, \tag{19}\] where \(N(t^{+})\) is the number of bubbles and \(\dot{N}_{b}(t^{+})\) and \(\dot{N}_{c}(t^{+})\) are respectively the breakage and coalescence rates. We compute the breakage and coalescence rates by counting the number of breakage or coalescence events that occur within a set temporal window \(\Delta t^{+}\) (see Appendix A for details): \[\dot{N}_{b}(t^{+})=\frac{N_{b}}{\Delta t^{+}},\hskip 28.452756pt\dot{N}_{c}(t^{+})=\frac{N_{c}}{\Delta t^{+}}\,, \tag{20}\] where the temporal window has been chosen equal to \(\Delta t^{+}=300\). As the number of breakage and coalescence events that occur in a certain temporal window is also influenced by the number of bubbles present in the channel [17], we normalize the breakage and coalescence rates, \(\dot{N}_{b}(t^{+})\) and \(\dot{N}_{c}(t^{+})\), by the instantaneous number of bubbles \(N(t^{+})\). Since the description of coalescence and breakage events in numerical simulations is influenced by grid resolution [3; 6; 17; 28], a convergence study has also been performed to ensure that the grid employed is sufficient to obtain convergent results; please refer to Appendix B for details. Figure 3 shows the results obtained for all cases examined: the breakage rate is plotted over time as a positive quantity, while the coalescence rate as a negative quantity, as they are related to an increase and a decrease of the number of bubbles, respectively. We will first discuss the effect of the Weber number by comparing the left column (figure 3\(a\),_c_,_e_) with the right column (figure 3\(b\),_d_,_f_). For \(We=1.5\) (left column), the breakage and coalescence rates behave nearly in the same way for all the combinations of density and viscosity ratios. After the initial transient where the behavior of the rates is influenced by the selected initial condition for the phase-field, an equilibrium is reached at about \(t^{+}=1000\) where both rates settle on a constant value. At this stage, bubbles keep on breaking and coalescing, but with the same rate, thus maintaining their number in statistical equilibrium.
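A minimal sketch of how the rates in equation (20) can be assembled from a list of detected event times is given below; the event times and the bubble-count series are placeholders, not data extracted from the simulations (the actual detection procedure is described in Appendix A).

```python
import numpy as np

def event_rate(event_times, t_bins):
    """Events per unit time in each window of width Delta t^+ (equation 20)."""
    counts, _ = np.histogram(event_times, bins=t_bins)
    return counts / np.diff(t_bins)

dt_plus = 300.0
t_bins = np.arange(0.0, 4000.0 + dt_plus, dt_plus)

# Placeholder inputs: detected event times (wall units) and bubble count per window.
breakage_times = np.array([120.0, 250.0, 400.0, 410.0, 900.0, 2500.0])
coalescence_times = np.array([80.0, 90.0, 150.0, 600.0, 610.0, 3000.0])
N_of_t = np.full(len(t_bins) - 1, 25.0)

Nb = event_rate(breakage_times, t_bins) / N_of_t      # normalized breakage rate
Nc = event_rate(coalescence_times, t_bins) / N_of_t   # normalized coalescence rate
print("normalized breakage rate per window:", Nb)
print("normalized coalescence rate per window:", Nc)
```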
This value of the Weber number does not allow density and viscosity contrasts to substantially modify the evolution of bubbles topology, as a good correspondence among the curves can be noticed in all the plots. Indeed, when a low Weber number is considered the deformability, which is a crucial factor for coalescence and breakage events, is mainly determined by surface tension forces that dominate over density and viscosity contributions. For \(We=3.0\) (right column), the results are qualitatively and quantitatively different: breakage and coalescence rates reach in general larger values, and some significant deviations among the curves are visible. This is a direct consequence of the larger Weber number: surface tension forces, which are smaller in magnitude, weakly counteract turbulent velocity gradients, that can more easily deform and break the bubbles. Thus, we observe a larger number of breakage and coalescence events due to the larger deformability of the bubbles, as can be appreciated from figure 2\(b\),_d_,\(f\). In addition, for this larger Weber number, we can clearly observe how the density and viscosity ratios play a much more important role in the dynamics of breakage and coalescence events (with respect to \(We=1.5\)). For this reason, we move now to discuss the effect of non-matched density or viscosity on the cases at \(We=3.0\) in more detail. Figure 3\(b\) shows the breakage and coalescence rate for different values of the density ratios. In the first transient phase, all cases manifest a very high frequency of both breakage and coalescence events, slightly larger for coalescences at the very beginning (coherently with the evolution of the number of bubbles shown in figure 2_b_). Later on, both rates stabilize and set on two equal (in magnitude) stationary values. Although a clear trend among the different density ratios cannot be observed, it is worth noticing that all the rates seem slightly larger when sub-unitary density ratios are considered (especially in the early stage of simulations). Overall, these observations suggest that density differences between the phases do not introduce remarkable changes in the dispersed phase topology and on its modifications: the number of bubbles and breakage and coalescence rates are weakly influenced by changing the density ratio. Moving now to the effect of the viscosity ratio, figure 3\(d\) depicts the time evolution of the breakage and coalescence rates obtained for different viscosity ratios (and a fixed unitary density ratio). Again, once the initial transient is finished, a statistically-stationary phase can be distinguished for all cases. From a qualitative viewpoint, coalescence is predominant at the beginning of the transient (consistently with the behavior reported in figure 2_d_); then relatively high values for both rates are maintained during the rest of the transient, until they stabilize on steady values. The cases, however, deeply differ from a quantitative point of view. We see in this case that the rates significantly change when the viscosity ratio is changed: both breakage and coalescence rates decrease in magnitude as the viscosity ratio is increased (i.e. when bubble viscosity is increased). This modification of the breakage and coalescence rates is clear when the case \(\eta_{r}=100\) is considered: the statistically-steady value of both rates is smaller than the one attained by the other cases. 
A similar trend was experimentally measured by Eastwood _et al._[47] for the breakup of immiscible fluid particles in a turbulent jet: it was observed that the breakage rate of the droplets scales inversely with the inner bubble capillary number (ratio between bubble viscous forces and surface tension forces). Present results seem to confirm this finding: bubble viscosity and the corresponding viscous forces, acting as a damper of external velocity fluctuations [16], make bubbles less deformable and the probability of breakage and coalescence decreases. Finally, we discuss the combination of density and viscosity contrasts (figure 3_f_). The three curves do not deviate considerably from each other and a clearcut trend cannot be appreciated. As the density effect is generally unimportant and the viscosity one shall be small for \(\eta_{r}=0.1\), the case \(\rho_{r}=0.1\) - \(\eta_{r}=0.1\) does not give us clear information on how density and viscosity effects combine together. Figure 3: Time evolution of the normalized breakage rate, \(\dot{N}_{b}(t^{+})/N(t^{+})\), and coalescence rate, \(\dot{N}_{c}(t^{+})/N(t^{+})\). Left column refers to \(We=1.5\), while right column to \(We=3.0\). Top row: effect of density ratio, for \(\rho_{r}=0.001,0.01,0.1\) and \(1\) (with \(\eta_{r}=1\)); Middle row: effect of viscosity ratio, for \(\eta_{r}=0.01,0.1,1,10\) and \(100\) (with \(\rho_{r}=1\)); Bottom row: combined effect of density and viscosity ratios, for the case with \(\rho_{r}=0.1\), \(\eta_{r}=0.1\). Cases \(\rho_{r}=0.1\), \(\eta_{r}=1\) and \(\rho_{r}=1\), \(\eta_{r}=0.1\) are reported for reference. For each row of plots, the left plot also shows the color code and a sketch with the definition of the ratio considered (\(\rho_{r}\), \(\eta_{r}\) or both). ### Shape and deformation of bubbles #### iii.2.1 Interfacial area A bubble released in a turbulent flow is constantly subjected to deformations due to the action of turbulent fluctuations [48; 49]. Turbulence fluctuations deform and stretch the bubble and, if strong enough, can lead to breakage of the bubble. The result of turbulence actions in terms of deformation can be evaluated by computing the total interfacial area. This quantity gives a general indication of the average bubble deformation and also provides a quantification of the amount of energy stored at the interface [50; 51; 33]. Indeed, in the hypothesis of constant surface tension (as in the present case), surface tension energy is proportional to the amount of interfacial area available [50; 51; 33]. With the aim of evaluating the effects of the simulations parameters (density ratio, viscosity ratio, and Weber number) on the interfacial energy, we compute the time behavior of total interfacial area, \(A(t^{+})\), for all cases considered. The results are presented normalized by the initial value \(A_{0}\). In figure 4, the results are shown using the same arrangement adopted in the previous figures. To correctly interpret these results, it is necessary to make a preliminary remark. The area of the interface between the dispersed phase and the carrier fluid evolves in time depending on two factors: the evolution of the number of bubbles and the modifications of the shape of the bubbles. This concept can be explained by considering the following example: to have a minimal interface area, the dispersed phase should consist of a unique spherical bubble, since, for a given volume, the spherical shape is the one that minimizes the surface area. 
If we split this bubble into several smaller spherical bubbles the total interface area will increase, being the total volume constant. If these smaller spherical bubbles are then deformed and elongated the area will further increase, as for each bubble the same mass will be redistributed in a way that makes it more exposed to the external flow. Thus, when we look at the evolution of the total interface area we are simultaneously observing the effect of the number of bubbles and of their deformation. We start by analyzing the effects of the density ratio for the cases at \(We=1.5\), figure 4\(a\). We notice an initial transient that is characterized by a nearly monotonic decrease of \(A(t^{+})/A_{0}\), for all the considered cases. In particular, during this transient, the curves corresponding to sub-unitary density ratios are superposed, while a remarkable discrepancy is visible between them and \(\rho_{r}=1\). As soon as the flow reaches a steady behavior, all the curves differentiate and a trend becomes visible, where the higher is the density ratio the larger is the total interface area. Considering that for \(We=1.5\) the number of bubbles is almost unaffected by the density ratio (figure 2_a_), this indicates that the trends observed in figure 4\(a\) are mainly caused by the bubble deformation: when smaller density ratios are considered, bubbles tend to be less deformed with respect to the case \(\rho_{r}=1\). The origin of this behavior can be traced back to the local Reynolds number (i.e. evaluated using the bubble proprieties): as the density ratio is decreased, the inertial forces become smaller, the local Reynolds number decreases and less deformed bubbles are obtained. For \(We=3.0\) (figure 4_b_), we notice a similar but more irregular behavior. For all density ratios, the normalized interfacial area decreases and sets on stationary values that are higher than the final stationary values obtained for \(We=1.5\) (figure 4_a_). This is coherent with the fact that increasing the Weber number, the number of bubbles increases, and so does the interfacial area. For this larger Weber number, the trend among the different density ratios is now less clear and the differences between the curves are slightly smaller. Nevertheless, consistently with the results obtained for \(We=1.5\) (figure 4_a_), the matched density case (\(\rho_{r}=1\)) is clearly above all the other curves (\(\rho_{r}<1\)) for almost the entire time range of the simulations. Being the number of bubbles similar for all the cases shown in figure 4\(b\), this seems to confirm that for smaller density ratios the overall interfacial area is reduced. The viscosity effect can be appreciated in figure 4\(c\),_d_. For \(We=1.5\) (panel _c_), the total interface area is practically independent on the viscosity ratio and no significant changes can be observed. As the number of bubbles is similar for all cases (figure 2_c_), this indicates that no significant effects on the average bubble deformation are observed. Even though bubble viscosity does not play an important role in the average bubble deformation, we can anticipate that it still plays a role when more local quantities are analyzed (e.g. local curvature), see section III.2.2. For \(We=3.0\), a remarkable difference is present between \(\eta_{r}=100\) (larger bubble viscosity) and all the other cases. This is consistent with the time evolution of the number of bubbles (figure 2_d_). 
Indeed, when the statistically-stationary configuration is reached, the number of bubbles for \(\eta_{r}=100\) is much lower than that obtained for the other ratios. As a result, the interfacial area is much lower than the other cases. For the other cases (from \(\eta_{r}=10\) down to \(\eta_{r}=0.01\)), a clear trend cannot be observed thus suggesting that no large modifications of the average bubble deformation are obtained for \(\eta_{r}<10\). However, as already anticipated for \(We=1.5\), larger modifications are observed when local quantities are analyzed, see next section for details. Finally, we discuss the combined effect of density and viscosity ratios (figure 4\(e\),_f_). For \(We=1.5\), the case with both non-matched density and viscosity (green line) overlaps the case with non-matched density (red line) during the transient and in the final steady configuration, while in the first steady part it is intermediate between the two other cases, \(\rho_{r}=0.1\) - \(\eta_{r}=1\) and \(\rho_{r}=1\) - \(\eta_{r}=0.1\). On average the combined case is therefore closer to the non-matched Figure 4: Time evolution of the total interface area \(A(t^{+})\), normalized by its initial value \(A_{0}\). Top row: effect of density, for \(\rho_{r}=0.001,0.01,0.1\) and \(1\) (with \(\eta_{r}=1\)); Middle row: effect of viscosity, for \(\eta_{r}=0.01,0.1,1,10\) and \(100\) (with \(\rho_{r}=1\)); Bottom row: combined effect of density and viscosity, for the case with \(\rho_{r}=0.1\), \(\eta_{r}=0.1\). Cases with \(\rho_{r}=0.1\), \(\eta_{r}=1\) and \(\rho_{r}=1\), \(\eta_{r}=0.1\) are reported for reference. These effects are shown for two different Weber numbers: (_a_)-(_c_)-(_e_) \(We=1.5\) and (_b_)-(_d_)-(_f_) \(We=3.0\). On each row the left plot also includes the colour code and a sketch with the definition of the ratio considered (\(\rho_{r}\), \(\eta_{r}\) or both ratios). density case, suggesting that the density ratio has a larger influence on the total interfacial area (and thus on the stretching of the bubbles) with respect to the viscosity ratio. This is confirmed by the plot for \(We=3.0\), where the green line shows again values that on average are much closer to the non-matched density case (i.e. \(\rho_{r}=0.1\)). #### iv.2.2 Probability density function of mean curvature The evolution of the total interface area gives us an idea of the overall behavior of the average deformation of the bubbles in presence of density and viscosity contrasts. However, being an average indication, it does not provide a clear indication of the local deformations of the bubbles surface. To obtain a deeper understanding of the deformation, we examine the probability density function (PDF) of the local interface mean curvature in the final statistically-stationary configuration. The mean curvature, \(\mathcal{K}^{+}\), can be computed as the divergence of the local normal vector \(\mathbf{n}\), which in turn can be defined from the phase variable \(\phi\)[52, 53]: \[\mathbf{n}=-\frac{\nabla\phi}{|\nabla\phi|},\hskip 28.452756pt\mathcal{K}^{+}= \nabla\cdot\bigg{(}-\frac{\nabla\phi}{|\nabla\phi|}\bigg{)}. \tag{21}\] We compute the mean curvature, \(\mathcal{K}^{+}\), for each point on the surface of the bubbles, corresponding to the points of the iso-level \(\phi=0\). 
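Equation (21) can be evaluated directly on the discrete phase field; the sketch below computes the normal vector and its divergence with centred differences and samples the curvature close to the \(\phi=0\) level, using a made-up spherical bubble as a sanity check. It is only an illustration of the definition: the actual solver evaluates these operators spectrally.

```python
import numpy as np

def mean_curvature(phi, dx):
    """K = div(-grad(phi)/|grad(phi)|), centred differences with periodic wrapping."""
    def d(f, axis):
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * dx)
    gx, gy, gz = d(phi, 0), d(phi, 1), d(phi, 2)
    norm = np.sqrt(gx**2 + gy**2 + gz**2) + 1.0e-12     # avoid division by zero
    nx, ny, nz = -gx / norm, -gy / norm, -gz / norm
    return d(nx, 0) + d(ny, 1) + d(nz, 2)

# Illustrative check on a sphere of radius 0.5: K should be close to 2/r = 4.
n = 96
x = np.linspace(-1.0, 1.0, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)
phi = np.tanh((0.5 - r) / 0.05)
K = mean_curvature(phi, dx=x[1] - x[0])
near_interface = np.abs(phi) < 0.2                      # sample close to phi = 0
print("mean K near the interface:", K[near_interface].mean())
```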
The resulting curvature values tell us how much the bubbles deviate from their spherical equilibrium shape, giving rise to small bumps and ripples in the surface when \(\mathcal{K}^{+}\) is highly positive, or small dimples when \(\mathcal{K}^{+}\) is highly negative. From figure 5, we can appreciate the effect of density and viscosity on the mean curvature from a qualitative point of view. The figure shows for \(We=1.5\) (figure 5_a_) and \(We=3.0\) (figure 5_b_) four top views of the statistically-stationary configurations of the system. Bubbles are colored according to the local value of the mean curvature (blue-low; red-high). Red areas correspond to bumps and ripples of the interface (positive curvatures), while blue areas to dimples (negative curvatures). For \(We=1.5\) (figure 5_a_), the effect of the density ratio can be observed by looking at the horizontal sequence of pictures (central row): we notice that moving from \(\rho_{r}=1\) down to \(\rho_{r}=0.001\) there is a slight decrease in the extension of both red and blue saturated regions, which correspond to very high and very low curvatures respectively. Therefore a reduction of the density ratio (i.e. a decrease of bubble density), leads to a smoother bubble surface, characterized by fewer ripples and dimples. In the vertical sequence of pictures on the right column, we can appreciate the effect of viscosity. We notice that the shape of the bubbles is qualitatively unchanged increasing the viscosity from \(\eta_{r}=0.01\) to \(\eta_{r}=1\). However, from \(\eta_{r}=1\) to \(\eta_{r}=100\) the shape changes remarkably: the irregularities that characterize the bubbles surface at \(\eta_{r}=1\) disappear completely at \(\eta_{r}=100\), where the surface becomes very smooth and the bubbles shape very closely resembles the spherical shape. Thus, the action of viscosity seems opposite to the one of density in terms of local deformation of the bubble surface: an increase of viscosity prevents the formation of high curvatures values (in magnitude), while an increase of density promotes the formation of large interface deformations. The two opposite trends obtained increasing the density or viscosity ratios can be interpreted in terms of local Reynolds or capillary numbers (i.e. evaluated using the bubble proprieties). An increase of the density ratio leads to an increase of the local Reynolds number and as a consequence, a more irregular surface of the bubbles is obtained. In contrast, an increase of the viscosity ratio, produces a decrease of the local Reynolds number (which also corresponds to an increase of the capillary number) and a smoother surface of the bubbles is attained. Interestingly, the entity of these effects depends on the value of the ratio considered: a slight effect of the density ratio can be observed when it is decreased of three orders of magnitude (from \(\rho_{r}=1\) down to \(\rho_{r}=0.001\)), as well as for the viscosity ratio when reduced by two orders of magnitude (from \(\eta_{r}=1\) down to \(\eta_{r}=0.01\)), while a more noticeable difference is visible when it is increased of two orders of magnitude (from \(\eta_{r}=1\) up to \(\eta_{r}=100\)). Similar considerations can be obtained from the qualitative results obtained at \(We=3.0\) (figure 5_b_). In this case, we can qualitatively appreciate similar effects for the density and viscosity ratios. These modifications, however, are now reflected on a much larger number of bubbles (larger Weber number). 
To confirm these first qualitative observations, we compute the probability density function (PDF) of the mean curvature. Results are reported in figure 6 for different combinations of the density ratio, viscosity ratio, and Weber number. Left column (figure 6\(a\),_c_,_e_) refers to \(We=1.5\), while right column (figure 6\(b\),_d_,_f_) to \(We=3.0\). Before analyzing each curve in detail, we can do some general observations. All curves are centered on a positive value of curvature and present an asymmetry with respect to the null value. Since positive curvatures correspond to convex surfaces and the null curvature corresponds to a flat surface, this is consistent with the fact that bubbles are in average convex, considering an outwards normal vector. Then, comparing the results shown in the left column (cases at \(We=1.5\)) against those reported in the right column (cases at \(We=3.0\)), we can appreciate the effect of the Weber number: for \(We=3.0\) the curves are extended on a wider range of curvature values with respect to \(We=1.5\). In Figure 5: Top view of the mean curvature of the bubble surface, \(\mathcal{K}^{+}\), for four different combinations of density ratios (\(\rho_{r}=0.001\) and \(1\)) and viscosity ratios (\(\eta_{r}=0.01,1\) and \(100\)) once a statistically-stationary configuration is reached (\(t^{+}=4000\)). Panel (_a_) refers to \(We=1.5\) while panel (_b_) to \(We=3.0\). The sub-panels are arranged in a plot using \(\rho_{r}\) as \(x\)-coordinate and \(\eta_{r}\) as \(y\)-coordinate. The effect of density can be appreciated in the sequence of panels on the middle row, while that of viscosity in the right column. Bubble surface (iso-level \(\phi=0\)) is colored according to the local value of the mean curvature (low-blue; high-red). Figure 6: Probability density function of the mean curvature, \(\mathcal{K}^{+}\). Left column refers to \(We=1.5\), while right column to \(We=3.0\). Effect of density ratio can be appreciated on the top row for \(\rho_{r}=0.001,0.01,0.1\) and \(1\) (with \(\eta_{r}=1\)). The effect of bubble viscosity can be observed in the middle row for \(\eta_{r}=0.01,0.1,1,10\) and \(100\) (with \(\rho_{r}=1\)). Finally, the combined effect of the density and viscosity ratio is shown on the bottom row for the case with \(\rho_{r}=0.1\), \(\eta_{r}=0.1\), with respect to the cases where a single effect is considered (with \(\rho_{r}=0.1\), \(\eta_{r}=1\) and \(\rho_{r}=1\), \(\eta_{r}=0.1\)). particular, the curves are extended slightly towards negative values and considerably towards positive values, meaning that a higher Weber leads to a higher probability of having irregularities in the surface of the bubbles, especially bump or ripples-like irregularities. The higher probability of having large curvature values is also due to the presence of many smaller bubbles, which are intrinsically more convex (smaller radius) and closer to a spherical shape. We study now the effects of the density ratio (figure 6_a,b_). We notice a trend for \(We=1.5\) that becomes clearer for \(We=3.0\): the cases with \(\rho_{r}=0.1,0.01,0.001\) present a lower probability of having large curvatures (in magnitude) with respect to \(\rho_{r}=1\). This effect is small for positive curvatures and more pronounced for negative curvatures. We can also observe that while the discrepancy between the reference case (\(\rho_{r}=1\)) and all other cases is clear, there is almost no difference among the cases \(\rho_{r}=0.1,0.01\), and \(0.001\). 
Interestingly, a similar trend was also reported in a previous work [54] that investigated the rise of bubbles in a quiescent liquid. In particular, Cano-Lozano _et al._[54] reported that for density ratios smaller than \(0.128\), a further decrease of the density ratio does not produce significant changes in the shape of the bubbles. This seems to suggest that the modifications produced by the density with respect to the case with \(\rho_{r}=1\) (matched density case) are likely to be proportional to the density difference between the two phases (i.e. \(\rho_{c}-\rho_{d}\)) rather than to their ratio (i.e. \(\rho_{d}/\rho_{c}\)). Further simulations, which consider super-unitary density ratios, are however required to confirm this indication. Overall, present results (figure 4) indicate that when sub-unitary density ratios are considered, the probability of having large curvature values, especially negative ones, and very stretched bubbles decreases. In other words, when the density of the bubbles is decreased with respect to the carrier density, it becomes more difficult for turbulence fluctuations to locally deform and stretch the bubbles, and in particular to create dimples and concave areas. A possible physical mechanism that supports the present observations is the following: when an external perturbation reaches the deformable interface of a bubble, the bubble surface is modified and the perturbation then propagates to the fluid inside the bubble. As the bubble density is reduced, however, the propagation of this perturbation to the bubble fluid, and thus to the rest of the bubble interface, becomes less effective. Indeed, the inertia of the perturbation is modulated by the smaller bubble density and thus the magnitude of the inertial forces is reduced. As a result, viscous and surface tension forces increase their relative importance with respect to inertial forces, and the resulting bubble deformation is reduced. This behavior can also be justified considering the dispersed-phase Reynolds number, i.e. the Reynolds number evaluated considering the dispersed-phase density. As the bubble density is reduced, so does the dispersed-phase Reynolds number, and the bubbles become less deformable and distorted, as can also be graphically appreciated from figure 1 comparing the case \(\rho_{r}=0.001\) (orange bubbles) against the case \(\rho_{r}=1.000\) (white bubbles). To evaluate the influence of the viscosity, we consider figure 6_c,d_. A trend can be distinguished for both Weber numbers: the PDFs become narrower as the viscosity increases. More specifically, the largest effect can be seen for \(\eta_{r}=100\), where the range of possible curvatures is significantly reduced. The shrinkage of the PDF is weaker but still evident for \(\eta_{r}=10\), and it becomes almost negligible for \(\eta_{r}=0.1\) and \(\eta_{r}=0.01\). Unlike for density, the impact of viscosity is important for \(\eta_{r}=100\) and \(\eta_{r}=10\), while it becomes less important for \(\eta_{r}=0.1\) and \(\eta_{r}=0.01\). Indeed, for these two latter cases, no significant modifications can be appreciated for either Weber number. Finally, the combined effects of the density and viscosity ratio can be evaluated from figure 6_e,f_. Interestingly, we observe that when both the density and viscosity ratios are decreased, the resulting PDF of the mean curvature lies in between the case \(\rho_{r}=0.1\) (and matched viscosity) and the case \(\eta_{r}=0.1\) (and matched density).
This intermediate behavior can be traced back to the two opposite actions of density and viscosity on the mean curvature of the bubble surface: while a decrease of the bubble density (i.e. of the density ratio) makes the bubble surface more rigid and thus smoother, when the bubble viscosity is decreased the bubbles become more deformable and ripples or dimples can be more easily formed on the interface. Thus, when we combine these two effects, the two actions balance out and we obtain an intermediate trend. This result is already visible for \(We=1.5\) and becomes clearer for \(We=3.0\) where, thanks to the higher number of bubbles, a smoother statistic is obtained.

### Flow modifications

#### iv.3.1 Mean velocity profiles

Having detailed the evolution of the dispersed phase topology, its modifications and the deformation and curvature of the bubbles, we now move on to analyze the flow modifications produced by the bubbles. We start by analyzing the macroscopic behavior of the multiphase mixture, in terms of flow-rate and mean flow statistics. In particular, we investigate the wall-normal behavior of the mean velocity profiles of the multiphase flow, and we compare them with the single-phase flow statistics at the same \(Re_{\tau}=300\). Indeed, we aim at understanding whether the injection of bubbles in a turbulent flow induces modifications to the mean velocity profile, especially when density or viscosity contrasts are present between the two phases. This aspect is widely studied and a common question that persists in the field concerns the capability of bubbles to generate drag reduction [55; 56; 57; 58; 15].

Figure 7: Wall-normal behavior of the streamwise mean velocity profiles. The left column refers to \(We=1.5\), while the right column to \(We=3.0\). Density ratio effects are shown on the top row for \(\rho_{r}=0.001,0.01,0.1,1\). Viscosity ratio effects are shown on the middle row for \(\eta_{r}=0.01,0.1,1,10,100\). Finally, the combined effect of the density and viscosity ratios is shown on the bottom row for the case \(\rho_{r}=0.1\) and \(\eta_{r}=0.1\), with respect to the cases where only one effect is considered. As a reference, the classical law of the wall, \(u^{+}=z^{+}\) and \(u^{+}=(1/k)\log z^{+}+5\) (with \(k=0.41\) the von Kármán constant), is also reported with a dashed line. For all cases, with respect to single-phase, we observe a minor increase of the mean velocity.

Figure 7 shows the wall-normal behavior of the mean velocity profiles, computed by averaging the streamwise velocity along the streamwise and spanwise directions in the entire domain (both dispersed and carrier phase). The results are illustrated for all combinations of density and viscosity ratios considered, following the same arrangement of the previously presented statistics. In addition, the velocity profile relative to the single-phase case is shown with a black dashed line, and the classical law of the wall, \(u^{+}=z^{+}\) and \(u^{+}=(1/\kappa)\ln z^{+}+5.2\)[59], is reported as a reference (with \(\kappa=0.41\) the von Kármán constant [60]). We observe that in all the plots the velocity profiles collapse onto each other in the vicinity of the wall, while tiny deviations can be observed in the central part of the channel, where most bubbles are located. In particular, in the core region of the channel, no differences can be appreciated varying the density and viscosity ratios.
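For reference, the two law-of-the-wall branches used in figure 7 can be generated directly from the constants quoted above. This is a sketch of the reference curves only, not of the DNS averaging procedure, and the array names are illustrative.

```python
import numpy as np

kappa_vk = 0.41   # von Kármán constant (value quoted in the text)
b_log = 5.2       # additive constant of the log law (value quoted in the text)

z_plus = np.logspace(-1, np.log10(300.0), 200)      # wall units, up to h+ = 300
u_viscous = z_plus                                   # viscous sublayer: u+ = z+
u_log = (1.0 / kappa_vk) * np.log(z_plus) + b_log    # logarithmic region

# The DNS mean velocity profiles of figure 7 are overlaid on these two curves.
```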
However, all multiphase cases are characterized by a slightly greater velocity with respect to the single-phase case. As in our simulations a constant mean pressure gradient is used to drive the flow, the observed flow-rate enhancement corresponds to a slight drag reduction. The drag reduction we observe is rather low in all the simulated cases (roughly 1 to 2%), and the current results suggest that the presence of density and viscosity contrasts between the phases does not visibly impact it. These results are in agreement with previous works [61, 20], which found that drag depends significantly on the bubble size. Specifically, they observe that large and deformable bubbles (obtained by allowing bubbles to coalesce) migrate towards the central part of the channel and do not influence the drag significantly [12, 13, 55, 62]. By contrast, smaller bubbles (obtained by preventing bubble coalescence) move towards the near-wall region and lead to an increase of the drag [12, 13, 55, 62]. To support this argument, we can consider figure 8, which shows the scatter plot of the wall-normal location of each bubble over its equivalent diameter. Panel \(a\) refers to \(We=1.5\) while panel \(b\) to \(We=3.0\). The bottom and top walls are located at \(z^{+}=0\)\(w.u.\) and \(z^{+}=600\)\(w.u.\). Two black dashed lines identify the critical condition for which the upper (or lower) part of the bubble interface intercepts the top (or bottom) wall. From a mathematical point of view, this condition can be identified by imposing: \[z_{b}^{+}=d_{eq}^{+}/2\,, \tag{22}\] where \(z_{b}^{+}\) is the distance of the center of mass of the bubble from the closer wall, which can be computed as follows: \[z_{b}^{+}=\min(z_{i}^{+},2h^{+}-z_{i}^{+})\,, \tag{23}\] where \(z_{i}^{+}\) is the wall-normal location of the \(i\)-th bubble and \(h^{+}=300\)\(w.u.\) is the channel half-height in wall units. Hence, the equations that identify these conditions are: \[z^{+}=d_{eq}^{+}/2\,,\qquad z^{+}=2h^{+}-d_{eq}^{+}/2\,. \tag{24}\] Analyzing the dispersion of the bubbles along the wall-normal direction, we can confirm the previous intuitions: smaller bubbles tend to disperse along the entire height of the channel and can get rather close to the two walls while, by contrast, larger bubbles tend to accumulate at the center of the channel and stay farther away from the two walls. It is worth pointing out that, although a few points are located above (or below) the two black dashed lines (i.e. in the gray region), no collisions with the walls are detected. Instead, these points represent bubbles elongated along the streamwise or spanwise directions and thus with a larger \(d_{eq}^{+}\) with respect to their actual wall-normal size. Overall, the results presented in figure 7, corroborated by those reported in figure 8, seem to confirm the idea that bubble deformability is a crucial parameter for obtaining drag reduction [12, 56, 58, 63, 20]. Indeed, bubble deformability plays a central role in determining the preferential distribution of the bubbles [12, 32], which is directly linked to drag reduction [15, 58].
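A short sketch of the wall-interception criterion of equations (22)-(24), used to draw the dashed lines in figure 8, is given below. The bubble positions and equivalent diameters are invented for the example, and the function name is illustrative.

```python
import numpy as np

H_PLUS = 300.0   # channel half-height in wall units

def intercepts_wall(z_center, d_eq, h=H_PLUS):
    """True when a perfectly spherical bubble of equivalent diameter d_eq,
    centred at wall-normal position z_center, would touch the closer wall."""
    z_b = np.minimum(z_center, 2.0 * h - z_center)   # distance from the closer wall
    return z_b < 0.5 * d_eq

# Illustrative usage with made-up positions and diameters (wall units).
z_centers = np.array([20.0, 150.0, 590.0])
d_eq = np.array([50.0, 80.0, 30.0])
print(intercepts_wall(z_centers, d_eq))   # -> [ True False  True]
```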
#### iv.3.2 Turbulent Kinetic Energy (TKE) of bubbles

After having analyzed the flow field in terms of mean velocity, we focus on the turbulence behavior inside the bubbles. The characterization of the flow inside the bubbles is of paramount importance in many applications. Indeed, internal circulation controls the transport of heat, mass, momentum and chemical species through the interface [64, 65], the motion and deformation of the bubbles [66, 67] and the particle removal efficiency in scrubbing processes [68, 69]. To characterize the mixing and flow behavior in the dispersed phase, we consider the turbulent kinetic energy (TKE) inside the bubbles. As in the carrier phase no significant modifications of the mean velocity profile (figure 7) and of the turbulence statistics are observed, larger modifications are expected in the dispersed phase: the flow inside the bubbles is confined by a deformable interface and continuously forced by the external carrier flow. In addition, the fluid properties (density and viscosity) are different. As a result, the magnitude of inertial and viscous forces is changed, as well as the local Reynolds and Weber numbers (i.e. evaluated using the dispersed phase properties). To give a first qualitative idea of these modifications, we can consider the specific turbulent kinetic energy, TKE, whose definition is here recalled: \[\text{TKE}=\frac{\rho}{\rho_{c}}\frac{(u^{\prime 2}+v^{\prime 2}+w^{\prime 2})}{2}\,, \tag{25}\] where \(\rho\) is the local density (\(\rho_{d}\) in the bubbles and \(\rho_{c}\) in the carrier phase). Figure 9 shows the turbulent kinetic energy for two different simulations: panel (_a_) refers to the case with \(\rho_{r}=0.01\) and matched viscosity and panel (_b_) to the case with \(\eta_{r}=0.01\) and matched density. Both panels refer to the higher Weber number analyzed (\(We=3.0\)) and to the time instant \(t^{+}=4000\), when for both cases a statistically-stationary configuration is attained. The two snapshots illustrate with a white-black scale the contour map of TKE on an \(x^{+}-y^{+}\) plane located at the channel center (\(z^{+}=0\)). The interface of the bubbles is marked with a thin white line. We notice that the flow structures in the carrier phase are qualitatively similar in the two pictures, while inside the bubbles the contour maps of TKE look very different: for \(\rho_{r}=0.01\) and \(\eta_{r}=1\) (panel _a_), low values of TKE are found inside the bubbles. In evaluating the results presented in panel \(a\), however, one important observation must be made: although the energy content of the bubbles is rather low, velocity fluctuations are still present inside the bubbles. Indeed, the low values of TKE in the bubbles obtained for the case \(\rho_{r}=0.01\) and \(\eta_{r}=1\) are due to the low density that characterizes the bubbles: the prefactor \(\rho/\rho_{c}\) present in the definition of TKE reduces the values obtained inside the bubbles. Shifting our focus to the case \(\eta_{r}=0.01\) and \(\rho_{r}=1\) (panel _b_), we can appreciate here the presence of many vortical structures characterized by an energy content similar to that of the carrier phase. Interestingly, the characteristic length scale of these turbulence structures is much smaller than that of the carrier phase. This observation can be traced back to the smaller viscosity of the dispersed phase that results in a larger local Reynolds number, as also observed in other multiphase flow instances [51; 70]. Turbulence inside the bubbles is the mechanism that can increase or decrease the transfer rates across the interface [71; 54].
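A compact sketch of how the specific TKE field of equation (25) can be evaluated on the DNS grid is given below. This is an illustration only: the array names and layout are assumptions (fields stored as (x, y, z) arrays), and the fluctuations are taken with respect to the plane average at each wall-normal location, a common choice for channel flow that is not spelled out in the text.

```python
import numpy as np

def specific_tke(u, v, w, rho, rho_c):
    """Specific turbulent kinetic energy of equation (25).

    u, v, w : velocity components on the grid, assumed shaped (nx, ny, nz)
    rho     : local density field (rho_d inside the bubbles, rho_c outside)
    rho_c   : carrier-phase density
    Fluctuations are defined with respect to the x-y plane average at each z.
    """
    u_p = u - u.mean(axis=(0, 1), keepdims=True)
    v_p = v - v.mean(axis=(0, 1), keepdims=True)
    w_p = w - w.mean(axis=(0, 1), keepdims=True)
    return (rho / rho_c) * 0.5 * (u_p**2 + v_p**2 + w_p**2)
```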
To quantify this aspect more closely, we compute the mean value of the specific turbulent kinetic energy inside the bubbles for all simulated cases, except for the combined case, and we collect the results in figure 10. To better evaluate the contribution of density and velocity fluctuations to the resulting TKE values, the turbulent kinetic energy is evaluated using the complete definition (equation 25) in panel \(a\), while in panel \(b\) TKE is evaluated considering only the velocity-fluctuation contribution (i.e. TKE is reported normalized by the local density contribution \(\rho/\rho_{c}\)). The mean values of TKE are reported as a function of the density ratio (scale on the bottom part of the plot), viscosity ratio (scale on the top part of the plot), and Weber number (full circles for \(We=1.5\) and empty circles for \(We=3.0\)). Starting from the effects of the density and viscosity ratios shown in panel \(a\), we can observe two opposite trends: as the viscosity ratio increases, the mean value of TKE inside the bubbles decreases by about one order of magnitude while, by contrast, increasing the density ratio, the mean value of TKE inside the bubbles rapidly increases (by about four orders of magnitude). This behavior reflects the modifications of the inertial and viscous forces inside the bubbles produced by the different dispersed phase density and viscosity.

Figure 8: Scatter plot of the wall-normal location of each bubble over its size for the different cases considered. The two black dashed lines identify the condition for which the interface of the bubble intercepts the closer wall in the hypothesis of a perfectly spherical bubble. Smaller bubbles tend to disperse along the entire channel height and can get rather close to one of the two walls, while larger bubbles tend to accumulate at the center of the channel.

As the viscosity ratio is increased from \(\eta_{r}=0.01\) up to \(\eta_{r}=100\) (from left to right), viscous forces become dominant over inertial forces and thus the local Reynolds number decreases. As a result, for low viscosity ratios, we observe small turbulent structures inside the bubbles characterized by significant TKE levels, while, for viscosity ratios larger than unity, turbulence structures cannot be sustained inside the bubbles (larger viscous dissipation) and the bubbles are characterized by a low level of TKE. A similar trend, albeit in a slightly different simulation setup, was reported by Cano-Lozano _et al._[54], who investigated the rise of bubbles in a still liquid and observed a reduction of the velocity gradients for increasing values of the viscosity ratio. On the other hand, increasing the density ratio from \(\rho_{r}=0.001\) up to \(\rho_{r}=1\), inertial forces become dominant over viscous forces, the local Reynolds number increases and the bubbles are characterized by larger TKE values. Interestingly, we observe a much stronger action of the density ratio on the mean value of the bubble TKE. Indeed, if we compute the specific turbulent kinetic energy using equation 25, the resulting TKE values directly depend on the bubble density and, as we can see from panel \(a\), the present results roughly follow the \(\rho_{r}\) scaling law reported with a dotted line.
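The two averaging conventions used in figure 10 (panel a with the density prefactor, panel b without it) can be written compactly as follows. This is a sketch only: the dispersed phase is assumed to be the region where the phase field is positive, with \(\phi=0\) marking the interface, and all names are illustrative.

```python
import numpy as np

def bubble_tke_statistics(tke, rho, rho_c, phi):
    """Mean TKE inside the bubbles, with (panel a) and without (panel b)
    the rho/rho_c prefactor; phi > 0 is assumed to mark the dispersed phase."""
    inside = phi > 0.0
    with_prefactor = tke[inside].mean()
    velocity_only = (tke[inside] * (rho_c / rho[inside])).mean()
    return with_prefactor, velocity_only
```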
However, it is worth observing that when the smallest density ratio is considered (\(\rho_{r}=0.001\)), the results start to deviate from the \(\rho_{r}\) scaling law: as the density ratio is reduced, we observe a reduction in the magnitude of the velocity fluctuations of about one order of magnitude. This deviation can be better appreciated in panel \(b\), where the TKE values are reported normalized by the prefactor \(\rho/\rho_{c}\), so that the contribution from the velocity fluctuations alone can be better appreciated. The magnitude of the velocity fluctuations is roughly constant when considering different density ratios, with the exception of the lowest density ratio, \(\rho_{r}=0.001\), thus indicating that the specific TKE scales with the density ratio. Finally, we can consider the effect of the Weber number: increasing the Weber number, thus decreasing the surface tension, the TKE is slightly increased for all the cases. This trend can be attributed to the larger transfer of momentum that occurs when surface tension forces are weaker: as the interface becomes more deformable, the modulation effect of the interface becomes weaker and energy and momentum can be more easily exchanged between the phases. When the surface tension is reduced, in fact, the bubbles become more deformable and are thus more likely to contain a greater amount of TKE.

Figure 9: Contour map of the turbulent kinetic energy in a \(x^{+}-y^{+}\) plane located at the channel center (\(z^{+}=0\)). Panel (_a_) refers to the case \(\rho_{r}=0.01\) and \(\eta_{r}=1\) while panel (_b_) refers to the case \(\rho_{r}=1\) and \(\eta_{r}=0.01\). Both panels refer to the lower surface tension case (\(We=3.0\)) and to the time instant \(t^{+}=4000\) (statistically-steady configuration). The interface of the bubbles is highlighted with a white line. For \(\rho_{r}=0.01\), the bubbles are characterized by a low and uniform value of the TKE while, for \(\eta_{r}=0.01\), the TKE map is non-uniform and characterized by small-scale fluctuations.

## IV Conclusions

In this work, we studied the behavior of bubbles in a turbulent channel flow for different values of the density ratio, viscosity ratio, and Weber number. The investigation is based on direct numerical simulation of turbulence coupled with a phase-field method. First, we investigated the topology of the dispersed phase and its modifications. We found that the number of bubbles present in the channel is strongly influenced by the surface tension value (i.e. by the Weber number), in accordance with the results of previous studies [14, 16, 17]. Besides, we observe that an increase of the bubble viscosity with respect to the carrier (i.e. an increase of the viscosity ratio) has an important stabilizing role and leads to a remarkable increase of the maximum stable bubble diameter and thus to a decrease of the number of bubbles. By contrast, a reduction of the bubble density (i.e. a reduction of the density ratio) does not remarkably affect the dispersed phase topology. Similar findings are obtained from the analysis of the coalescence and breakage rates: an increase of the bubble viscosity or of the surface tension (i.e. a decrease of the Weber number) leads to a reduction of the breakage and coalescence rates. In contrast, a modification of the density ratio has a marginal effect on the behavior of the breakage and coalescence rates. Secondly, we studied the surface stretching and curvature of the bubbles. We observed that these indicators are influenced by all three parameters investigated.
In particular, larger viscosity ratios, or lower density ratios or Weber numbers, hinder the stretching of the bubbles and, as a result, the overall amount of interfacial area obtained is lower (with respect to the reference matched density and viscosity cases).

Figure 10: Mean value of the turbulent kinetic energy (TKE) inside the bubbles. In panel \(a\), TKE is evaluated using the complete definition of specific TKE (i.e. including the prefactor \(\rho/\rho_{c}\)) while, in panel \(b\), TKE is evaluated considering only the velocity contribution (i.e. not considering the prefactor \(\rho/\rho_{c}\)). For both panels, a dashed line (\(We=1.5\)) and a continuous line (\(We=3.0\)) are used to show the behavior of TKE as the density or viscosity ratios are changed. Each value of TKE is marked with a circle (empty for \(We=1.5\) and filled for \(We=3.0\)), with a red-color scale for the non-matched density cases and a blue-color scale for the non-matched viscosity cases, while the black color is used for the reference case.

These observations are also reflected in the probability density function of the mean curvature: an increase of the bubble viscosity, a decrease of the bubble density or a decrease of the Weber number hinders the formation of ripples and dimples on the surface of the bubbles and thus high curvature values are less likely to be found. Thirdly, we evaluated the flow modifications produced by the swarm of bubbles in the background turbulent flow and in the dispersed phase. From a macroscopic point of view, no significant modifications are observed in the wall-normal behavior of the mean velocity profiles and only a minor increase of the flow-rate is observed for all bubble-laden cases with respect to a single-phase flow, in accordance with previous results [14; 16; 17]. Finally, as the internal circulation of the bubbles plays a key role in controlling the transport of heat, mass and momentum through the interface [64; 65], we characterized the mixing in the bubbles by studying their turbulent kinetic energy. We observe a clear action of density and viscosity in modulating the turbulent kinetic energy of the bubbles. In particular, a decrease of the bubble density or an increase of the bubble viscosity leads to a remarkable decrease of the turbulent kinetic energy levels in the bubbles.

###### Acknowledgements.

We acknowledge ISCRA for awarding us access to Marconi-KNL (Project ID: HP10BOR3UN, 10M core hours) and PRACE for awarding us access to HAWK at GCS@HLRS, Germany. FM gratefully acknowledges funding from the MSCA-ITN-EID project _COMETE_ (project code 813948).

## Appendix A Detection of coalescence and breakage events

In the simulations presented in the manuscript, topological changes are implicitly described by the phase-field method and thus no closure models are required to describe coalescence and breakage events. To compute the coalescence and breakage rates, we use an algorithm that relies on the analysis of bubble trajectories and bubble volumes to identify topological modifications of the interface. The input data needed are: the position of the center of mass of each droplet, identified by the subscript \(i\), at the current time step, \(\mathbf{x}_{i}^{n}\), the velocity of the center of mass of each droplet at the current time step, \(\mathbf{u}_{i}^{n}\), and the position of the center of mass of each droplet at the following time step, \(\mathbf{x}_{i}^{n+1}\).
These quantities are calculated for each droplet \(i\) and are defined as: \[\mathbf{x}_{i}^{n}=\frac{1}{V_{i}^{n}}\int_{V_{i}^{n}}\mathbf{x}\,\mathrm{d}V\,; \tag{10}\] \[\mathbf{u}_{i}^{n}=\frac{1}{V_{i}^{n}}\int_{V_{i}^{n}}\mathbf{u}\,\mathrm{d}V\,; \tag{11}\] \[\mathbf{x}_{i}^{n+1}=\frac{1}{V_{i}^{n+1}}\int_{V_{i}^{n+1}}\mathbf{x}\,\mathrm{d}V\,, \tag{12}\] where the integral is computed over the volume \(V_{i}\) of each droplet. The superscripts \(n\) and \(n+1\) identify respectively the current and the following time step; the elapsed time between the two time steps is \(\Delta T\). In the first step, the estimated position of each droplet at the following time step is computed as: \[\mathbf{x}_{est,i}^{n+1}=\mathbf{x}_{i}^{n}+\Delta T\mathbf{u}_{i}^{n}\,. \tag{13}\] To better explain the technique employed to detect translations, breakages and coalescences, some examples are reported in figure 11. For each droplet we compute the estimated position at the following time step, \(\mathbf{x}_{est,i}^{n+1}\), and we search for the closest bubble at the following time step; at this stage some droplets at the following time step may be left out (they are not the closest droplet to any estimated droplet position). This step corresponds to figure 11\((a)\): the estimated position of droplet \(T_{n}\) is calculated (red semi-transparent bubble) and the closest bubble at the following time step is found (droplet \(T_{n+1}\)). In the following stage, breakage and coalescence events have to be sorted out from these data. A breakage is detected whenever a droplet in \(\mathbf{x}^{n+1}\) has no parent droplet: according to figure 11\((b)\), bubble \(B_{n+1,2}\) has no parent bubble, thus it originated from a breakage event. Once a breakage event is identified, the algorithm searches for the closest droplet to the bubble \(B_{n+1,2}\) at time step \(n+1\); in this case droplet \(B_{n+1,1}\) is found. It is then assumed that droplet \(B_{n}\) (whose estimated position is the closest to droplet \(B_{n+1,1}\)) breaks apart into droplets \(B_{n+1,1}\) and \(B_{n+1,2}\). Once all breakages have been detected, the algorithm looks for coalescence events. A coalescence event is detected whenever two separate droplets at time step \(n\) are assigned to the same droplet at time step \(n+1\). In particular, referring to figure 11\((c)\), bubbles \(C_{i}^{n}\) and \(C_{j}^{n}\) are both assigned to bubble \(C_{i}^{n+1}\), as it is the closest one to their estimated positions. So far, only kinematic criteria have been used to determine the trajectory and eventual interactions (coalescences and breakages) of each bubble. Once all the trajectories at the present time step have been determined, the quality index and the balance are computed. In particular, the quality index, \(Q\), is initialized at the beginning of the time step to the number of droplets at the current time step, \(N_{n}\); every time volume is not conserved (within a certain small threshold) in one of the translation, breakage or coalescence events, the quality index is reduced by one. At the end of the time step, it is normalized by \(N_{n}\). Recalling the examples of figure 11, three checks on the volume conservation are performed depending on the type of event: \[\begin{cases}V_{T_{n}}=V_{T_{n+1}}\pm\varepsilon&\text{for translations}\\ V_{B_{n}}=V_{B_{n+1,1}}+V_{B_{n+1,2}}\pm\varepsilon&\text{for breakages}\\ V_{C_{n,1}}+V_{C_{n,2}}=V_{C_{n+1}}\pm\varepsilon&\text{for coalescences}\end{cases}\,. \tag{14}\]
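The kinematic matching and bookkeeping steps described above can be condensed into a short sketch. This is an illustration only, not the post-processing code used for the simulations: the function and array names are invented, the closest-sibling search used to attribute a breakage to a specific parent is omitted, and the volume and balance checks are reduced to their simplest form (the tolerance argument plays the role of the \(\varepsilon\) discussed below).

```python
import numpy as np

def match_bubbles(x_n, u_n, x_np1, dt):
    """Kinematic matching: advect each centre of mass over dt (x_est = x + dt * u)
    and assign it to the closest bubble at the following time step."""
    x_est = x_n + dt * u_n
    dist = np.linalg.norm(x_est[:, None, :] - x_np1[None, :, :], axis=2)
    child = dist.argmin(axis=1)          # closest next-step bubble for each bubble
    return x_est, child

def detect_events(child, n_next):
    """Next-step bubbles with no parent signal breakages; next-step bubbles with
    two or more parents signal coalescences (binary events assumed)."""
    parents = [np.flatnonzero(child == j) for j in range(n_next)]
    breakage_children = [j for j, p in enumerate(parents) if p.size == 0]
    coalescences = [(tuple(p), j) for j, p in enumerate(parents) if p.size >= 2]
    return breakage_children, coalescences

def volume_conserved(parent_volumes, child_volumes, tol):
    """Volume check for a single event (translation, breakage or coalescence)."""
    return abs(sum(parent_volumes) - sum(child_volumes)) <= tol

def balance(n_curr, n_next, n_breakages, n_coalescences):
    """Bookkeeping balance B; zero when every bubble is accounted for."""
    return n_next - (n_curr + n_breakages - n_coalescences)
```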
To account for numerical errors that could occur in the calculation of the volume of each bubble (which would strongly reduce the quality index of the matching), a small tolerance \(\varepsilon\) (of the order of a few percent of the volume of the parent droplet) is used when checking for volume conservation. The second parameter controlling the quality of the calculated trajectories is the balance, \(B\). The total number of bubbles at each time step is known: \(N_{n}\) at the current time step and \(N_{n+1}\) at the following one. Once the number of breakage and coalescence events is known, the balance can be calculated as: \[B=N_{n+1}-\left(N_{n}+N_{b}-N_{c}\right), \tag{15}\] where \(N_{b}\) and \(N_{c}\) are respectively the number of breakage and coalescence events that occur between time steps \(n\) and \(n+1\). The number of droplets at the current time step, \(N_{n}\), is increased whenever a droplet undergoes breakage into two bubbles and is decreased whenever two bubbles coalesce into one bubble. Here we make the assumption that all breakages are binary breakages and that all coalescences involve only two parent droplets at a time. Thus, considering these two parameters, a fair matching of the trajectories is obtained with a quality index \(Q\to 1\) and a balance \(B=0\). This means that the volume is always matched (quality index never or rarely reduced) and no bubble is left out (balance equal to zero). Finally, once the number of coalescence and breakage events that occur between each time step \(n\) and \(n+1\) is known, the coalescence and breakage rates, \(\dot{N_{c}}\) and \(\dot{N_{b}}\), can be computed by counting the overall number of coalescence or breakage events occurring in the temporal window \(\Delta t^{+}\). Note that the temporal window used to track the trajectories of the bubbles is smaller than the temporal window used to compute the rates. The present algorithm considers only binary breakage and coalescence events. This assumption is not particularly limiting, as binary breakages/coalescences have the highest probability of occurrence [72, 73, 74]. This assumption is also confirmed by the simulations performed: the quality index never drops below \(0.85\) (so the volume is matched for at least \(85\%\) of all the translation, breakage and coalescence events) and at most a few droplets are left unmatched (the balance is almost zero).

Figure 11: Possible cases considered for the algorithm: panel \((a)\) corresponds to a translation, panel \((b)\) to a breakage and panel \((c)\) to a coalescence. Red bubbles are at the current time step \((n)\), while blue bubbles are at the next available time step \((n+1)\). Semi-transparent bubbles show the estimated position, \(\mathbf{x}_{i,est}^{n+1}\). Arrows show the trajectory of the bubbles, \(\Delta T\mathbf{u}_{i}^{n}\).

## Appendix B Influence of grid resolution on coalescence and breakage rates

To evaluate the influence of the grid resolution on the coalescence and breakage rates, we perform two additional simulations: one with a coarser grid resolution (\(N_{x}\times N_{y}\times N_{z}=256\times 128\times 257\)) and one with a more refined grid resolution (\(N_{x}\times N_{y}\times N_{z}=1024\times 512\times 1025\)). The three simulations consider the same case: \(We=3.00\), \(\rho_{r}=1.000\) and \(\eta_{r}=1.00\). As the grid resolution is changed, the Cahn number has also been adjusted accordingly (from \(Ch=0.04\) for the coarser grid down to \(Ch=0.01\) for the finer grid).
We compare the coalescence and breakage rates obtained from the three different grid resolutions in figure 13. We can observe that for all the grid resolutions considered, the trend reported is similar and, for all simulations, after an initial transient, both rates settle to an equal value (in magnitude). Analyzing the value of the rates obtained, we notice that some differences are present between the coarser grid resolution (triangles) and the intermediate grid resolution (circles). However, these differences become marginal when the intermediate grid resolution (circles) and the refined grid (squares) results are compared. Overall, the present results suggest that the mesh employed is sufficient to investigate breakage/coalescence dynamics.

Figure 12: Flow chart of the algorithm used to detect breakage and coalescence events in the post-processing of the simulations.
2309.16635
Analysis of the Usability of Automatically Enriched Cultural Heritage Data
This chapter presents the potential of interoperability and standardised data publication for cultural heritage resources, with a focus on community-driven approaches and web standards for usability. The Linked Open Usable Data (LOUD) design principles, which rely on JSON-LD as lingua franca, serve as the foundation. We begin by exploring the significant advances made by the International Image Interoperability Framework (IIIF) in promoting interoperability for image-based resources. The principles and practices of IIIF have paved the way for Linked Art, which expands the use of linked data by demonstrating how it can easily facilitate the integration and sharing of semantic cultural heritage data across portals and institutions. To provide a practical demonstration of the concepts discussed, the chapter highlights the implementation of LUX, the Yale Collections Discovery platform. LUX serves as a compelling case study for the use of linked data at scale, demonstrating the real-world application of automated enrichment in the cultural heritage domain. Rooted in empirical study, the analysis presented in this chapter delves into the broader context of community practices and semantic interoperability. By examining the collaborative efforts and integration of diverse cultural heritage resources, the research sheds light on the potential benefits and challenges associated with LOUD.
Julien Antoine Raemy, Robert Sanderson
2023-09-28T17:43:00Z
http://arxiv.org/abs/2309.16635v1
# Analysis of the Usability of Automatically Enriched Cultural Heritage Data

###### Abstract

This chapter presents the potential of interoperability and standardised data publication for cultural heritage resources, with a focus on community-driven approaches and web standards for usability. The Linked Open Usable Data (LOUD) design principles, which rely on JSON-LD as lingua franca, serve as the foundation. We begin by exploring the significant advances made by the International Image Interoperability Framework (IIIF) in promoting interoperability for image-based resources. The principles and practices of IIIF have paved the way for Linked Art, which expands the use of linked data by demonstrating how it can easily facilitate the integration and sharing of semantic cultural heritage data across portals and institutions. To provide a practical demonstration of the concepts discussed, the chapter highlights the implementation of LUX, the Yale Collections Discovery platform. LUX serves as a compelling case study for the use of linked data at scale, demonstrating the real-world application of automated enrichment in the cultural heritage domain. Rooted in empirical study, the analysis presented in this chapter delves into the broader context of community practices and semantic interoperability. By examining the collaborative efforts and integration of diverse cultural heritage resources, the research sheds light on the potential benefits and challenges associated with LOUD.

## 1 Introduction

The cultural heritage sector must learn from the success of the International Image Interoperability Framework (IIIF - pronounced "triple-eye-eff")1, a model for presenting and annotating digital resources that is backed by a global community developing and maintaining agreed-upon application programming interfaces (APIs) [1], with respect to the possibility and benefits of widespread interoperability. If, as a community, we can expand from our silos of knowledge into a connected system of interoperable information, the entire sector will benefit, both from the audience perspective of vastly increased access to the information, and also from the publishing perspective of ease of cataloguing and delivery. Footnote 1: [https://iiif.io](https://iiif.io) This knowledge network would be maintained by GLAM (Galleries, Libraries, Archives, and Museums) organisations which, as the owners and custodians of cultural and natural history objects, are best positioned to maintain information about those objects. The publication of that knowledge in an easy to use and consistent methodology will bring about the same ecosystem of tools, usage and understanding as we have seen emerge via IIIF over the last decade. Moreover, IIIF has provided a foundational framework that has not only facilitated the emergence but also guided the development of Linked Art, a community working together to create a shared model based on linked data to semantically describe cultural heritage resources, enabling it to embrace and adhere to analogous structural paradigms, as elaborated in Section 1.3. It will facilitate the creation of discovery and research systems without the expense of current aggregators that transform the data, and ensure that the data is kept up to date by incentivising the publishers to do so for their own benefit, rather than for the good of the aggregator.
Yale has demonstrated the possibility of this vision through the creation of LUX, described in Section 1.4, which aggregates multiple, independent data sources as Linked Open Usable Data (LOUD), reconciles and enriches the records, and makes a large scale research and discovery system available for unencumbered use. If more institutions were to publish their data in a consistent and interoperable manner, in order to get the benefits demonstrated by LUX, all institutions' systems and user experience would be improved by access to the totality of the community's knowledge. This would directly improve exhibitions knowledge with access to all of the host institutions' and lending institutions' records; it would allow stronger bibliographic and museum linking, such as other institutions' objects appearing as the subjects of published works; and it would facilitate the creation of digital catalogues raisonnés (such as the Van Gogh World Wide project2 and the Duchamp Research Portal3), while better serving critical shared knowledge management tasks such as the maintenance of living artist information. The entire community can contribute their knowledge without significantly changing their existing knowledge management practices, by transforming and using their data according to the LOUD design principles4. Footnote 2: [https://vangoghworldwide.org](https://vangoghworldwide.org) Footnote 3: [https://www.duchamperchives.org](https://www.duchamperchives.org) Footnote 4: [https://linked.art/loud](https://linked.art/loud) The audiences also directly benefit, be that for teaching and learning, research or general awareness and interest. There are only a few use cases in which the owning institution or physical location is the significant factor; instead, the user just wants to discover and interact with the objects via their digital surrogates. If a user is interested in the artist J.M.W. Turner, for example, it is of little concern that "Rain, Steam and Speed" is at the National Gallery in London while "Dort or Dordrecht: The Dort Packet-Boat from Rotterdam Becalmed" is in New Haven, when you can digitally find them via interoperable, semantic descriptions and bring images of them together to compare them side by side, wherever you are, through IIIF. Participating institutions would further benefit via economies of scale. With decentralised data and interfaces, but centralised shared services, such as entity reconciliation and mapping of common datasets, we avoid the challenges of having a single centralised system, which inevitably does not perform at scale, costs a lot of money for a single organisation to maintain, and whose data is not kept up to date as there is no incentive to do so. However, with centralised shared and standards-based services that could be funded and maintained by the community, we ensure that the functionality is available to all and the costs can be defrayed beyond a single organisation. That vision might sound like a fanciful fiction at first, but given the impact of IIIF for access to image content, we must consider first why it was so successful, and secondly how we can apply that understanding to advancing broad and usable access to cultural knowledge globally. And for that we must start in the next section with the details of Data Usability.

### Data Usability

Tim Berners-Lee's vision for the Semantic Web [2], or the Web of Knowledge, has been around for almost as long as the Web itself and has been convincingly argued to be ultimately unachievable on a global scale [3], across all knowledge domains.
Moreover, Bizer et al. [5] identified several persistent challenges for the Semantic Web, a decade after the conception of the Resource Description Framework (RDF), a method for description and exchange of graph data, which include data-driven user interfaces, application architectures, schema mapping, link maintenance, licensing, trust, quality, relevance, as well as privacy concerns. However, with the creation of JavaScript Object Notation for Linked Data (JSON-LD) as a developer-friendly serialisation of RDF, we have seen some aspects of that vision realised over the past 10 years.

#### JSON-LD

At the time of writing, in September 2023, JSON-LD is used by 45.8% of all websites around the world5. IIIF also uses JSON-LD, although few systems actually depend on the graph that it describes, most treating it as JavaScript Object Notation (JSON) of a particular structure. With almost half of all websites using a knowledge graph serialisation, and the success of IIIF in the cultural heritage sector, it is clear that JSON-LD has played a critical role compared to previous attempts. Footnote 5: [https://w3techs.com/technologies/details/da-jsonld](https://w3techs.com/technologies/details/da-jsonld) JSON, as a data syntax and a lightweight data interchange format, is very easy to work with both in the browser and in data management systems. It is compact, and relatively easy to read and scan by the human eye, while enabling nested structures and values that align with programming languages. It can be created by hand in a text editor, or serialised from other data structures using common libraries and tools. This is important because, we argue, the audience for Linked Open Data (LOD) is the developer, and not a researcher or other end user of an application. For LOD to be used, it must be usable, and usability is determined not objectively without context, but instead by the needs and understanding of the user [4]. The user of the data is the developer, and thus they determine its usability to accomplish their current task, typically to build an application that either publishes or consumes that data to enable discovery and access to the knowledge that it encodes.

#### LOUD Design Principles

In his EuropeanaTech 2018 keynote, Sanderson argues that for data to be usable it must have five core features, known as the LOUD design principles and paralleling to some extent Tim Berners-Lee's Five Star Open Data Deployment Scheme6: Footnote 6: [https://5tardata.info/](https://5tardata.info/)

A. It must have the right Abstraction for the audience. The typical approach is to publish in excruciating and incomprehensible detail absolutely everything that is known about a particular topic in a complex structured form. This is neither usable nor necessary for the majority of use cases - the right abstraction of that data is one which allows the user (the developer) to accomplish their task relatively easily and, if at all possible, enjoyably. If the developer likes to work with the data, then they will continue to do so, and will encourage others to use that format, creating a virtuous cycle. In the same way that the designer of a car's control systems, a mechanic working on it, and the driver all need different access and understanding of that system, so too do different audiences need different access to cultural heritage knowledge.

B. There must be few Barriers to entry.
If it is easy to get started, hopefully by merely reading the data and understanding what is happening, then more developers will get started using the data. If it takes a long time to see any sign of progress, many developers will look for an easier route. Conversely, the more people that start and continue to work with the data, the more tools become available, and the more awareness of the data there is. This accelerates the virtuous cycle by demonstrating that not only is it the correct abstraction, it is also quick to accomplish the task.

C. It must be Comprehensible by simply reading the data, rather than having to use specialised tools or require significant initial research to know how to interpret it. A spreadsheet without column headers is incomprehensible, as are formats that rely exclusively on numeric naming conventions for classes, properties or other structures. Uniform Resource Identifiers (URIs) are central in linked data, but URIs should be treated as if they are opaque - users should not read semantics into the URI, and publishers should not feel the need to try and encode details of what the URI identifies within the URI itself. This means that the data must provide some assistance to the user by giving a label or name alongside every URI.

D. There must be solid Documentation that has working examples to learn from. While many developers like to get started by reading the data, it is impossible to intuit all of the semantics and possible constructions from looking at examples. There must be solid, easily discoverable reference material that documents very clearly and explicitly what is permissible in the format. That documentation must have examples of each feature, and those examples should be complete and able to be dropped into an implementation of the specification in order to see it in practice.

E. There should be few Exceptions, and instead the data should be internally consistent. Every exception is another rule that the developer needs to understand and then implement in their code. These exceptions are often jarring and uncomfortable to work with, leaving the developer wondering why there is this difference and what other differences there are that they don't yet know about. Conversely, being as consistent as possible means that tools are easy to build and that testing frameworks can be created to prove that they are correct and complete.

Overall, the main intention of LOUD is to provide straightforward access to data, primarily for software developers. Thus, a balance must be established that addresses the need for data completeness and accuracy, which depends on the ontological construct, and the pragmatic concerns of scalability and ease of use.

#### Adherence of the IIIF Presentation API 3.0 to the LOUD Design Principles

The IIIF specifications7 can be easily demonstrated to fulfil all of these requirements for Usability. Taking the IIIF Presentation API version 3.0 as the baseline, its goal is not semantic interoperability, but instead to provide enough information to the audience - the software engineer - to create a view of the object using the referenced images, metadata and other content, i.e. the IIIF Presentation API specifies a standardised description of a collection or compound object (via the Manifest resource) enabling a rich and complex user experience [6]. Comparing this to the above criteria, we find that it meets them all easily.
Footnote 7: [https://iiif.io/api/](https://iiif.io/api/) The abstraction of the data is appropriate for the audience to accomplish the expressed task of building a viewing application, as it does not attempt to encode any semantic or descriptive metadata; instead it aligns its structure with the intended usage. Instead of a myriad of metadata fields to understand, it has (label, value) pairs that are divided up by language, in a structure that is easy to read, and easy to code with. It is laid out in such a way that the first part of the data structure is the first part that the developer needs to render to the user, and even URIs are abstracted away into the JSON-LD context document, allowing the developer to deal only with easy-to-read strings and numbers.

Figure 1: _Dort or Dordrecht: The Dort Packet-Boat from Rotterdam Becalmed by Joseph Mallord William Turner as a IIIF Presentation API 3.0 resource displayed in the Universal Viewer, a IIIF-compliant client. Link to the IIIF Manifest (JSON-LD): https://manifests.collections.yale.edu/v2/ycba/obj/34._

There are few barriers to entry to get started with either publishing (a complete instance can be created in just a few lines of code, or easily written by hand) or building a consuming application. A complete implementation is a lot of work, but to get started is easy, even for someone with minimal programming experience. As it is easy to get started, many have followed through to create wonderful applications that use it. It is easily understood by reading through the data. The structure and naming conventions are easy to follow, and conform to the expected usage. As JSON, it is a syntax that is very familiar to developers, and that it is also JSON-LD is not even necessary to know, let alone understand, in order to use it8. Footnote 8: Starting in 2018, upcoming IIIF specifications and enhancements to current specifications have embraced JSON-LD 1.1 instead of JSON-LD 1.0. This shift offers numerous advantages, such as the capacity to finely define the impact of context definitions and exert greater control over the specific JSON serialisation. [https://iiif.io/api/annex/notes/jsonld/](https://iiif.io/api/annex/notes/jsonld/) The documentation is clear and kept up to date. The definition of the structure and possible properties, along with the expected values, are well indexed with expectations for clients and publishers as to what a minimally conforming instance will contain. The examples in the specification itself are not complete, as that would take up a lot of space; however, the accompanying cookbook maintains a steady progression of examples and explanations from simple through to the most complex. Finally, there are few exceptions, even to the way in which images, text, video and other content such as tags or commentary are brought together, in this case through the W3C Web Annotation Data Model [7]. The naming conventions of properties, the usage of those properties and the expected usage of them are all clearly defined and do not have special rules based on the context where they are used. As such, the IIIF Presentation API is, according to the five criteria set out, highly usable and we argue this is the fundamental reason for its success.
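As a concrete illustration of this low barrier to entry, the sketch below walks a heavily truncated Manifest with nothing more than standard JSON handling. The structure (language-mapped label, metadata pairs, an items list of Canvases) follows the Presentation API 3.0 shape, but every identifier and value here is invented for the example and the helper function is not part of any IIIF library.

```python
# A truncated, invented Manifest following the Presentation API 3.0 shape.
manifest = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "id": "https://example.org/iiif/manifest/1",
    "type": "Manifest",
    "label": {"en": ["Example Painting"]},
    "metadata": [
        {"label": {"en": ["Artist"]}, "value": {"en": ["J. M. W. Turner"]}},
        {"label": {"en": ["Date"]}, "value": {"en": ["1818"]}},
    ],
    "items": [
        {"id": "https://example.org/iiif/canvas/1", "type": "Canvas",
         "height": 3000, "width": 4000, "items": []},
    ],
}

def first_value(language_map, lang="en"):
    """Pick a display string from a language map, falling back to any language."""
    values = language_map.get(lang) or next(iter(language_map.values()))
    return values[0]

# No JSON-LD tooling is needed to render the descriptive information.
print(first_value(manifest["label"]))
for pair in manifest["metadata"]:
    print(f"{first_value(pair['label'])}: {first_value(pair['value'])}")
```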
### IIIF Design Principles

The IIIF Design Principles9 express the way in which the IIIF community designed the APIs, leading to their usability. It must be noted that the design principles are not expressed in terms of usability, but as more objective constructs or methodologies that can be followed, which then result in usable data [8]. Footnote 9: [https://iiif.io/api/annex/notes/design_principles/](https://iiif.io/api/annex/notes/design_principles/) The first design principle is to scope the work through shared use cases. This ensures that the goals of the specifications are clear and well understood, such that the specifications will allow the developer to accomplish their aims, if those aims are met by the use cases that are agreed upon. It also ensures that the specifications are focused on practical details, not on theoretical issues, thereby making them easier to understand. This principle directly deals with requirements A and B. The next three principles focus on being as easy as possible and keeping the barriers to entry minimal. These deal with requirements A, B and E. By avoiding specific technologies as requirements, and selecting simple (and consistent) solutions, the APIs are easy to get started with, appropriate for the audience, and have few exceptions to deal with. Principles 2.5, 2.6, 2.7, 2.9 and 2.10 concern implementation details on the web, and in particular following the linked data principles and good web practices. These principles do not fall into the categories above directly, but instead are to ensure that the implementations are performant and fit within existing technologies and standards. The closest principle to the notion of usability is perhaps 2.8, which asserts that the specifications should be designed with JSON-LD in mind. The document says that the intent here is to ensure that the data "is as easy to use as possible without the need for a full RDF development suite" and that this will increase the likelihood of adoption. The details of designing for JSON-LD in the IIIF context are then well described later in that document. The final three principles return to ease of adoption and implementation by ensuring that the different APIs do not all need to be implemented together but instead are loosely coupled, by ensuring that they are internationalised and usable around the world, and that it is easy to extend the specifications to local use cases by defining what is expected to work in which conditions, and leaving everything else unsaid.

#### The Success and Adoption of IIIF

A number of factors have contributed to the success and uptake of IIIF. First, it hinges significantly on the presence of robust and well-designed software implementations, encompassing both servers and clients. Servers are essential for hosting, managing, and serving resources in a manner compliant with IIIF specifications. These server-side software solutions should efficiently handle image requests, metadata retrieval, and other IIIF API interactions while maintaining high performance and scalability. Equally important are well-designed, attractive and usable client implementations that are open source and easy to set up, as they form the interface through which end-users access and interact with digital resources. For instance, the existence of Mirador10 and the Universal Viewer11, along with the OpenSeadragon12 library which they use for dealing with zoomable images, made interoperability an easy case to make to decision-makers and funders.
Footnote 10: [https://projectmirador.org/](https://projectmirador.org/) Footnote 11: [https://universalviewer.io/](https://universalviewer.io/) Footnote 12: [https://openseadragon.github.io/](https://openseadragon.github.io/) As detailed in a study by Raemy in 2023 [9], the IIIF community's success is underpinned by its inclusive and collaborative nature, the availability of interoperable APIs, and compatible implementations. Raemy emphasises the community's openness, friendliness, and commitment to aiding others in their endeavours. Furthermore, the study highlights the collaborative essence of the IIIF community, its connections with prominent figures in the field, and the active participation of technical experts. Raemy also commends the community's well-structured organisation, seamless coordination, and the invaluable support provided by IIIF staff to facilitate cooperation among members. Comprehensive documentation, a pragmatic approach, and the ability to address specific shared needs further contribute to the community's success. The IIIF community's dedication to developing specifications, providing practical solutions and continually evolving the standard underpins its continued appeal. In light of these attributes that have propelled the IIIF community to prominence, it is noteworthy to delve into a specific aspect of its approach. In striving for widespread adoption of its specifications, the IIIF community undertakes several proactive initiatives, such as writing "cookbook recipes"13 to encourage publishers to adopt common patterns in modelling classes of complex objects, to enable client software developers to support these patterns for consistency of user experience, and to demonstrate the applicability of IIIF to a broad range of use cases. Footnote 13: [https://iiif.io/api/cookbook/](https://iiif.io/api/cookbook/) Additionally, the community remains highly active, furthering its reach and influence, notably through its various committees and interest groups14. By consistently seeking advancements and adaptations, the IIIF community not only ensures its relevance but also propels the field forward. This commitment is epitomised by its active exploration of avenues to formally disseminate 3D objects within its framework [10]. Footnote 14: [https://iiif.io/community/](https://iiif.io/community/) By adopting these easy-to-implement specifications, institutions immediately experience the advantage of not needing to tackle the more complex user-facing components. When considering Linked Art and semantic cultural heritage data, we will look at Yale's LUX from this perspective: if the specifications are easy to publish, is there an adoptable consuming application that demonstrates the value of publishing the data? While the IIIF Presentation API 3.0 focuses on providing a structured framework with sufficient metadata to facilitate a seamless remote viewing experience, it still does not convey the semantic information that Linked Art can provide. This highlights a gap that Linked Art bridges by enriching the understanding and integration of cultural heritage data in the digital realm.

### Linked Art

Linked Art is a community-driven initiative collaborating to define a metadata application profile, the model, to describe cultural heritage, and the technical means, a RESTful API, for conveniently interacting with it.
More specifically, it is an RDF application profile of the CIDOC Conceptual Reference Model (CIDOC-CRM) serialised in JSON-LD that incorporates Getty vocabularies15, such as the Arts & Architecture Thesaurus (AAT), the Thesaurus of Geographic Names (TGN), and the Union List of Artist Names (ULAN), and leverages other commonly used RDF ontologies like RDF Schema (RDFS) and Dublin Core for disambiguating closely related property names used by CIDOC-CRM [11]. Linked Art recognises another important perspective: that of software developers who, in many cases in collaboration with scholars, build applications that make use of collections data held by cultural heritage institutions and embrace the LOUD design principles [12]. Footnote 15: [https://www.getty.edu/research/tools/vocabularies/](https://www.getty.edu/research/tools/vocabularies/) The goal of Linked Art is to use linked data to enhance cultural heritage collections, particularly focusing on artworks and their origins. This approach enables consistent and structured ways for art institutions to share art-related data. Since it is based on the high-level ontology CIDOC-CRM, which is developed and maintained by the International Committee for Documentation of the International Council of Museums [13], Linked Art describes assertions in an event-centric paradigm rather than a conventional object-centric framework. Thus, any activity can potentially be represented in an event-centric ontology, which is advantageous for modelling temporal data, enabling better discovery of relationships as well as facilitating fine-grained tracking of changes and historical analysis. Figure 2: Linked Art from 50,000 feet [16; 17]. #### Conceptual Model Linked Art16 is documented, much like IIIF, in an incremental approach where common use cases of stakeholders - compiled through GitHub issues in a transparent manner17 - greatly influence the model [14]. Footnote 16: [https://linked.art](https://linked.art) Footnote 17: [https://github.com/linked-art/linked.art/issues](https://github.com/linked-art/linked.art/issues) Figure 2 shows the high-level conceptual model of Linked Art. It comprises some of the CIDOC-CRM classes leveraged by Linked Art. The model primarily addresses five provenance questions: "what", "where", "who", "how", and "when", akin to some extent to the W7 model developed by Ram and Liu to capture provenance semantics [15]. The model consists of various interconnected components, some of which share common patterns, while others have unique patterns tailored to their specific characteristics. When working with an open ontology like CIDOC-CRM, having these common baseline patterns is valuable. They have been established through experience with datasets from numerous museums, offering practical ways to structure cultural heritage data. There are a few core properties that every resource should have for it to be a useful part of the world of linked data: \begin{tabular}{l l} @context & Contains a reference to the context mapping which determines how to interpret the JSON as LOD. It is not a property of the entity being described, but of the document. It must be present. \\ id & Captures the URI that identifies the object. Every resource must have exactly one id, and it must be an HTTP URI. \\ type & Captures the class of the object, or rdf:type in RDF. Every resource must have exactly one class. This allows software to align the data model with an internal, object-oriented, class-based implementation. 
\\ \_label & Captures a human readable label as a string, intended for developers or other people reading the data to understand what they are looking at. Every resource should have exactly one label, and must not have more than one. It is just a string, and does not have a language associated with it - if multiple languages are available for the content, then implementations can choose which is most likely to be valuable for a developer looking at the data. \\ \end{tabular} Additionally, CIDOC-CRM functions as a framework that needs to be extended through the utilisation of additional vocabularies and ontologies to become useful. The provided mechanism for achieving this is the classified_as property, which points to a term from a controlled vocabulary. This is in contrast to the type property mentioned earlier, which is reserved for CIDOC-CRM defined classes and a few specific extensions as required. Below is a JSON-LD snippet example of an assertion stating that this object is a painting and, therefore, an artwork, using AAT terms. { "@context": "https://linked.art/ns/v1/linked-art.json", "id": "https://linked.art/example/object/20", "type": "HumanMadeObject", "_label": "Simple Example Painting", "classified_as": [ { "id": "http://vocab.getty.edu/aat/300033618", "type": "Type", "_label": "Painting" }, { "id": "http://vocab.getty.edu/aat/300133025", "type": "Type", "_label": "Work of Art" } ] } Further identified patterns within the conceptual model, all vetted by the Linked Art community, consist of object descriptions, people and organisations, places, digital integration (such as leveraging the IIIF specifications), provenance of objects, collections and sets, exhibitions of objects, primary sources of information, assertion level metadata, and dataset level metadata. Each pattern plays a pivotal role in defining and organising data related to artworks, artists, locations, digital assets, historical contexts, collections, exhibitions, and the metadata that underpins this interconnected web of cultural heritage data. ### API Design Principles and Requirements Linked Art also follows in the footsteps of IIIF in terms of scoping how web specifications should be developed by defining its own set of API design principles and requirements18. Footnote 18: [https://linked.art/api/1.0/principles/](https://linked.art/api/1.0/principles/) The design principles are rooted in practicality and interoperability. They are crafted with shared, well-understood use cases, ensuring that the resulting specifications solve real-world problems. Internationalisation is prioritised to remove language barriers for users. The APIs aim for simplicity, allowing for both basic and complex use cases, with the flexibility to start small and incrementally build up. They avoid dependency on specific technologies, making them adaptable across various implementations. By following REST principles, they seamlessly align with the web, ensuring easy caching and interaction. JSON-LD serves as the primary serialisation method, promoting user-friendly representations. Whenever possible, Linked Art adheres to existing standards and best practices to integrate seamlessly with the broader web-based cultural heritage data landscape. Extensibility is encouraged, enabling experimentation and early adoption of new versions. 
Lastly, Linked Art embraces the network's role in information access, recognising that a multitude of publishing environments is more valuable than overly simplistic consuming implementations. The Linked Art API requirements are grouped into four key areas, further illustrate how Linked Art aims to provide implementation-based guidance for creating specifications that are not only practical but also responsive to the needs of the cultural heritage sector. Trivial to Implement Linked Art adheres to principles that prioritize ease of implementation, allowing data to be generated without the need for databases or dynamic systems. Consistency across Representations It is maintained by ensuring that each statement appears in only one response document, if possible. Moreover, If a resource has references from multiple other resources, then it needs to be in its own response. Lastly, an efficient handling of inverse relationships is required as each connection should be encoded in a one-way direction, although Linked Art considers exceptions for performance and easy data access through a separate API for some cases. It focuses on representing 1-to-many relationships from the "many" side, defining deterministic and straightforward rules for data representation, and embedding resources when they have a 1:1 relationship with their parent to reduce the number of separately maintained resources. This requirement stipulates that resources not requiring separate dereferencing do not need their own URIs, and the flexibility of URI structure is maintained, allowing for a broad range of implementations without specific URI structure requirements for API endpoints. If there aren't any specific URI structure requirements, there are best practices for URIs documented within the Linked Art protocol19 with preferred endpoint paths. The top-level entity endpoints20 align mostly with the core classes of the Linked Art model. At the time of writing, there are eleven endpoints, loosely based on the conceptual model presented previously: \begin{tabular}{l l} Concepts & Types, Materials, Languages, and others, as full records rather than external references \\ Digital Objects & Images, services and other digital objects \\ Events & Events and other non-specific activities that are related but not part of other entities \\ Groups & Groups and Organisations \\ People & Individuals \\ Physical Objects & Physical things, including artworks, buildings or other architecture, books, parts of objects, and more \\ Places & Geographic places \\ Provenance Activities & The various events that take place during the history of a physical thing \\ Sets & Sets, including Collections and sets of objects used for exhibitions \\ Textual Works & Texts worthy of description as distinct entities, such as the content carried by a book or journal article \\ Visual Works & Image content worthy of description as distinct entities, such as the image shown by a painting or drawing \\ \end{tabular} #### Adoption of Linked Art Linked Art, being a relatively novel initiative in comparison to IIIF, has faced challenges in achieving the same level of widespread adoption. The lack of awareness and limited availability of tools and services has hindered broader engagement within the Linked Art community [9]. However, a pivotal moment is on the horizon for Linked Art. Yale's LUX stands out as a pioneering and substantial implementation, symbolising a turning point and serving as a catalyst for change. 
LUX, recognised as a flagship initiative, effectively showcases the substantial potential and transformative influence embedded in the Linked Art and IIIF specifications. The valuable insights and advancements brought forth by LUX hold the promise to reshape the prevailing perspectives within the community. In the subsequent section, a detailed exploration into the transformative impact of LUX ensues, shedding light on its potential to shape perceptions, and importantly, its role in potentially fostering increased adoption of portals implementing standards that adhere to the LOUD design principles. ## 1.4 LUX: Yale Collections Discovery LUX21 is an implementation of Linked Art and IIIF as a discovery and research platform for the combined collections of Yale University. This encompasses the Yale Center for British Art (YCBA), the Yale University Art Gallery (YUAG), the Yale Peabody Museum (YPM) and the Yale University Library (YUL). These collections encompass art, natural history, bibliographic and archival collections, and all of the related people, organisations, places, concepts and events, totalling some 41 million records at the time of writing. Footnote 21: [https://lux.collections.yale.edu/](https://lux.collections.yale.edu/) Beyond just using the Linked Art metadata application profile, the development of LUX also tried to apply the same principles to other decisions that were needed when mapping data and use cases from the systems of record into Linked Art, and which functionality was important to implement. The system consists of several interconnected components, namely data harvesting, data pipeline, back-end database, middle tier and front-end. These components have been integrated according to established standards including both IIIF and Linked Art, so that any individual component can be replaced without requiring a complete rewrite of the system. Figure 3.3: LUX Homepage #### Developing a LOUD-driven discovery platform The usability of the data was extremely important to the development process as it meant that a relatively junior front-end software engineer was able to build the application without significant assistance. The data format being easy to understand and work with meant she could dive in and get started and stakeholders could immediately see results. The consistency of the structure meant that components could be built that leveraged the repeated patterns and then could be reused whenever that pattern was encountered. By following the design principles adopted from IIIF, the implementation architecture meant that the resulting system is performant, scalable, relatively modular and easy to adopt and adapt. Discussions around data mapping decisions were easier given the design principles and specifications. Instead of the discussions being about competing viewpoints, which has often led to frustration and lack of engagement, instead they could be structured around cooperatively determining which possibilities best aligned with the principles, and which were outside them. Examples of requested modelling that was determined through this process to be outside of the usability guidelines, and therefore out of the scope of the work, was a desire to align the parallel structures of textual description and structured data around dimensions and materials of an object, the inclusion of meta-meta-data such as the provenance of where individual assertions came from in the merged records, and structured data around uncertainty of assertions. 
Cases that would have led to inconsistency and more exceptions in the mapping included that animals referenced as subjects or actors could be treated as people, and fossils in the natural history museum could be treated as human made objects. Without the structures to help focus the attention on usability rather than correctness and completeness, these situations all would have led to either long and fraught discussions or aggravated developers needing to deal with more and more complex data structures. This paradigm also helped with determining the correct approach for systems architecture and functionality. The hardest challenge of using a knowledge graph was the need to have a traditional records style interface with keyword search, facets and views of the individual entities. A triplestore or native graph based system does not easily enable any of these, and requires multiple systems to be used in conjunction, which increases the complexity of development and maintenance of the platform. Instead, after several months of research, a multi-modal platform was licensed which can treat the records as records, and extract the relevant parts of the graph and allow a single query to use the features of both worlds simultaneously. Again following principles such as "as simple as possible but no simpler", the graph parts of the queries were analysed against the search requirements and the resulting relationships simplified to only what was necessary. As the record maintains the full data, no information is lost; however, the performance and ease of development were increased by collapsing complex chains of relationships down to only one artificial predicate. For example, in order to capture the role of each artist in the production of an artwork, the object is produced by a Production event, which then has parts to represent the roles, and each part is carried out by a Person or Organisation. To simplify the common query of objects produced by a given artist, that was reduced to the equivalent of the Dublin Core relationship of creator in the graph. This pattern was then applied across all of the record types and requirements such that only relationships between records were materialised, resulting in a 40% reduction in the size of the data and a much more performant system. One of the principles of Linked Art is that each relationship should only be present in the dataset in one record, including inverse relations. For example, if there are two Place records, and one place is part of the other such as a county within a state, then the part-of relationship is only expressed in the county, and the state does not list all of the counties, cities or other localities which are part of it. This direction is intentional to keep the size of each record down, and relatively consistent. However, it leads to the inevitable and obvious question of how do you determine, in this case, the places which are part of the state? As the solution requires looking up many records, the implementation is a search on that property. To avoid technology dependence on a particular query language or search engine, the Linked Art API makes use of the Hypertext Application Language (HAL) specification [18] which allows the record to include links to search URIs and give each a name. The front end need only follow the link to receive a paginated list of all of the results, in the same format as for any other set of search results. 
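As a rough illustration of how little a consuming application needs to know, the sketch below follows such a named search link and pages through the results. It is illustrative only: the record URI, the link name and the paged-response field names are assumptions made for the example, not values taken from the Linked Art or LUX documentation.

```python
import requests

# Hypothetical record URI and link name, used only for illustration.
RECORD_URI = "https://example.org/data/place/example-state"
LINK_NAME = "la:placeParts"

def follow_search_link(record_uri: str, link_name: str):
    """Follow a named HAL search link in a record and yield the URIs of the results."""
    record = requests.get(record_uri, timeout=30).json()
    # HAL exposes named links under "_links", each carrying an "href" to dereference.
    link = record.get("_links", {}).get(link_name)
    if link is None:
        return  # this record does not advertise that search
    page_url = link["href"]
    while page_url:
        page = requests.get(page_url, timeout=30).json()
        # Field names below ("orderedItems", "next") are assumed for the sketch.
        for item in page.get("orderedItems", []):
            yield item["id"]
        nxt = page.get("next")
        page_url = nxt.get("id") if isinstance(nxt, dict) else nxt

for child_uri in follow_search_link(RECORD_URI, LINK_NAME):
    print(child_uri)
```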
This layer of indirection both avoids technology dependence and increases the usability for the front-end developer, who no longer needs to understand the data model and query language in order to retrieve the list of child places, but instead is provided a named link in the record to follow, and a standard response with all of the functionality needed to produce different user interfaces for different situations. Several other practical choices were facilitated through the LOUD principles and practices. In the LOD world, there is a fascination with federated queries - distributing the query among multiple, potentially heterogeneous systems, and then bringing the results back together before presenting them to the user. This paradigm is unreliable as the speed of the search is dependent on the speed of the slowest participating system, and if any system is offline for some reason, then the results will necessarily be incomplete. The alternative is to harvest all of the data from every participating system and combine it in advance into a single infrastructure. The trade-off is between the extent to which the results are out of date with the source system, and the speed at which searches can be accomplished by end users. Given the relatively infrequent change of the majority of the records, and that the users' information needs can likely still be satisfied by information that is a day out of date, the harvest approach was selected. The records are made available for synchronisation leveraging the IIIF Change Discovery API 1.0 [19], with the Linked Art data taking the place of the IIIF resources. This was significantly easier to implement by the participating libraries, archives and museums than every unit maintaining their own query endpoint. #### Automatically Enriched Cultural Heritage Data The LUX platform is distinguished by its extensive connections, not only within various Yale units but also outside, as it incorporates external data sources during data processing. These sources encompass a wide range of subject areas and perspectives. This enriches the data accessible to users by matching records within LUX. For instance, one of the key procedures employed to harmonise works and objects involves incorporating reconciled Wikidata records22, allowing for meaningful connections between items, for instance, in the YCBA collection and related works. Additionally, these sources incorporate additional names and terms from authority records and subject headings, such as those from the French National Library (BnF)23, the Library of Congress (LoC)24, or the German-speaking Integrated Authority File (GND)25. In addition, LUX integrates Wikimedia images that are in the public domain, as illustrated by Figure 1.426. Footnote 22: [https://www.wikidata.org](https://www.wikidata.org) Footnote 23: [https://data.bnf.fr](https://data.bnf.fr) Footnote 24: [https://id.loc.gov](https://id.loc.gov) Footnote 25: [https://gnd.network](https://gnd.network) Footnote 26: [https://upload.wikimedia.org/wikipedia/commons/9/9f/Joseph_Mallord_William_Turner_auto-retrato.jpg](https://upload.wikimedia.org/wikipedia/commons/9/9f/Joseph_Mallord_William_Turner_auto-retrato.jpg) This integration significantly enhances the record by combining knowledge from different Yale units, Getty vocabulary terms, national libraries and other external sources. This is an example of the positive impact and improvement that can be achieved by linking disparate data sources.
Figure 1.4: Joseph Mallord William Turner, 1775-1851. The portrait image of Turner is hosted on Wikimedia Commons.
The linking process is automated within the data processing code, using equivalent URIs and intelligent matching of names associated with people, places and things. However, matching and merging data into a single LUX record can be a complex task. Data quality is affected by human imperfections, as all data is derived from human input. Figure 6 depicts the overall data transformation, reconciliation, enrichment and publication workflow for LUX. The base records come from both internal and external sources, with internal records being harvested via the IIIF Change Discovery API, which is in turn an implementation of the W3C Activity Streams 2.0 specification [20]. The process (diamonds) named Harvest runs nightly, triggered by an operating system level scheduler to poll each stream to find and retrieve records that have changed since the previous harvest. For external datasets that do not have associated Activity Streams, these records are either retrieved _en masse_ via downloadable dump files, or as needed when another record refers to them. The initial state, and all subsequent states after transformations have occurred, are stored in the "Record Caches" store. All records are passed through source specific transformation routines (Transform) in order to either map from arbitrary data formats, or to validate and clean up records already provided in Linked Art. Once the information is available in a consistent format, the records are first sent to a reconciliation engine (Reconcile) to discover further identities from the various datasets to be able to collect all information about a particular entity eventually into a single record. Once the records are connected, they have their internal identifiers re-written to a central set of unique identifiers by means of "Identifier Map", a very fast in-memory database, that maps the original URIs to the internal identifiers (Re-Identify).
Figure 5: Yale and External Data Sources that have been used for the record about Joseph Mallord William Turner, 1775-1851. Linked Art representation of this record (JSON-LD): [https://lux.collections.yale.edu/data/person/f778f2f8-6b04-44af-8bef-cbfb8eccdc6f](https://lux.collections.yale.edu/data/person/f778f2f8-6b04-44af-8bef-cbfb8eccdc6f).
The result is a transformation of the records where the data remains the same, but the identifiers are now consistent. The records from multiple sources that have been mapped to the same identifier are then merged together (Merge) to form the single record for the entity. The resulting dataset is then annotated with some additional features for indexing and exported (Load) into the back-end query engine, a product called MarkLogic27, a licensed system by a company called Progress28. Footnote 27: [https://www.marklogic.com/](https://www.marklogic.com/) Footnote 28: [https://progress.com/](https://progress.com/) In order to interact with the data, a user connects to the LUX portal in their web browser and performs a search. That search is sent through a middle tier gateway that allows for seamless transition between MarkLogic installations (a process known as blue/green switching) and through an internal web cache built with Varnish to ensure that repeated queries are only evaluated once. 
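Looking back at the harvesting step that feeds this pipeline, polling an activity stream for changed records takes only a small amount of code. The sketch below is illustrative rather than LUX's actual implementation: the stream URL is invented, and de-duplication, retries and the Delete and Refresh activity types are omitted.

```python
import requests
from datetime import datetime, timezone

# Hypothetical stream URL; a real harvest would iterate over one stream per unit.
STREAM_URL = "https://example.yale.edu/activity-stream/ycba"
LAST_HARVEST = datetime(2023, 5, 1, tzinfo=timezone.utc)

def changed_record_uris(stream_url: str, since: datetime):
    """Walk an Activity Streams OrderedCollection backwards, newest first, and
    yield the URIs of records created or updated after the previous harvest."""
    collection = requests.get(stream_url, timeout=30).json()
    page_url = collection["last"]["id"]  # the most recent activities are on the last page
    while page_url:
        page = requests.get(page_url, timeout=30).json()
        for activity in reversed(page.get("orderedItems", [])):
            when = datetime.fromisoformat(activity["endTime"].replace("Z", "+00:00"))
            if when <= since:
                return  # everything from here on was seen in an earlier harvest
            if activity["type"] in ("Create", "Update"):
                yield activity["object"]["id"]  # record to fetch, transform and reconcile
        prev = page.get("prev")
        page_url = prev["id"] if prev else None

for uri in changed_record_uris(STREAM_URL, LAST_HARVEST):
    print("re-harvest", uri)
```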
Additional web caches are in place between the user and the LUX front end, including Cloudfront29, react cache and the browser's native web cache, to ensure performance is as fast as possible. Footnote 29: [https://aws.amazon.com/cloudfront/](https://aws.amazon.com/cloudfront/) After the public launch of LUX in May 2023, the focus for LUX involves developing various services. Among the requested services is direct access to the identifier map of equivalencies, as well as the associated indexes for reconciliation. Figure 1.6: LUX Data Pipeline and Architecture The open documentation of these services will be of significant value to individuals and institutions within the cultural heritage sector. By providing open access to these resources, LUX facilitates streamlined data reconciliation processes and the creation of meaningful connections between diverse cultural heritage datasets. This accessibility will enable a wider community to make effective use of LUX's capabilities, thereby promoting enriched data and interconnected cultural heritage resources across the sector. ## 0.5 Discussion This discussion is divided in two parts reflecting on the IIIF and Linked Art communities as well as the development of LUX. First, we discuss the dimension of community engagement required to create open and interoperable standards, emphasising the collective effort required to create specifications that can be seamlessly integrated into different systems. The effectiveness of the second dimension, which focuses on how LOUD standards can facilitate data enrichment, is greatly enhanced by the successful implementation of the first dimension. The collaborative approach to standards development lays the foundation for comprehensive data enrichment processes and highlights the need for standardised approaches to improve data enrichment in different domains. ### Community Engagement Fosters Open Standards and Interoperability The openness of a standard, while critical, is arguably not sufficient for widespread adoption. Achieving successful interoperability often depends on having a significant platform, either through commercial influence or community involvement as articulated by Nelson and Van de Sompel [21]: Because of the growing global adoption of open standards by GLAM institutions, especially IIIF specifications stand as a testimony that rich interoperability for distributed resource collections is effectively achievable. But other promising specifications that aim for the same holy grail are struggling for adoption, and, many times, lack of resources is mentioned as a reason. While that undoubtedly plays a role, it did not stand in the way of rapid adoption of protocols that have emerged from large corporations, such as the Google-dominated schema.org. This consideration re-emphasizes that a core ingredient of a successful interoperability specification, and hence of achieving an interoperable global information web, is a large megaphone, either in the guise of commercial power or active community engagement. For community-driven initiatives like IIIF and Linked Art, they require transparent practices that facilitate on-boarding of new members and governance, such as the establishment of a consortium, to steer the initial vision. However, it is essential to recognise that flexibility is equally paramount, particularly in the early stages of an initiative. 
Embracing adaptability allows these initiatives to respond to evolving needs and emerging insights from new members, ensuring that the initial vision remains dynamic and responsive to the changing landscape. Channelling the perspective of the World Wide Web Consortium (W3C) to accomplish a demonstration of (interoperable) implementations [22], here is how we define interoperability within a LOUD lens: Interoperability is a state in which two or more tested, independently developed technological systems can interact successfully according to their scope through the implementation of agreed-upon standards. This necessitates the development and availability of several compliant tools and software that adhere to these standards, forming an ecosystem of interoperable solutions. In striving for interoperability, it is essential to recognise the formation of sub-communities or satellite groups, often existing within or adjacent to communities. They play a critical role in the creation and maintenance of tools that align with the specified standards. Heavy reliance on a particular tool, if left without dedicated care and maintenance, can pose significant challenges. In such cases, communities need to rally and take collective action to ensure the tool's sustainability and continued functionality. These instances underscore the necessity for a shared commitment to support and collectively manage critical tools, demonstrating the communal responsibility and collaborative ethos that should define community-driven initiatives. On a different level, the success of collaboration for developing LUX can be attributed to a shared vision that recognises the value of highlighting the connections between diverse collections across different domains. This shared vision has enabled the resources of all participating units to contribute significantly over a number of years through active participation in committees, working groups and unit-level development efforts. #### LOUD Standards Facilitates Data Enrichment Interoperability and openness stand as essential cornerstones in the facilitation of robust data enrichment processes. The principle of interoperability engenders a collaborative environment wherein disparate data sources and formats converge to augment the quality and completeness of data. This convergence is pivotal in data enrichment, allowing for a seamless flow of data across a spectrum of tools and platforms. Concurrently, openness advocates for unhindered accessibility and availability of data, often epitomised by adherence to open standards. Through the lens of data enrichment, these principles synergistically operate to accommodate a diverse array of sources and perspectives, cultivating a more comprehensive and accurate enrichment process. The LOUD paradigm embodies the harmonisation of interoperability and openness within data enrichment efforts. By harmonising data enrichment with specifications compliant with the LOUD design principles, data connections are strengthened, ultimately improving both scholarly understanding and user experience by providing enriched, accessible, and contextually interwoven data. Many cultural heritage institutions have yet to embrace open APIs at scale, which hinders data accessibility and interoperability. Factors such as resource constraints, lack of awareness of the benefits, and complexity of implementation contribute to this slow adoption. 
Yet despite these challenges, LOUD specifications, such as Linked Art, offer a promising opportunity to address these issues and improve interoperability and data sharing in the cultural heritage domain. As such, Yale's LUX serves as an exemplary model of how combining these specifications, namely IIIF and Linked Art APIs, can provide pathways for robust data pipelines, data reconciliation, and subsequent data enrichment, thereby helping the cultural heritage field to progress. ## 0.6 Conclusion Both IIIF and Linked Art, supported by their dedicated and collaborative communities, are strongly committed to advancing the accessibility and interoperability of cultural heritage resources and their associated metadata. These communities actively contribute to the development and maintenance of shared APIs that are critical to promoting the seamless discovery and use of cultural heritage. LUX serves as a compelling case study for the use of linked data at scale, demonstrating the real-world application of automated enrichment in the cultural heritage sector. By leveraging linked data technologies, LUX enables users to seamlessly access and explore vast collections of cultural heritage data across Yale's museums, libraries and archives in a single environment. The platform demonstrates how automatically enriched data can improve accessibility, usability and interoperability, ultimately transforming the way users engage with and discover cultural heritage resources. Achieving semantic interoperability requires the establishment of sound LOUD-compliant ecosystems and workflows [23]. More specifically, the use of standards such as Linked Art is essential to enable effective data sharing across different domains. In addition, the use of IIIF APIs plays a key role in the seamless delivery and annotation of image-based resources. Importantly, it is possible for institutions of limited resources and size, not necessarily of the scale of larger institutions such as Yale, to achieve this type of interoperability. Collaboration and engagement with the wider IIIF and Linked Art communities becomes critical, providing vital support and expertise, particularly in the absence of human resources or skills. This combination of collaborative standards and real-world application underscores the potential and need for initiatives such as IIIF and Linked Art to drive transformative progress in the cultural heritage sector and beyond. Acknowledgements We want to express our deep appreciation to the dedicated contributors within the IIIF and Linked Art communities, who have served as a continual source of inspiration for our work. We also extend our thanks to our colleagues at the University of Basel and Yale University for their unwavering support and expertise.
2309.09715
LST-1 observations of an enormous flare of BL Lacertae in 2021
The first prototype of LST (LST-1) for the Cherenkov Telescope Array has been in commissioning phase since 2018 and already started scientific observations with the low energy threshold around a few tens of GeV. In 2021, LST-1 observed BL Lac following the alerts based on multi-wavelength observations and detected prominent gamma-ray flares. In addition to the daily flux variability, LST-1 also detected sub-hour-scale intra-night variability reaching 3-4 times higher than the gamma-ray flux from the Crab Nebula above 100 GeV. In this proceeding, we will report the analysis results of LST-1 observations of BL Lac in 2021, especially focusing on flux variability.
Seiya Nozaki, Katsuaki Asano, Juan Escudero, Gabriel Emery, Chaitanya Priyadarshi
2023-09-18T12:30:02Z
http://arxiv.org/abs/2309.09715v1
# LST-1 observations of an enormous flare of BL Lacertae in 2021 ###### Abstract: The first prototype of LST (LST-1) for the Cherenkov Telescope Array has been in commissioning phase since 2018 and already started scientific observations with the low energy threshold around a few tens of GeV. In 2021, LST-1 observed BL Lac following the alerts based on multi-wavelength observations and detected prominent gamma-ray flares. In addition to the daily flux variability, LST-1 also detected sub-hour-scale intra-night variability reaching 3-4 times higher than the gamma-ray flux from the Crab Nebula above 100 GeV. In this proceeding, we will report the analysis results of LST-1 observations of BL Lac in 2021, especially focusing on flux variability. Introduction Blazars are a type of active galactic nuclei (AGN) characterized by the presence of collimated relativistic plasma jets oriented toward the Earth. The emission from blazars is characterized by a highly variable non-thermal electromagnetic spectrum from radio to very-high-energy gamma rays (VHE; E\(\succ\sim 20\) GeV). The broadband SED has a two-hump structure. The lower-energy hump has a peak located from the radio to X-ray band and it is explained by the synchrotron emission from the accelerated leptons in the jet. On the other hand, the origin of the higher-energy hump located at gamma-ray bands is still under debate. Possible scenarios of the origin are inverse Compton scattering on low-energy photons emitted by synchrotron radiation and/or external photons (e.g. broad line region, dust torus). Blazars with no or faint optical emission lines are classified as "BL Lac" type object. In addition, it is sub-classified based on the peak frequency of the synchrotron peak (\(v_{s}\)). BL Lacertae (hereafter BL Lac) is a well-studied blazar located at redshift \(z=0.069\)[1]. BL Lac is eponymous of the intermediate-synchrotron-peak "BL Lac" type object (\(10^{14}\,\mathrm{Hz}<v_{\mathrm{s}}<10^{15}\,\mathrm{Hz}\)) [2]. It is well known for the flux variability in various energy bands as described in [3]. In the VHE gamma-ray band, BL Lac is only detected during the flaring state so far. After the first detection in VHE gamma-ray band (above 1 TeV) in 1998 [4], MAGIC and VERITAS detected multiple flares of BL Lac with various flux levels [5, 6, 7, 8]. Some observations also detected intra-night flux variability. As an example, VERITAS detected a decay time of \(13\pm 4\) min in 2011 [6] and a rise and decay time of 2.3 hours and 36 minutes in 2016, respectively [8]. Since 2019, BL Lac was relatively active in the gamma-ray band and VHE gamma-ray flares were detected by MAGIC several times [9, 10, 11]. Cherenkov Telescope Array1 (CTA) will be a next-generation very-high-energy gamma-ray observatory. Three different sizes of telescopes are planned to be built to cover a wide energy range with an order of magnitude better sensitivity than the current generation of Cherenkov telescopes. The Large-Sized Telescope2, with a 23-m diameter mirror dish, is designed to detect (relatively) low-energy signals, upwards from a few tens of GeV. The first prototype of LST (LST-1) was inaugurated at the CTA northern site (La Palma, Spain) in October 2018 and it has been in the commissioning phase. In parallel with the commissioning tests, LST-1 already started to observe gamma-ray objects for scientific purposes. 
Footnote 1: [https://www.cta-observatory.org/](https://www.cta-observatory.org/) Footnote 2: [https://www.lstl.iac.es/](https://www.lstl.iac.es/) In this contribution, we present the analysis results of the LST-1 observations of an enormous flare of BL Lac in 2021. ## 2 LST observations and analysis LST-1 started a campaign of BL Lac observations in July 2021 following the detection of the highest flux ever observed in the optical band [12]. On July 11 (MJD 59406), LST-1 detected VHE gamma-ray signals from BL Lac despite under bad weather conditions [13]. In August 2021, BL Lac was still active and VHE gamma-ray signals were also detected by MAGIC telescopes [14]. LST-1 continued its observation campaign until the mid of August 2021. The LST observations of BL Lac were performed during moonless time. We limited our observations to time windows with the source located at less than \(\sim 50\) degrees away from the zenith. It allows to take advantage of the LST-1 low-energy performance since low-energy photons are more absorbed by the thicker atmosphere crossed at lower altitudes. Each observation run takes 15-20 minutes. The total observation durations were 4.9 and 12.6 hours in July and August, respectively. We selected good-quality data based on the camera-averaged rate of pixel pulses with charge above 30 p.e as performed in [15] representing the quality of the atmospheric conditions. After the data selection, most of the LST-1 data taken in July 2021 were not selected. Thus, we only use the August datasets in this contribution and the duration of the selected August dataset is 9.8 hours. The observation of each night is summarized in Table 1. The selected data were processed using the standard pipeline cta-lstchain3[16, 17]. The detail of the analysis procedures is described in [15]. In this contribution, we performed the analysis using a likelihood technique4, of which an earlier version is covered in [18], to parameterize the data instead of the Hillas' parameters extraction used in [15]. The method performs a fit of a space-time signal model at the waveform level. The choice was also made to perform a so called source-dependent analysis, with the assumption of the knowledge of the source position being used in the event reconstruction. For the gamma-like event selection, we use \(alpha\) (angle between shower axis and the line between the known source position and the image centroid) and \(gammaness\) (the score indicating how likely it is that the primary particle is a gamma ray) obtained by machine learning. In addition, we apply an event cut of \(intensity\) (the sum of the charges of the pixels which survive the image cleaning) above 50 photo-electrons to ensure the data quality. The instrumental response function (IRF) of Cherenkov telescopes depends on the pointing direction of the telescope. Thus, we compute the IRF by the interpolation of ones obtained with the simulation data at different pointing directions. The high level analysis was performed using gammapy5[19, 20] to find the best spectral model using likelihood ratio test and compute the integrated gamma-ray flux. 
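As a rough illustration of the event selection and ON/OFF significance estimate used in this kind of single-telescope analysis, the sketch below applies the cuts quoted above and evaluates Eq. 17 of Li & Ma (1983). File and column names are placeholders rather than the actual cta-lstchain data products, and the cut values are only those quoted in the text.

```python
import numpy as np
import pandas as pd

def select_gamma_like(events: pd.DataFrame) -> pd.DataFrame:
    """Keep events passing the intensity, gammaness and alpha cuts quoted in the text."""
    return events[(events["intensity"] > 50)
                  & (events["gammaness"] > 0.9)
                  & (events["alpha"] < 10.0)]

def li_ma_significance(n_on: float, n_off: float, exposure_ratio: float = 1.0) -> float:
    """Detection significance from ON/OFF counts, Eq. 17 of Li & Ma (1983)."""
    a = exposure_ratio
    term_on = n_on * np.log((1 + a) / a * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + a) * n_off / (n_on + n_off))
    return float(np.sqrt(2.0 * (term_on + term_off)))

# Placeholder file names standing in for the reconstructed ON- and OFF-region event lists.
on_events = select_gamma_like(pd.read_hdf("on_region_events.h5"))
off_events = select_gamma_like(pd.read_hdf("off_region_events.h5"))
print(f"significance: {li_ma_significance(len(on_events), len(off_events)):.1f} sigma")
```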
\begin{table} \begin{tabular}{c c c c} \hline \hline Date & MJD & Observation time & \(Zd\) range \\ & & [hours] & [degrees] \\ \hline Aug 3 & 59428.95–59429.05 & 1.76 & 20–45 \\ Aug 4 & 59429.94–59430.13 & 1.92 & 14–49 \\ Aug 5 & 59431.10–59431.13 & 0.50 & 14–16 \\ Aug 6 & 59431.92–59431.95 & 0.55 & 44–51 \\ Aug 8 & 59434.20–59434.22 & 0.44 & 34–41 \\ Aug 9 & 59434.99–59435.09 & 1.93 & 14–32 \\ Aug 10 & 59436.04–59436.10 & 1.34 & 13–19 \\ Aug 12 & 59438.12–59438.17 & 0.92 & 19–30 \\ Aug 13 & 59439.03–59439.05 & 0.45 & 15–20 \\ \hline \end{tabular} \end{table} Table 1: LST observation conditions. ## 3 Multi-wavelength observations and analysis ### Fermi-LAT The Large Area Telescope (LAT) on board the \(Fermi\) Gamma-ray Space Telescope is a wide field-of-view pair-conversion telescope covering the energy range from below 20 MeV to more than 300 GeV [21]. It has a wide field of view so that the entire sky is scanned every three hours for the standard survey mode. We analyzed \(Fermi\)-LAT data between July 29, 2021 (MJD=59424) and August 15, 2021 (MJD=59441) covering the LST-1 observation periods in August 2021. We performed the binned likelihood analysis using fermipy6. The analysis settings follow the recommendation for the Pass 8 data analysis7. We analyzed the data in 24-hours and 12-hours bins. Footnote 6: [https://github.com/fermiPy/fermipy](https://github.com/fermiPy/fermipy) Footnote 7: [https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Pass8_usage.html](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Pass8_usage.html) ### Swift-UVOT, XRT The Neil Gehrels \(Swift\) Observatory is a multiwavelength mission for Gamma-Ray Burst astronomy and has been operational since 2004 [22]. \(Swift\) carried out 13 observations of BL Lac around the LST-1 observation campaign in August 2021. In this proceeding, we used the data obtained by the X-ray Telescope (XRT) [23] (0.2-10.0 keV) and the Ultraviolet/Optical Telescope (UVOT) [24] (170-600 nm) onboard the \(Swift\) satellite. XRT data were processed using an online XRT product generator8 and UVOT data were analyzed using the python wrapper tool including the official UVOT analysis pipeline90 Footnote 8: [https://www.swift.ac.uk/analysis/uvot/](https://www.swift.ac.uk/analysis/uvot/) Footnote 9: [https://github.com/Karlen5/swift-uvot-analysis-tools](https://github.com/Karlen5/swift-uvot-analysis-tools) ## 4 Results ### VHE gamma-ray signal detection Fig. 1 shows the distribution of \(alpha\) from the expected source position (ON) and another position without gamma-ray sources (OFF) on Aug 9. The VHE gamma-ray signals are clearly detected with a significance of 43.4\(\sigma\). The background level at a larger \(alpha\) region is consistent between ON and OFF events. Even though the difference in the background level is small, it can significantly affect the results at the lowest energy range since the background rejection power of the single telescope data analysis is worse than the stereo data analysis below \(\sim 100\) GeV as seen in Fig. 16 of [15]. For this flare, the signal-to-noise ratio is high even for the low-energy events so that the results are less affected by the background normalization factors than other gamma-ray sources. ### VHE gamma-ray light curve Fig. 2 shows the intra-night light curve above 100 GeV on Aug 9. The flux level was variable during this night and reached 3-4 times higher than that of the Crab Nebula (Crab Unit; C.U.) at maximum. 
Two peaks can be seen in the intra-night light curve, both with a rise and decay time scale around 10-20 minutes. Figure 1: Distributions of \(alpha\) of the LST observation on Aug 9. Blue and orange histograms correspond to the distribution of ON and OFF events, respectively. Each error bar shows the statistical uncertainty. Here, we apply event cuts of \(intensity>50\) p.e. and \(gammaness>0.9\). To compute the statistics in the figure, a cut of \(alpha<10\) degrees (red-dashed line) is also applied. Figure 2: Intra-night light curve (>100 GeV) observed by the LST-1 on Aug 9. Blue and orange points correspond to run-wise and 5-min duration. The error bars on the flux level show statistical uncertainties. The gray line shows the integral flux of the Crab Nebula obtained by the MAGIC [25] as a reference. ### Multi-wavelength light curve Fig. 3 shows the multi-wavelength light curve around the LST-1 observation period. In the night-wise LST light curve, we can see the flux variation (<0.1-2 C.U.) on a long time scale with large intra-night variabilities. In the \(Fermi\)-LAT light curve, multiple peaks were detected during this period and the highest flux was observed around Aug 4 when LST-1 also detected high flux (\(\sim\) 3 C.U.) in the run-wise light curve. Around this peak, Swift-XRT also detected the highest count rates reaching around three times higher than the lowest rates during this period. After all of the bands showed an increase in the flux again around Aug 6-7, it gradually decreased except for the LST-1 data points observed on Aug 9. Figure 3: Multi-wavelength light curves of the BL Lacertae between MJD=59424 and 59441 (VHE gamma-ray: LST-1 (>100 GeV), high-energy gamma-ray: Fermi-LAT (> 100 MeV), X-ray: Swift-XRT (0.3–10 keV), Ultraviolet and Optical: Swift-UVOT). For LST-1, night-wise (blue) and run-wise (orange) light curves are shown. Gray dashed line shows the integral flux of the Crab Nebula [25] as shown in Fig. 2. For Fermi-LAT, 1-day (blue) and 12-day (orange) light curves are shown. ## 5 Summary LST-1 performed the observation of BL Lac flare in 2021. During the observation campaign, VHE gamma-ray signals were detected on most nights and the detection significance was 43.4\(\sigma\) on Aug 9 with the high signal-to-noise ratio. We have observed the intra-night variability with the flux level of 3-4 C.U. at the peak. This variability had a sub-hour scale double-peak structure. The LST-1 night-wise light curve also shows the daily-scale flux variability (<0.1-2 C.U.). \(Fermi\)-LAT and Swift-XRT show the highest flux around Aug 4 when LST-1 also detected a high flux of \(\sim 3\) C.U. in the run-wise light curve. The detailed discussion and interpretation will be described in a future paper. 
## Acknowledgments We gratefully acknowledge financial support from the following agencies and organisations: Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), Fundacao de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ), Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP), Fundacao de Apoio a Ciencia, Tecnologia e Inovacao do Paran - Fundacao Araucaria, Ministry of Science, Technology, Innovations and Communications (MCTIC), Brasil; Ministry of Education and Science, National RI Roadmap Project DO1-153/28.08.2018, Bulgaria; Croatian Science Foundation, Rudjer Boskovic Institute, University of Osijek, University of Rijeka, University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Zagreb, Faculty of Electrical Engineering and Computing, Croatia; Ministry of Education, Youth and Sports, MEYS LM2015046, LM2018105, LTT17006, EU/MEYS CZ02.02.1.01/0.0/0.0/16_013/0001403, CZ.02.1.01/0.0/0.0/18_0460016007 and CZ.02.1.01/0.0/0.0/16_019/0000754, Czech Republic; CNRS-IN2P3, the French Programme d'investigsements d'avenir and the Enigmass Labex, This work has been done thanks to the facilities offered by the Univ. Savoie Mont Blanc - CNRS/IN2P3 MUST computing center, France; Max Planck Society, German Bundesministerium fur Bildung und Forschung (Verbundforschung / ErtUM), Deutsche Forschungsgemeinschaft (SFBs 876 and 1491), Germany; Istituto Nazionale di Astrofisica (INAF), Istituto Nazionale di Fisica Nucleare (INFN), Italian Ministry for University and Research (MUR); ICRR, University of Tokyo, JSPS, MEXT, Japan; JST SPRING - JPMJSP2108; Narodowe Centrum Nauki, grant number 2019/34/E/ST9/00224, Poland; The Spanish groups acknowledge the Spanish Ministry of Science and Innovation and the Spanish Research State Agency (AEI) through the government budget lines PGE2021/28.06.000X.411.01, PGE2022/28.06.000X.411.01 and PGE2022/28.06.000X.711.04, and grants PID20202-139117NB-C44, PID2019-104114RB-C31, PID2019-107847RB-C44, PID2019-104114RB-C32, PID2019-105510GB-C31, PID2019-104114RB-C33, PID2019-107847RB-C41, PID2019-107847RB-C43, PID2019-107847RB-C42, PID2019-107988GB-C22, PID2021-1245810B-100, PID2021-125331NB-100; the "Centro de Excelencia Severo Ochoa" program through grants no. CEX2019-000920-S, CEX2020-001007-S, CEX2021-001131-S; the "Unidad de Excelencia Maria de Maeztu" program through grants no. CEX2019-000918-M, CEX2020-001058-M; the "Ramon y Cajal" program through grants RYC2021-032552-I, RYC2021-032991-I, RYC2020-028639-1 and RYC-2017-22665; the "Juan de la Cierva-Incorporacion" program through grants no. IJC2018-037195-I, IJC2019-040315-I. They also acknowledge the "Atraccion de Talento" program of Comunidad de Madrid through grant no. 2019-T2/TIC-12900; the project "Tecnologis avanzadas para la explorci o del universo y sus componentes" (PR47/21 TAU), funded by Comunidad de Madrid, by the Recovery, Transformation and Resilience Plan from the Spanish State, and by NextGenerationEU from the European Union through the Recovery and Resilience Facility; the La Caixa Banking Foundation, grant no. LCF/BQ/PI21/11830030; the "Programa Operativo" FEDER 2014-2020, Consejeria de Economia y Conocimiento de la Junta de Andalucia (Ref. 1257737), PAIDI 2020 (Ref. P18-FR-1580) and Universidad de Jaen; "Programa Operativo de Crecimiento Inteligente" FEDER 2014-2020 (Ref. 
ESFRI-2017-IAC-12), Ministerio de Ciencia e Innovacion, 15% co-financed by Consejeria de Economia, Industria, Comercio y Conocimiento del Goibierno de Canarias; the "CERCA" program and the grant 2021SGR00426, both funded by the Generalitat de Catalunya; and the European Union's "Horizon 2020" GA:824064 and NextGenerationEU (PRTR-C17.11). State Secretariat for Education, Research and Innovation (SERI) and Swiss National Science Foundation (SNSF), Switzerland; The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreements No 262053 and No 317446; This project is receiving funding from the European Union's Horizon 2020 research and innovation programs under agreement No 676134; ESCAPE - The European Science Cluster of Astronomy & Particle Physics ESFRI Research Infrastructures has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement no. 824064.
2305.00382
Constructing a Knowledge Graph from Textual Descriptions of Software Vulnerabilities in the National Vulnerability Database
Knowledge graphs have shown promise for several cybersecurity tasks, such as vulnerability assessment and threat analysis. In this work, we present a new method for constructing a vulnerability knowledge graph from information in the National Vulnerability Database (NVD). Our approach combines named entity recognition (NER), relation extraction (RE), and entity prediction using a combination of neural models, heuristic rules, and knowledge graph embeddings. We demonstrate how our method helps to fix missing entities in knowledge graphs used for cybersecurity and evaluate the performance.
Anders Mølmen Høst, Pierre Lison, Leon Moonen
2023-04-30T04:23:40Z
http://arxiv.org/abs/2305.00382v2
# Constructing a Knowledge Graph from Textual Descriptions of ###### Abstract Knowledge graphs have shown promise for several cybersecurity tasks, such as vulnerability assessment and threat analysis. In this work, we present a new method for constructing a vulnerability knowledge graph from information in the National Vulnerability Database (NVD). Our approach combines named entity recognition (NER), relation extraction (RE), and entity prediction using a combination of neural models, heuristic rules, and knowledge graph embeddings. We demonstrate how our method helps to fix missing entities in knowledge graphs used for cybersecurity and evaluate the performance. ## 1 Introduction An increasing number of services are moving to digital platforms. The software used on these digital platforms is, unfortunately, not without flaws. Some of these flaws can be categorized as security vulnerabilities that an attacker can exploit, potentially leading to financial damage or loss of sensitive data for the affected victims. The National Vulnerability Database (NVD)1 is a database of known vulnerabilities which, as of January 2023, contains more than 200 000 vulnerability records. The Common Vulnerability and Exposures (CVE) program2 catalogs publicly disclosed vulnerabilities with an ID number, vulnerability description, and links to advisories. NVD fetches the data from CVE and provides additional metadata such as weakness type (CWE) and products (CPE). CWEs are classes of vulnerabilities (CVEs), for example, CWE-862: Missing Authorization contains all CVEs related to users accessing resources without proper authorization. A CPE is a URI string specifying the product and its version, for example, _cpe:2.3:a:limesurvey:limesurvey:5.4.15_ is the CPE for the survey app Limesurvey with version 5.4.15. Keeping the information in the database up to date is important to patch vulnerabilities in a timely manner. Unfortunately, patching becomes increasingly difficult as the yearly number of published vulnerabilities increases.3 Footnote 1: [https://nvd.nist.gov/general/nvd-dashboard](https://nvd.nist.gov/general/nvd-dashboard) To automatically extract relevant information from vulnerability descriptions, named entity recognition (NER) and relation extraction (RE) can be applied as shown in Fig. 1. The extracted information can be stored as triples in a knowledge graph (KG). As the extracted triples might be incorrect or missing, knowledge graph embeddings (KGE) can be used to learn the latent structures of the graph and predict missing entities or relations. The work described in this paper is based on the master thesis by the first author. We investigate how NLP and KGs can be applied to vulnerability records to predict missing software entities. More specifically, we address the following research question: _RQ: Can our knowledge graph predict vulnerability weakness types and vulnerable products?_ The contributions of this paper include: (1) An approach for extracting and assessing vulnerability data from NVD; (2) A vulnerability ontology for knowledge graph construction; (3) A rule-based relation extraction model. ## 2 Related Work We distinguish the ensuing areas of related work: **Labeling:** Labeled data may not always be available to train supervised learning models for tasks including NER and RE. To address this problem, distant supervision aims at proposing a set of labeling functions for the automatic labeling of data. Bridges et al. (2014) applied distant supervision using a cybersecurity corpus. 
Their ap proach includes database matching using the CPE vector, regular expressions to identify common phrases related to versioning, for example, "before 2.5", and _gazetteers_, which are dictionaries of vulnerability-relevant terms, such as "execute arbitrary code". After manual validation of the labeled entities, Bridges et al. (2014) report a precision of 0.99 and a recall of 0.78. **Named Entity Recognition:** Training NER models on labeled data are useful as distant supervision depends on assumptions about the input data, which does not always hold. For example, in the case of NVD, if the new data is missing CPE information. Machine learning models are not dependent on such metadata, and, as a consequence can generalize better to new situations. Bridges et al. (2014) propose NER based on the Averaged Perceptron (AP). The conventional perceptron updates its weights for every prediction, which can over-weight the final example. The averaged perception keeps a running weighted sum of the obtained feature weights through all training examples and iterations. The final weights are obtained by dividing the weighted sum by the number of iterations. Gasmi et al. (2019) propose another NER model based on a long short-term memory (LSTM) architecture. The authors argue that it can be more useful when the data set has more variation, as the LSTM model does not require time-consuming feature engineering. However, their results show it is not able to reach the same level of performance as Bridges et al. (2014). SecBERT4 is a pre-trained encoder trained on a large corpus of cybersecurity texts. It is based on the BERT architecture (Devlin et al., 2019) and uses a vocabulary specialized for cybersecurity. SecBERT can be fine-tuned for specific tasks such as NER. Footnote 4: [https://github.com/jackaduma/SecBERT](https://github.com/jackaduma/SecBERT) Another pre-trained encoder similar to SecBERT is SecureBERT, proposed by Aghaei et al. (2022). SecureBERT leverages a customized tokenizer and an approach to alter pre-trained weights. By altering pre-trained weights, SecureBERT aims to increase understanding of cyber security texts while reducing the emphasis on general English. **Relation Extraction:** Relations between named entities can be discovered with RE. Gasmi et al. (2019) propose three RE models for vulnerability descriptions from NVD based on LSTMs. Their best-performing model achieves a precision score of 0.92. For labeling the relations, Gasmi et al. (2019), applies distant supervision (Jones et al., 2015). Gasmi et al. (2019) does not manually evaluate their labels before using them in the LSTM models; however, the approach is based on Jones et al. (2015), which indicates 0.82 in precision score after manual validation. Both NER and RE are important components for constructing knowledge graphs from textual descriptions. We explore several knowledge graphs related to cybersecurity in the next section. **Knowledge Graphs in Cybersecurity:** CTI-KG proposed by Rastogi et al. (2023), is a cybersecurity knowledge graph for Cyber Threat Intelligence (CTI). CTI-KG is constructed primarily from threat reports provided by security organizations, describing how threat actors operate, who they target, and the tools they apply. Rastogi et al. (2023) manually labels a data set of approximately 3000 triples with named entities and relations. This labeled data is then used for training models for NER and RE for constructing the KG. 
CTI-KG also uses KGE to learn latent structures of the graph and predict incomplete information. Here, Rastogi et al. (2023) applies TuckER, a tensor decomposition approach proposed by Balazevic et al. (2019), which can be employed for knowledge graph completion. TuckER can represent all relationship types (Balazevic et al., 2019), as opposed to earlier models. For example, TransE proposed by Bordes et al. (2013) has issues modeling 1-to-\(n\), \(n\)-to-1, and \(n\)-to-\(n\) relations (Lin et al., 2015). An example of a 1-to-\(n\) relationship in a cybersecurity context is the relationship between Figure 1: Example of a CVE with labels CVEs and CPEs. Whereas a CVE can have multiple CPEs, a CPE can only have one CVE. As CTI-KG focuses on threats, another KG, VulKG (Qin and Chow, 2019), is constructed from vulnerability descriptions from NVD. VulKG consists of three components, a vulnerability ontology, NER for extracting entities from the vulnerability descriptions, and reasoning for discovering new weakness (CWE) chains. After extracting entities, relations between these can be found using the VulKG ontology (Qin and Chow, 2019). The final step of the framework presented by Qin and Chow (2019) is the reasoning component which is based on chain confidence for finding hidden relations in the graph. Similarly to VulKG, we construct our KG from vulnerability descriptions in NVD. However, VulKG depends on training NER models from scratch, while we instead depend on a pre-trained model fine-tuned to our data. Contrary to training the model from scratch, the pre-training approach utilizes an existing model already trained on a large dataset. Consequently, fine-tuned models can learn patterns in the new data set more quickly. ## 3 Methods Our approach is shown in Fig. 2 and gives an overview of the construction of the vulnerability knowledge graph from CVE records. We discuss the different steps below. For replication, we share details about the hyperparameter tuning of various models in the appendices. **Data:** Our dataset is downloaded in JSON format from NVD, and the pipeline consists of multiple steps before predicting missing or incorrect labels as the final step. The data set consists of all CVE records from 2003 to 2022, which contains approximately 175 000 CVEs. The CVE records are labeled using the distant supervision approach proposed by Bridges et al. (2014). **Named Entity Recognition:** We train two architectures: Averaged Perceptron and SecBERT. _Averaged Perceptron (AP):_ AP is a feature-engineered model, and we use the same features as Bridges et al. (2014) Due to computational constraints in the AP model, we restricted our training data to 4000 CVEs. We first replicate their approach and separately trained and evaluated two AP models, one for IOB-labeling and one for domain-labeling, using the distant supervision-generated labels. In practice, when a new CVE is published, we only have access to the textual description. Since the IOB labels are input features to the domain model, those must be predicted first. Thus, in our second experiment, we again train two AP models, but use the predicted IOB labels as input to the domain labeling, instead of the generated labels. _SecBERT:_ In addition to AP, we use the pre-trained SecBERT model for NER. A significant difference from AP is that SecBERT jointly extracts IOB and domain labels. Moreover, as SecBERT is significantly faster than AP, there is no need to restrict the dataset. 
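As a concrete illustration of the fine-tuning step described above, the sketch below shows how a pre-trained encoder such as SecBERT can be adapted for token classification over CVE descriptions. The checkpoint identifier, the composite IOB+domain label names, and the hyperparameters are illustrative assumptions rather than the exact configuration used in this work.

```python
# Sketch: fine-tuning a pre-trained encoder for NER on CVE descriptions.
# Checkpoint id, label names, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-vendor", "I-vendor", "B-application", "I-application",
          "B-version", "I-version", "B-relevant_term", "I-relevant_term"]
tokenizer = AutoTokenizer.from_pretrained("jackaduma/SecBERT")
model = AutoModelForTokenClassification.from_pretrained(
    "jackaduma/SecBERT", num_labels=len(labels))

def encode(words, tags):
    """Tokenize a pre-split description and align word-level tags to sub-words."""
    enc = tokenizer(words, is_split_into_words=True, truncation=True,
                    return_tensors="pt")
    aligned, prev = [], None
    for wid in enc.word_ids(0):
        # Label only the first sub-token of each word; mask the rest with -100.
        aligned.append(-100 if wid is None or wid == prev
                       else labels.index(tags[wid]))
        prev = wid
    enc["labels"] = torch.tensor([aligned])
    return enc

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = encode(["Cross-site", "scripting", "in", "limesurvey", "5.4.15"],
               ["B-relevant_term", "I-relevant_term", "O",
                "B-application", "B-version"])
loss = model(**batch).loss  # token-level cross-entropy
loss.backward()
optimizer.step()
```

Because each tag string already combines the IOB prefix with the domain label, a single classification head performs the joint extraction, which is the main structural difference from the two-stage AP pipeline.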
We split our data into 60/20/20 for training, evaluation, and testing. **Relation Extraction:** For relation extracting, we use an _ontology_ illustrated in Fig. 3, to guide their creation: When two entities of type \(A\) and \(B\) are detected in a CVE, a relation between the two is created if the ontology has an edge between types \(A\) and \(B\). Note that entities are connected to their corresponding CVE-ID and CWE-ID, and we concatenate multi-word entities based on their IOB labels. The vulnerability descriptions are generally written so that vendors are followed by their products which are then followed by their versions. Thus, we can derive relations between vendor, product, and version by looking at the word order. We also make relations from relevant terms to the corresponding CVE ID entity, and through the CVE-ID the relevant terms are connected to the corresponding vendors, products, and versions. **Entity Prediction:** To answer the RQ, our KG should predict weakness types (CWEs) and products (CPEs). Given a head entity and a relation as input, the task of entity prediction is to find the tail entity, which is the final step of our KG. \(Hits@n\) and _mean reciprocal rank_ (MRR) are standard metrics used for entity prediction. For each input example, the embedding algorithm assigns a confidence score to all possible triples. These triples are then ranked by confidence scores, where the triple with the highest confidence is the most plausible to be true according to the model. The \(Hits@n\) metric measures the number of times the true triple is ranked among the top \(n\) triples. We use the processed triples from the RE model as input to our entity prediction model, where TuckER is the chosen architecture. The triples from our RE model are considered ground truth. TuckER removes the tail entities from the ground truth before predicting these based on entity and relation embeddings. We perform data augmentation by reversing all the relational triples. The data set is split in 80/10/10 percent for training, validation, and testing. We select the best model by refining the four combinations proposed by Balazevic et al. (2019) with an additional grid search. ## 4 Results and discussion Our empirical evaluation uses the CVE dataset discussed in Section 3. For replication, the parameters of the best-performing models are in the appendices. **NER:** NER results are presented in Table 1. We see that SecBERT outperforms AP on all metrics. We compare our reproduction results with the results reported by Bridges et al. (2014) in Table. 2. Where Table 1 shows the performance with all labels in place, individual IOB and domain labeling performance are reported in Table 2. The AP model was based on Bridges et al. (2014), which implemented their experiments in OpenNLP and Python. We reused their Python code for our reproduction. Note that the results on our data are below the reports by Bridges et al.. The authors indicated that they experienced slightly better performance using OpenNLP, which _could_ be the reason for the difference in score. Unfortunately, they do not provide any explanation of this difference or why it occurs. Contrary to Bridges et al. (2014), we are not interested in the performance of IOB and domain labeling measured individually. In our approach, the NER model should be used to extract entities from new data that can form triples in our KG. When a new CVE is published, we can access the textual description without any labels. 
Using Bridges' approach, we first need to use the IOB model, and then the predicted IOB labels can be used as input features to the domain model responsible for the final prediction. To the best of our knowledge, we can not analytically combine the IOB model and domain model results reported by Bridges et al.. As such, we rely on our own experimental results, which show that the performance of the fine-tuned SecBERT model outperforms the AP model. **Relation Extraction:** We did not have any ground truth data when evaluating our RE approach, as a consequence, we manually validated \begin{table} \begin{tabular}{l c c c} \hline \hline NER Model & Precision & Recall & \(F_{1}\) \\ \hline Averaged perceptron & 0.925 & 0.84 & 0.88 \\ Fine-tuned SecBERT & 0.93 & 0.93 & 0.93 \\ \hline \hline \end{tabular} \end{table} Table 1: NER evaluation results for the averaged perception and the fine-tuned SecBERT model. \begin{table} \begin{tabular}{l l l c c} \hline \hline Author & Labeling & Precision & Recall & \(F_{1}\) \\ \hline Host et al. & IOB & 0.93 & 0.93 & 0.93 \\ & Domain & 0.94 & 0.94 & 0.94 \\ Bridges & IOB & 0.97 & 0.97 & 0.96 \\ & Domain & 0.99 & 0.99 & 0.99 \\ \hline \hline \end{tabular} \end{table} Table 2: Our reproduction results compared to those reported by Bridges et al. (2014) Figure 3: Ontology for relation extraction. The edges should be interpreted as, for example, “a vendor _has_ a product”, “a product _has_ a version”, “a CVE vulnerability _has_ a CWE type” Figure 2: The figure illustrates the steps in our approach. We start by downloading our data from NVD, pre-processing the data, and adding labels to the entities. With our labeled data, we perform NER and RE to construct the KG. Because missing entities might occur in the KG, we predict these in the last step. a sample of 100 extracted triples. From this sample, we measured a precision score of 0.77. While Jones et al. (2015) has proposed a semi-supervised approach for labeling relations, they focus on a broader data set than we do. We, therefore, choose to identify relations based on our proposed ontology in Fig. 3. Our RE approach could not reach the level of Jones et al. (2015), which reported 0.82 in precision score. For future work, one idea to improve RE is to utilize CPE vectors for relation labeling in addition to our proposed rules. Then we can train machine learning models on top of our labeled data using pre-trained variations of BERT models. **Entity Prediction:** During the relation extraction, we extracted approximately two million triples. As we further reversed all triples, four million triples were used as input to the model. In Table. 3, we compare our best-performing model with the results presented in Rastogi et al. (2023), which uses the same model architecture, TuckER, on threat reports. The input data are assumed to be true, and evaluation performance is not manually validated. We choose TuckER as our embedding algorithm for entity prediction as it is the current state-of-the-art model measured on standard data sets (Balazevic et al., 2019). The idea is that TuckER captures latent structures of our KG. TuckER encodes the input triples as vector embeddings based on encoded characteristics and can use these embeddings to predict missing entities. For example, if two CVEs share important characteristics such as vulnerability-relevant terms and affected products, then according to the theory, they should belong to the same neighborhood in a vector space. 
Consequently, TuckER could predict that the CVEs belong to the same CWE. \(Hits@n\) and _mean reciprocal rank_ (MRR) are standard metrics used for entity prediction. Given a head entity and a relation, the task is to predict the tail entity. For each example, the embedding algorithm assigns a confidence score to all possible triples. These triples are then ranked by confidence scores, where the triple with the highest confidence is the most plausible to be true according to the model. The \(Hits@n\) metric measures the number of times the true triple is ranked among the top \(n\) triples. As a benchmark to measure our performance, we use the results presented in Rastogi et al. (2023), which also uses TuckER for entity prediction. Rastogi et al. (2023) has reported a Hits@10 metric of 0.804, which is better than our reported results seen in Table 3. We believe that more precise and consistent input labels can be the reason for this, where a limitation of our approach is that we aim at predicting CVE-IDs which are unique for each vulnerability description. We consider the task of predicting CVE-IDs as less important for our model as these will always be attached to the CVE description from our raw data. Balazevic et al. (2019) addresses that future work might incorporate background knowledge on relationship types. Avoiding predicting CVE-IDs is one example of such background knowledge. Another reason for the difference could be that some CWEs overlap and share many of the same entities making it more difficult for our model to discriminate between CWEs. ## 5 Conclusion This paper proposes a vulnerability knowledge graph constructed from textual CVE records from the National Vulnerability Database (NVD). The graph construction relies on a pipeline including NER, relation extraction, and an entity prediction model based on the TuckER framework. As future improvements, we are interested in better labeling of relations through distant supervision approaches and the integration of BERT models for relation extraction. ## Acknowledgements The research presented in this paper was supported by the Research Council of Norway through projects secureIT (grant #288787) and Cleanup (grant #308904). The empirical evaluation benefited from the Experimental Infrastructure for Exploration of Exascale Computing (eX3), supported by the Research Council of Norway through grant #270053. \begin{table} \begin{tabular}{l r r r r} \hline \hline Model & Hits@10 & Hits@3 & Hits@1 & MRR \\ \hline Host et al. & 0.760 & 0.728 & 0.682 & 0.710 \\ Rastogi & 0.804 & 0.759 & 0.739 & 0.75 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance metrics for our entity prediction model compared to Rastogi et al. (2023).
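Table 3 reports Hits@n and MRR; as a concrete illustration of how these ranking metrics are obtained from the confidence scores an embedding model assigns to candidate tail entities, a minimal sketch follows. The relation name and the candidate scores are invented for illustration and do not come from the trained TuckER model.

```python
# Sketch of Hits@n and MRR for tail-entity prediction. The relation name and the
# candidate scores below are invented for illustration only.
def rank_of_truth(scores, true_tail):
    """1-based rank of the true tail entity when candidates are sorted by score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(true_tail) + 1

def hits_at_n(ranks, n):
    return sum(r <= n for r in ranks) / len(ranks)

def mrr(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

# One test triple (cve_entity, hasWeakness, ?) whose true tail is CWE-79.
scores = {"CWE-79": 0.81, "CWE-89": 0.72, "CWE-862": 0.10}
ranks = [rank_of_truth(scores, "CWE-79")]   # -> [1]
print(hits_at_n(ranks, 10), hits_at_n(ranks, 1), mrr(ranks))
```

Averaging these quantities over all test triples yields values comparable to those reported in Table 3.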
2305.20052
Integrated Decision Gradients: Compute Your Attributions Where the Model Makes Its Decision
Attribution algorithms are frequently employed to explain the decisions of neural network models. Integrated Gradients (IG) is an influential attribution method due to its strong axiomatic foundation. The algorithm is based on integrating the gradients along a path from a reference image to the input image. Unfortunately, it can be observed that gradients computed from regions where the output logit changes minimally along the path provide poor explanations for the model decision, which is called the saturation effect problem. In this paper, we propose an attribution algorithm called integrated decision gradients (IDG). The algorithm focuses on integrating gradients from the region of the path where the model makes its decision, i.e., the portion of the path where the output logit rapidly transitions from zero to its final value. This is practically realized by scaling each gradient by the derivative of the output logit with respect to the path. The algorithm thereby provides a principled solution to the saturation problem. Additionally, we minimize the errors within the Riemann sum approximation of the path integral by utilizing non-uniform subdivisions determined by adaptive sampling. In the evaluation on ImageNet, it is demonstrated that IDG outperforms IG, Left-IG, Guided IG, and adversarial gradient integration both qualitatively and quantitatively using standard insertion and deletion metrics across three common models.
Chase Walker, Sumit Jha, Kenny Chen, Rickard Ewetz
2023-05-31T17:25:12Z
http://arxiv.org/abs/2305.20052v2
# Integrated Decision Gradients: Compute Your Attributions Where the Model Makes Its Decision ###### Abstract Attribution algorithms are frequently employed to explain the decisions of neural network models. Integrated Gradients (IG) is an influential attribution method due to its strong axiomatic foundation. The algorithm is based on integrating the gradients along a path from a reference image to the input image. Unfortunately, it can be observed that gradients computed from regions where the output logit changes minimally along the path provide poor explanations for the model decision, which is called the _saturation effect_ problem. In this paper, we propose an attribution algorithm called integrated decision gradients (IDG). The algorithm focuses on integrating gradients from the region of the path where the model makes its decision, i.e., the portion of the path where the output logit rapidly transitions from zero to its final value. This is practically realized by scaling each gradient by the derivative of the output logit with respect to the path. The algorithm thereby provides a principled solution to the saturation problem. Additionally, we minimize the errors within the Riemann sum approximation of the path integral by utilizing non-uniform subdivisions determined by adaptive sampling. In the evaluation on ImageNet, it is demonstrated that IDG outperforms IG, left-IG, guided IG, and adversarial gradient integration both qualitatively and quantitatively using standard insertion and deletion metrics across three common models. ## 1 Introduction The access to internet-scale data and compute power has fueled the success of black box neural network models for applications such as disease detection [1], image synthesis [2], and protein folding [3]. The phenomenal performance of these networks comes from the large number of parameters and non-linear interactions among them. The complex and high dimensional dynamics makes it difficult to understand and visualize why a neural network makes a particular decision. To establish trustworthiness in neural network models, noteworthy research efforts have been devoted to interpretability and explainability [4]. Attribution methods provide model explanation by computing the contribution of each input feature to a model decision. Attribution methods broadly fall into perturbation based methods [5; 6], backpropagation based methods [7; 8], and gradient based methods [9; 10]. Gradient based methods are promising due to their strong axiomatic foundation, and model-agnostic implementation [10]. Gradient based methods compute attribution maps by capturing the gradients at the model inputs with respect to the model outputs [9]. However, gradients computed with respect to important input pixels may be zero due to the non-linear activation functions. Integrated Gradients (IG) solved this problem by integrating the gradients along a path from a baseline reference image to the input image. Unfortunately, it can be observed that gradients from regions of the path where the output logit changes minimally (e.g. is saturated) provide poor explanations for the model decision [11]. This phenomena is called the _saturation effect_ problem. Solution templates to solve the saturation problem include: selecting non-straight-line paths [12; 13], path truncation [11], and post processing methods that use thresholding [14] as well as averaging across blurred inputs [15]. 
While these methods improve attribution quality, they do not provide a principled solution to the saturation problem. In this paper, we propose a new attribution method called Integrated Decision Gradients (IDG). We call the portion of the path where the output logit rapidly transitions from zero to its final value the _decision region_. The IDG algorithm focuses on integrating gradients from the decision region of the path integral. This is practically realized by scaling each gradient by the derivative of the output logit with respect to the path. The scaling factor rewards gradients in the decision region and penalizes gradients from outside the decision region. The main contributions of this paper are summarized as follows: * We propose a new attribution method called IDG that satisfies a path integral sensitivity axiom and provides a principled solution to the saturation problem. * We present an adaptive sampling technique to select non-uniform subdivisions for the Riemann approximation of the path integral. The non-uniform subdivisions reduce computational errors (or runtime overheads) compared with using uniform subdivisions. * Compared with IG [10], Left-IG (LIG) [11], Guided IG (GIG) [12], and Adversarial Gradient Integration (AGI) [13], IDG improves both the qualitative and quantitative results. The remainder of the paper is organized as follows: related work is examined in Section 2, the IDG attribution method in Section 3, the adaptive sampling algorithm is proposed in Section 4, experimental evaluation is presented in Section 5, and the paper is concluded in Section 6. ## 2 Related Work In this section, we first review the limitations of directly using gradients as attributions. Next, we review integrated gradients and assess the saturation effect problem within path integrals. ### Limitations of Using Gradients as Attributions Attributions are defined to be the contribution of each input feature to the model output decision. An attribution method satisfies the axiom of _sensitivity_ if a single feature that differs between a baseline and input - which produce different output predictions - is given a non-zero attribution. Additionally, if a neural network is not affected by changing a variable, then that variable's attribution shall be zero [10]. Computing the gradient of the inputs with respect to the output logit is a promising Figure 1: (left) An overview of the adaptive sampling algorithm, and the IDG attribution method. (right) A preliminary visual comparison of IDG with IG [10], LIG [11], GIG [12], and AGI [13]. method for computing attributions [9]. However, the use of non-linear activation functions causes the sensitivity axiom to be violated [10], which is shown in Example 1 below. **Example 1**.: _Consider a function \(F=1-ReLU(1-x)\), a baseline \(x^{\prime}=0\), and an input \(x=2\). For \(x^{\prime}=0\), the function \(F\) is equal to \(0\), and for \(x=2\), the function \(F\) is equal to \(1\). Since changing \(x\) from \(0\) to \(2\) affects the output of \(F\), the attribution w.r.t. the feature \(x\) should be non-zero. However, \(\partial F/\partial x=0\) at \(x=2\), which results in an attribution of 0 [10]._ Integrated gradients offers a solution to computing attributions that satisfies the sensitivity axiom. ### Integrated Gradients Integrated Gradients computes attributions by integrating gradients on a straight line between a reference image and an input image [10]. Let \(F\) be the function realizing the output logit of interest. 
\(IG_{i}\) with input image \(x\) is mathematically defined using a path-integral [10], as follows: \[IG_{i}(x)=(x_{i}-x^{\prime}_{i})\times\int_{\alpha=0}^{1}\frac{\partial F(x^{ \prime}_{i}+\alpha\times(x_{i}-x^{\prime}_{i}))}{\partial x_{i}}d\alpha \tag{1}\] where \(x^{\prime}\) is a black baseline image, \(\alpha\in[0,1]\) parameterizes the straight-line path between \(x^{\prime}\) and \(x\), \(x_{i}\) and \(x^{\prime}_{i}\) represent a single pixel of their respective images, and \(IG_{i}\) is therefore the attribution of pixel \(i\) in the input image. The IG attribution method is illustrated in Figure 2. The top row shows interpolated inputs, the second row shows the corresponding input gradients, the third row visualizes the output logit with respect to the path. The IG attribution map is equal to the sum of the gradients in the second row. The use of a path-integral ensures that gradients from regions of \(F\) where \(\partial F/\partial x_{i}\) is non-zero are computed. In Example 1, IG will compute gradients from the region \([0,1]\), where \(\partial F/\partial x=1\). The resulting attribution w.r.t. \(x\) is \(2\), i.e., the attribution is non-zero and sensitivity is satisfied. Nevertheless, many attributions computed using IG are still noisy due to saturation effects [11]. ### Saturation Effects within Path-Integrals To introduce and understand the _saturation effect_ problem within path-integrals, we examine the performance of the IG attribution method in in Figure 2. We study the quality of the computed gradients with respect to the decision and saturated regions of the path integral. It can be observed Figure 2: The figure illustrates the IG attribution method and saturation effects within path integrals. The top row shows interpolated inputs and the second row shows the corresponding gradients. The IG attribution map (shown to the right) is the average of the gradients. The third row shows the logit-\(\alpha\) curve, which defines the decision and saturation regions. It can be observed that the gradients from the decision region are of higher quality than the saturation region. that (i) gradients from the saturation regions are of low quality and (ii) gradients from the decision region are of high quality. The conclusion is rather straight forward to understand. If the model output does not increase while moving \(\triangle\alpha\) along the path, it is intuitive that the corresponding gradients are not important to the model decision. Conversely, if the output logit changes rapidly while moving \(\triangle\alpha\) along the path, those gradients have a strong impact on the model decision. This raises the rudimentary question: Is it possible to design a path integral that focuses on computing gradients from the region where the model decision is made and the highly informative gradients are located? It can for example be observed in Figure 2 that the gradients computed at \(\alpha=0.02\) alone provide an excellent explanation for the model decision. ## 3 Integrated Decision Gradients In this section, we propose a new attribution method called Integrated Decision Gradients (IDG). We outline the motivation behind the design of IDG, explain the concept of importance factors, and provide the definition as well as a visualization of IDG. ### Motivation Path integrals integrate gradients from a reference image to an input target image. A fundamental challenge is to determine the ideal importance of each gradient. 
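To make the role of the saturated region concrete, the sketch below approximates the IG path integral of Eq. (1) with a uniform Riemann sum and records the logit-\(\alpha\) curve of Figure 2 along the way. The use of PyTorch autograd and the uniform \(m\)-step schedule are assumptions for illustration, not a reproduction of the authors' implementation.

```python
# Sketch: uniform Riemann approximation of IG (Eq. 1) that also records the
# logit-alpha curve of Figure 2. PyTorch autograd and the uniform schedule are
# illustrative assumptions, not the authors' implementation.
import torch

def ig_with_logit_curve(model, x, target, baseline=None, m=50):
    baseline = torch.zeros_like(x) if baseline is None else baseline
    grads, logits = [], []
    for k in range(1, m + 1):
        alpha = k / m
        x_alpha = (baseline + alpha * (x - baseline)).requires_grad_(True)
        logit = model(x_alpha)[0, target]      # F(x' + alpha * (x - x'))
        logit.backward()
        grads.append(x_alpha.grad.detach())    # dF/dx at this alpha
        logits.append(logit.item())            # one sample of the logit-alpha curve
    avg_grad = torch.stack(grads).mean(dim=0)  # Riemann average over the path
    return (x - baseline) * avg_grad, logits   # Eq. (1) and the recorded curve
```

In the returned `logits`, the flat tail corresponds to the saturation region, yet the gradients collected there still enter the average with full weight; re-weighting each step's gradient by the local slope of this curve is, in essence, the importance weighting formalized next.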
Based on the analysis in the previous section, we define a new _sensitivity axiom_ for path integrals. Next, we introduce the concept of an importance factor, which is used to construct an attribution algorithm that satisfies the axiom. Axiom: Sensitivity (path integrals)Let \(F\) be the output of a neural network. For every point within a path integral parameterized by a parameter \(\alpha\), an attribution method satisfies Sensitivity (path integrals) if there is no contribution to the attribution result when \(\partial F/\partial\alpha\) is equal to zero. If \(\partial F/\partial\alpha\) is non-zero, there should be a non-zero contribution to the attribution result. None of the existing attribution methods based on path integrals satisfy the axiom [10; 11; 12; 13]. The traditional IG method places an equal weight on all gradients [10], even those that occur in the saturation region where \(\partial F/\partial\alpha=0\). The Left-IG attribution attempts to solve this by truncating the path integral after the output logit has reached \(90\%\) of its final value [11]. This assigns a weight of zero and one to gradients from the approximate saturation and decision regions respectively, which does not guarantee that the axiom is satisfied. GIG and AGI use non-straight line paths that attempt to avoid integrating gradients from saturated regions [12; 13], which does also not guarantee that the Sensitivity (path integrals) axiom is satisfied. To satisfy the axiom, we conjecture that the importance of each gradient should be proportional to the impact on the model output, which is conceptually shown in Figure 3. Inspired by this, we define an _importance factor_, as follows: \[IF(\alpha)=\frac{\partial F(x^{\prime}+\alpha(x-x^{\prime}))}{\partial\alpha} \tag{2}\] where \(IF(\alpha)\) is the importance of the gradient computed at \(\alpha\). Next, we define an attribution method that satisfies the Sensitivity (path integrals) axiom based on scaling each gradient with the importance factor in Eq (2). ### Definition of Integrated Decision Gradients In this subsection, we formally define the IDG attribution algorithm. Given a neural network represented by function \(F:R^{n}\rightarrow[0,1]\), an input vector \(x\), and given \(F\) exists over the range Figure 3: This figure illustrates the relationship between importance factor magnitude and gradient quality. Higher importance factors are directly related to higher quality gradients. \(\alpha\in[0,1]\), IDG assigns an importance factor to each input feature \(x_{i}\) with respect to the model output, using the following equation: \[IDG_{i}(x)=\underbrace{(x_{i}-x_{i}^{\prime})\times\int_{\alpha=0}^{1}\frac{ \partial F(x_{i}^{\prime}+\alpha(x_{i}-x_{i}^{\prime}))}{\partial x_{i}}}_{ \text{Traditional IG}}\times\underbrace{\frac{\partial F(x_{i}^{\prime}+\alpha( x_{i}-x_{i}^{\prime}))}{\partial\alpha}}_{\text{Importance Factor}}d\alpha \tag{3}\] The IDG attribution method is equivalent to IG in Eq (1) but with each gradient scaled with the importance factor in Eq (2). The importance factor is equivalent to the derivative of the logit-\(\alpha\) curve in the bottom of Figure 2. The importance factors scale-up gradients from the decision region and scale-down gradients from saturated regions, respectively. Therefore, IDG provides a principled solution to the saturation problem, and satisfies the Sensitivity (path integrals) axiom by definition. 
The path integral is practically computed using the Riemann sum approximation [10], as follows: \[IDG_{i}(x)=(x_{i}-x_{i}^{\prime})\times\frac{1}{m}\times\sum_{k=1}^{m}\frac{ \partial F(x_{i}^{\prime}+\frac{k}{m}\times(x_{i}-x_{i}^{\prime}))}{\partial x _{i}}\times\frac{\partial F(x_{i}^{\prime}+\frac{k}{m}\times(x_{i}-x_{i}^{ \prime}))}{\partial\alpha} \tag{4}\] where \(m\) is the number of steps for approximation. We will further discuss the selection of the step size and its impact on the approximation error in Section 4. We illustrate IDG with an example in Figure 4. First, looking at the left side of the figure, the top row shows the logit-\(\alpha\) curve associated with the input image. The second row shows the derivative of this curve, i.e., \(\partial F/\partial\alpha\) in Eq (2). The third row shows the interpolated inputs for selected alpha values and the fourth row shows the gradients computed by IG for these inputs. The last row visualizes the effect of IDG by scaling the gradients above by the importance factors from the second graph. The importance factors scale up the magnitude of the gradients from the decision region while scaling down the magnitude of the gradients from the saturated regions. In the figure, it can be observed that, in particular, the attributions from \(\alpha=0.005\) are scaled up. On the right of the figure, we show the original image, and the attributions generated by IG, LIG, GIG, AGI, and IDG. The attributions computed using IDG are substantially less noisy than all competitors. We note that GIG has a low amount of noise, but IDG has more focused attributions on the highlighted features. Figure 4: A full visualization of how IDG uses importance factors to eliminate saturation effects. The top row shows the logit-\(\alpha\) curve. The next row shows the derivative of the curve, i.e., the importance factors with respect to \(\alpha\). The third row shows the interpolated images, the fourth shows the associated gradients, and the bottom row shows these gradients scaled with the corresponding importance factors. The right side shows the input image, and the attributions computed using IG [10], LIG [11], GIG [12], AGI [13], and IDG. ## 4 Adaptive Sampling Algorithm In this section, we first analyze the errors within the Riemann sum approximation of the IDG path integral for uniform subdivisions. Next, we propose an adaptive sampling technique to minimize the approximation errors using non-uniform subdivisions. In the supplementary results, we show that the adaptive sampling only results in major improvements for IDG. The impact of the adaptive sampling on regular IG is minor. ### Motivation The errors within the Riemann approximation of the IDG path integral can be calculated, as follows: \[\epsilon(x,n)=\lim_{m\rightarrow\infty}IDG_{i}(x,m)-IDG_{i}(x,n) \tag{5}\] where \(\epsilon(x_{i},n)\) is the approximation error for attribution \(x_{i}\) when computing the integral with \(n\) uniform subdivisions. \(n\) and \(m\) are the number of steps used within the Riemann sum approximation in Eq (4). We analyze the approximation error and the impact on the attributions in Figure 5. The graph (b) shows the average error across all the pixels in the attribution map with respect to the number of used steps \(n\). Since a low step count results in a lack of samples in the decision region, a large number of steps are required for a good approximation. The image (a) is the input for the four columns (c), (d), (e), and (f) of attributions and graphs. 
The columns show the quality of the attributions with respect to the number of steps and type of subdivision. It is observed from the graphs that taking more samples in the decision region greatly improves IDG attribution quality. Therefore, to obtain high IDG quality without a prohibitive number of steps, we design a new adaptive sampling algorithm - seen in Figure 5 (f) - that uses non-uniform subdivisions concentrated on the decision region. ### Adaptive Sampling Methodology It is desirable to sample the high quality gradients that lie in the decision region to improve the quality of the attained attributions. In Algorithm 1, we show how the adaptive sampling algorithm is used with IDG. Our approach is based on first pre-characterizing the logit-\(\alpha\) curve with \(N\) uniform subdivisions in lines 3 - 7. Next, \(M\) subdivisions are non-uniformly distributed within the \(N\) regions based on logit growth and IDG is calculated in lines 8 - 15. Because there are \(M\) total samples, line Figure 5: This figure shows the motivation for the adaptive sampling algorithm. The image (a) is the input to the attributions in the figure. The graph (b) demonstrates how the attribution error decreases as step count increases. Columns (c), (d), and (e) of attributions and graphs show the relationship between sample locations and IDG quality as \(50\), \(250\), and \(600\) steps are used respectively. We show that as the number of steps increases, the quality of IDG grows greatly, influencing the adaptive sampling algorithm. Lastly, column (f) shows the equivalent result of column (e) achieved by using adaptive sampling with 50 steps. 11 executes \(O(N+M)\) times. In practice it is best if \(N=M\) (this is shown in the supplementary materials) therefore the algorithm runtime is \(O(N)\). As seen in Figure 5 (e) and (f), combining this adaptive sampling algorithm with IDG creates attributions as strong as IDG with 600 steps while only using 50 steps. Figure 1 provides a high-level overview of this new IDG process. The figure shows that when given an input image and a number of steps, the adaptive sampling algorithm calculates non-uniform subdivisions based on logit growth. These are then used as input for IDG where the gradient at each location is calculated and then weighted, producing the final attribution. In this figure, the IDG sampling graph shows that \(31\) out of \(50\) samples are placed in the decision region \(\alpha\in[0.0,0.2]\), where the logit changes from \(0\) to \(7.2\). ## 5 Experimental Results In this section, we will evaluate the effectiveness of the proposed method. We perform our experiments in PyTorch using the 2012 validation set of ImageNet [16] on NVIDIA A40 GPUs. According to ML CO\({}_{2}\) impact, the experimental evaluation released \(43.6\) kg of CO\({}_{2}\) with zero offset [17]. The attributions computed using Algorithm 1 are called IDG. We compare our method with IG [10], left-IG [11], guided IG [12], and adversarial gradient integration [13]. We use Captum for the implementation of IG, whereas left-IG, GIG, and AGI are taken from their respective repositories [18; 19; 20; 21]. We evaluate the quality of the computed attributions both quantitatively and qualitatively. In Table 1, we quantitatively evaluate the attributions using standard perturbation testing which measures the importance of the pixels in an attribution via an area under the curve (AUC) score. 
A total of four tests are presented with three insertion methods and one deletion method from the authors of RISE and XRAI [22; 14] which are described in Section 5.1. The table compares the computed attribution quality for the first \(5000\) images of the ImageNet dataset such that five images are taken from each of the \(1000\) classes. The five attribution methods are evaluated with three models trained on ImageNet. We selected ResNet101 (R101), ResNet152 (R152), and ResNeXt (RNXT) as pre-trained models from PyTorch and use the newest ImageNet weights available (V2 for the ResNet models and V1 for ResNeXt) [23; 24; 25]. Qualitatively, we present a subset of five examples in Figure 6 which are gathered gathered using the ResNet101 model and the method parameters explained below. We provide a larger selection of examples in the supplementary materials for visual comparison. Inputs are reshaped to (\(224\), \(244\)) for all three presented models. This image processing follows the attribution documentation provided by Captum [18]. The RISE, AIC, and SIC tests use the default parameters found from their respective repositories [26; 27]. The IG and LIG attribution methods use \(50\) steps and a black baseline image. GIG uses the default parameters found at [20]. AGI uses the default parameters found at [21]. Lastly, IDG is used with 50 steps and a black baseline image. For all the methods, we use a single baseline only. ### Quantitative Evaluation Metrics The evaluation metrics are built upon the intuition that the highest attribution values should correspond to those features that contribute more to the classification of the target class [22; 14]. The process starts from the most important pixels and starts deleting (inserting) them from the original image (to a blurred image for insertion) until only a black (the original) image remains. At each step, the softmax score (or accuracy) is calculated. This gives us an ROC curve from base image to final image, which is used to compute the AUC score for a given attribution. This AUC value is computed for each image and then averaged out over the entire test data selection. For the insertion game, a higher AUC score indicates a better attribution and for the deletion game, a lower AUC score indicates better performance. The two sets of methods presented from Petsiuk, et al and Kapishnikov, et al. take different approaches to the insertion process [22; 14]. In RISE, the insertion (deletion) test which starts (ends) with a Gaussian blurred (black) image [22]. In their implementation pixels are added (deleted) in equal amounts during the test process. Given an NxN image, the test will change the image by N pixels at a time over N steps. Kapishnikov, et al. present the Accuracy Information Curve (AIC) and Softmax Information Curve (SIC) in their XRAI paper [14]. The AIC test gives each perturbation step a score of \(0\) or \(1\) for an incorrect or correct classification and SIC uses softmax as previously discussed. For pixel perturbation, these methods use a schedule that non-linearly removes groups of pixels from the image in increasingly large amounts. The last difference from the RISE insertion test is the blurring method, where the initial image is now blurred in segments, each having its own noise distribution. ### Comparison With Previous Work In Table 1, attribution quality is evaluated using the AIC and SIC insertion metrics and the RISE insertion and deletion metrics. 
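Before walking through Table 1, the deletion-style scoring of Section 5.1 can be sketched as follows: pixels are removed in order of decreasing attribution, the softmax score of the target class is recorded at every step, and the area under the resulting curve is the reported number. The removal batch size, the black fill value, and the trapezoidal integration are simplifying assumptions rather than the exact RISE implementation.

```python
# Sketch of a deletion-style AUC (Section 5.1). The removal schedule, black fill,
# and trapezoidal integration are simplifying assumptions, not the RISE code.
import torch

@torch.no_grad()
def deletion_auc(model, x, attribution, target, steps=224):
    _, c, h, w = x.shape
    order = attribution.sum(dim=1).flatten().argsort(descending=True)  # most important first
    per_step = (h * w) // steps
    img = x.clone()
    flat = img.view(1, c, -1)                  # shares storage with img
    scores = [torch.softmax(model(img), dim=1)[0, target].item()]
    for k in range(steps):
        idx = order[k * per_step:(k + 1) * per_step]
        flat[..., idx] = 0.0                   # delete the next block of pixels
        scores.append(torch.softmax(model(img), dim=1)[0, target].item())
    return torch.trapz(torch.tensor(scores), dx=1.0 / steps).item()
```

For the insertion variant the image starts blurred and pixels are revealed instead, so a higher AUC is better, whereas a lower AUC is better for deletion.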
We use an arrow to denote if larger (arrow up) or smaller (arrow down) scores are better. The best score for each model and test type is in bold. Additionally we provide how many times a given method outperforms all other methods in the last row of the table. It can be observed in Table 1 that IDG achieves a consistent improvement over IG, LIG, GIG, and AGI across all twelve of the tests presented. Comparing IDG to IG and LIG clearly indicates the ability of IDG to mitigate saturation effects in path-based methods while retaining the most important gradient information. When compared to AGI and GIG, the large margin of improvement in the scores shows \begin{table} \begin{tabular}{l l c c c c c} \hline \hline Metric & Model & IG [10] & LIG [11] & GIG [12] & AGI [13] & IDG \\ \hline \multirow{3}{*}{AIC (\(\uparrow\))} & R101 & 0.571 & 0.589 & 0.626 & 0.675 & **0.701** \\ & R152 & 0.575 & 0.616 & 0.646 & 0.686 & **0.718** \\ & RNXT & 0.580 & 0.611 & 0.634 & 0.654 & **0.730** \\ \hline \multirow{3}{*}{SIC (\(\uparrow\))} & R101 & 0.498 & 0.522 & 0.559 & 0.609 & **0.638** \\ & R152 & 0.508 & 0.552 & 0.582 & 0.619 & **0.659** \\ & RNXT & 0.478 & 0.518 & 0.532 & 0.554 & **0.620** \\ \hline \multirow{3}{*}{Insertion (\(\uparrow\))} & R101 & 0.498 & 0.535 & 0.547 & 0.561 & **0.592** \\ & R152 & 0.517 & 0.562 & 0.565 & 0.577 & **0.615** \\ & RNXT & 0.276 & 0.299 & 0.296 & 0.307 & **0.324** \\ \hline \multirow{3}{*}{Deletion (\(\downarrow\))} & R101 & 0.181 & 0.148 & 0.155 & 0.172 & **0.108** \\ & R152 & 0.202 & 0.148 & 0.164 & 0.190 & **0.118** \\ \cline{1-1} & RNXT & 0.101 & 0.078 & 0.082 & 0.104 & **0.068** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of attributions using the AIC, SIC, insertion, and deletion tests that IDG presents a more complete solution to the saturation problem than these methods. Overall, IDG outperforms all of the attribution methods in the comparison, achieving new state-of-the-art performance. For qualitative analysis, we compare IG, LIG, GIG, AGI, and IDG in Figure 6. All attributions are computed as previously described. The comparison is performed using images of a "Guenon", "Submarine", "Tripod", "African Hunting Dog", and "Warplane" taken from ImageNet [16]. Across the five selections, IDG clearly produces much sharper attributions than IG and LIG, further verifying that it solves the saturation problem present in these methods. When compared to GIG, IDG also has superior performance in all of the images. For the Tripod example, even though GIG has relatively low noise, IDG has stronger attributions on the tripod in the foreground, and the one in the background as well. Lastly, when comparing to AGI, it can be seen AGI generally has low extraneous noise in the attributions. However, IDG provides tighter, and sharper attributions on the class subject in the images, therefore the results are better. The images clearly show that IDG improves visual quality over the other methods. IDG generates attributions with less random noise, and sharper attributions, showing its ability to solve the saturation problem. Additionally it shows its ability to outperform the methods which use non-straight-line paths. We provide an additional 50 visual comparisons in the supplementary results section. ## 6 Discussion In this paper, we propose a new attribution method called Integrated Decision Gradients (IDG). The key idea of IDG is to perform the path integral while weighting sampled gradients by their associated logit growth. 
This amplifies gradients located in the decision region, and negates those from the saturation region, solving the saturation issue. In contrast, traditional IG integrates gradients between the same images while giving all gradients equal weight, saturated or not, causing the majority of saturated gradients to dominate the output. Additionally, we provide evidence that Figure 6: Qualitative comparison of attributions computed using the IG [10], LIG [11], GIG [12], and AGI [13], and IDG methods. It is seen that IDG solves the saturation problem and outperforms the state-of-the-art path-based attribution methods in visual quality. the decision region of the path integral is where the best gradients lie. With this, we present an adaptive sampling algorithm which densely samples the decision region without runtime penalty, improving IDG performance. We show qualitatively and quantitatively that IDG reaches state-of-the-art performance in the path-based attribution field. In our future work, we plan to apply IDG concepts to other attribution methods to further enhance attribution quality. We also plan to employ IDG within practical real-world applications. The code to replicate the results presented in this paper is available at: [https://github.com/chasewalker26/Integrated-Decision-Gradients](https://github.com/chasewalker26/Integrated-Decision-Gradients). LimitationsWe present quantitative and qualitative results that show the proposed method outperforms those in its field. However, there does not yet exist criteria to perfectly examine what makes a good attribution. Therefore, we provide the best, currently accepted evaluation of our method. ## Acknowledgements This work was partly supported by the Lockheed Martin University Engagement Program, the Florida High Tech Corridor Matching Grants Program, and the DARPA cooperative agreement #HR00112020002. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. ## References * [1]M. Mirdita, K. Schutze, Y. Moriwaki, L. Heo, S. Ovchinnikov, and M. Steinegger (2022) Colabfold: making protein folding accessible to all. Nature Methods, pp. 1-4. External Links: ISSN 1063-6905, Document, Link Cited by: SS1. * [2]A. Das and P. Rad (2020) Opportunities and challenges in explainable artificial intelligence (xai): a survey. arXiv preprint arXiv:2006.11371. Cited by: SS1. * [3]M. D. Zeiler and R. Fergus (2014) Visualizing and understanding convolutional networks. In European conference on computer vision, pp. 818-833. Cited by: SS1. * [4]M. T. Ribeiro, S. Singh, and C. Guestrin (2016) Why should i trust you?. explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1135-1144. Cited by: SS1. * [5]J. Tobias Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller (2014) Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806. Cited by: SS1. * [6]R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pp. 618-626. Cited by: SS1. * [7]K. Simonyan, A. Vedaldi, and A. Zisserman (2013) Deep inside convolutional networks: visualising image classification models and saliency maps. Cited by: SS1. * Volume 70, ICML'17, pp. 
3319-3328. Cited by: SS1. * [9]V. Miglani, N. Kokhlikyan, B. Alsallakh, M. Martin, and O. Reblitz-Richardson (2020) Investigating saturation effects in integrated gradients. External Links: 2006.11371 Cited by: SS1. [MISSING_PAGE_POST] * Pan et al. [2021] Deng Pan, Xin Li, and Dongxiao Zhu. Explaining deep neural network models with adversarial gradient integration. In Zhi-Hua Zhou, editor, _Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21_, pages 2876-2883. International Joint Conferences on Artificial Intelligence Organization, 8 2021. Main Track. * Kapishnikov et al. [2019] A. Kapishnikov, T. Bolukbasi, F. Viegas, and M. Terry. Xrai: Better attributions through regions. In _2019 IEEE/CVF International Conference on Computer Vision (ICCV)_, pages 4947-4956, Los Alamitos, CA, USA, nov 2019. IEEE Computer Society. * Smilkov et al. [2017] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viegas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise, 2017. * Russakovsky et al. [2015] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. _International Journal of Computer Vision (IJCV)_, 115(3):211-252, 2015. * Lacoste et al. [2019] Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quantifying the carbon emissions of machine learning. _arXiv preprint arXiv:1910.09700_, 2019. * Kokhlikyan et al. [2020] Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, et al. Captum: A unified and generic model interpretability library for pytorch. _arXiv preprint arXiv:2009.07896_, 2020. * Miglani et al. [2020] Vivek Miglani, Narine Kokhlikyan, Bilal Alsallakh, Miguel Martin, and Orion LLC Reblitz-Richardson. _Left-IG Code Repository_, 2020. Available at [https://github.com/vivekmig/captum-1/tree/ExpandedIG](https://github.com/vivekmig/captum-1/tree/ExpandedIG). * Kapishnikov et al. [2021] A. Kapishnikov, S. Venugopalan, B. Avci, B. Wedin, M. Terry, and T. Bolukbasi. Gig code repository, 2021. Available at [https://github.com/PAIR-code/saliency/tree/master/saliency/core](https://github.com/PAIR-code/saliency/tree/master/saliency/core). * Pan et al. [2021] Deng Pan, Xin Li, and Dongxiao Zhu. Agi code repository, 2021. Available at [https://github.com/pd90506/AGI](https://github.com/pd90506/AGI). * Petsiuk et al. [2018] Vitali Petsiuk, Abir Das, and Kate Saenko. Rise: Randomized input sampling for explanation of black-box models. In _Proceedings of the British Machine Vision Conference (BMVC)_, 2018. * He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016. * Xie et al. [2017] Saining Xie, Ross Girshick, Piotr Dollar, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1492-1500, 2017. * Paszke et al. 
[2019] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In _Proceedings of the 33rd International Conference on Neural Information Processing Systems_. Curran Associates Inc., 2019. * Petsiuk et al. [2018] Vitali Petsiuk, Abir Das, and Kate Saenko. Rise code repository, 2018. Available at [https://github.com/eclique/RISE](https://github.com/eclique/RISE). * Kapishnikov et al. [2019] A. Kapishnikov, S. Venugopalan, B. Avci, B. Wedin, M. Terry, and T. Bolukbasi. Aic code repository, 2019. Available at [https://github.com/PAIR-code/saliency/tree/master/saliency/metrics](https://github.com/PAIR-code/saliency/tree/master/saliency/metrics). * European Commission [2019] Content European Commission, Directorate-General for Communications Networks and Technology. Ethics guidelines for trustworthy ai, 2019. * Ryan [2020] Mark Ryan. In ai we trust: ethics, artificial intelligence, and reliability. _Science and Engineering Ethics_, 26(5):2749-2767, 2020. Appendix In this appendix we provide additional information that did not fit in the bounds of the paper. In Section A.1 we provide detailed analysis of the selection of \(N\) and \(M\) for the adaptive sampling algorithm. In Section A.2 we provide further explanation of the impact of the AS algorithm, showing that IDG provides the true solution to the saturation problem. In Sections A.3 and A.4 we provide information on the licensing of libraries used in experimental evaluation and we discuss the potential ethical impact of the proposed method. Lastly, in Section A.5 we present 50 additional qualitative visual comparisons of our proposed method against those presented in the manuscript. ### Ablation Study for Adaptive Sampling The adaptive sampling algorithm has two parameters \(N\) and \(M\). \(N\) is the number of samples used in the pre-characterization of the logit-\(\alpha\) curve. \(M\) is the number of samples used in the computation of IDG using non-uniform subdivisions. Three types of selections of \(N\) and \(M\) are possible: \(N<M\), \(N==M\), and \(N>M\). Assuming \(M\) or \(N\) is set to \(50\), which is a common step count for path-based methods, we provide analysis of which selection provides the best result via an ablation study. In Figure 7, we present an ablation study on the selection of \(N\) and \(M\). In (a), \(M\) is set to \(50\) and we take the average of the deletion score [22] over \(10\) images as \(N\) is varied from \(5\) to \(100\) by increments of \(5\). In (b), \(N\) is set to \(50\) and the deletion scores are gathered as before where \(M\) is varied instead. We see from graph (a) that low values of \(N\) produce poor results and the transition from \(5\) to \(20\) results in a large drop in deletion score. We see a similar case in (b) where the score improves as \(M\) increases. We note that stable performance is seen on both graphs where \(N==M\). We conclude that selecting \(N==M\) results in proper estimation of the importance factors given the \(M\) IDG steps available for placement. While selecting \(N<M\) may provide equally strong results, it may provide poor results without meaningful runtime improvement, therefore \(N==M\) is chosen. 
Additionally, we note selecting an \(N>M\) does not improve the score enough for the associated runtime penalty. ### IDG Is the Solution to the Saturation Problem In the manuscript we present adaptive sampling as a method to improve the ability of the proposed IDG method to solve the saturation problem. Adaptive sampling takes advantage of the importance factors to perform non-uniform sampling that focuses on the region of growth. This provides a large ratio of high quality gradients to saturated gradients from which IDG can generate an attribution. However, AS alone is not a solution to the saturation problem, which we will demonstrate by evaluating IG with adaptive sampling. In Figure 8, for the given input image we compare the attributions generated by IG with uniform sampling (US), IG with adaptive sampling, IDG with uniform sampling, and IDG with adaptive sampling. When comparing IG with US and IG with AS, we see a small reduction in noise in the AS attribution, as there are inherently less saturated gradients captured when AS is applied to IG. However, due to IG equally weighting all gradients, the saturated gradients still dominate the output, Figure 7: The change in the deletion score of IDG with AS averaged over \(10\) images by varying (a) \(N\) and (b) \(M\). In the graphs, \(N\) (\(M\)) is varied from \(5\) to \(100\) while \(M\) (\(N\)) is set to \(50\). It is seen that the most stable scores are located where \(N==M\). illustrating that AS alone cannot solve the saturation problem. However, when viewing IDG with US compared to both IG attributions, we see a vast improvement to attribution quality, illustrating IDG's ability to solve the saturation problem. Furthermore, when AS is is applied to IDG, its ability to solve the saturation problem increases. As adaptive sampling does not meaningfully improve IG performance and IDG with US provides much stronger attributions than IG, we verify that integrated decision gradients is the solution to the saturation problem. We reiterate that adaptive sampling is used to provide IDG access to gradients of a higher quality than US does, therefore improving its performance, but not acting as the solution to the saturation problem. ### Licenses of Use The 2012 validation set of ImageNet [16] is used under the BSD 3-Clause License. The insertion, deletion, AIC, and SIC are used under the MIT and Apache 2.0 licenses respectively [22; 14]. The IG [10] and LIG [11] attribution methods as well as the PyTorch repository [25] are used with the BSD 3-Clause license, GIG [12] is used under Apache License 2.0, and AGI [13] is used under the MIT license. ### Broader Impact While explainable AI endeavours to increase human trust in AI systems, there is debate about what trust of AI _is_, and if unjustified trust can be harmful. A prominent proponent of complete AI trust is the European Commission's High-level Expert Group on AI (HLEG) [28]. This promotion of this trust is seen as harmful by some who believe creating a reliance on AI will have a negative impact on humanity and that AI can never truly be trusted due to its nature [29]. Explainable AI, as presented in this paper, is intended to allow better validation and understanding of models which are in use or proposed for use. ### Additional Visual Comparisons To validate the quantitative performance presented in the paper, we visually compare IG [10], LIG [11], GIG [12], AGI [13], and IDG with a larger number of examples. We present 50 example images on pages 14 - 18. 
There are six columns per example. From left to right the columns are: the input image, IG, LIG, GIG, AGI, and IDG. These labels are provided above the columns on each page and the class of the input image is provided to its left. The images are from the ImageNet validation set. We use ResNet101 on pages 14 - 16, ResNet152 on page 17, and ResNeXt on page 18 [23; 24]. The attributions are generated with the same parameters as the quantitative testing. The presented attributions are analyzed visually. A stronger attribution is defined by reduction of noise in areas irrelevant to the object of the image, and stronger attribution (darker color) in areas where the object exists. After visual analysis, we believe IDG presents a sharper attribution than all of the methods presented for a majority of the provided examples. This thorough qualitative analysis provides further proof of the strength of the proposed IDG method. Figure 8: Given the input, attributions created by IG using uniform sampling (US) and AS are compared to attributions created by IDG with US and AS. Since AS applied to IG does not meaningfully improve performance over IG with US, and IDG with US provides a higher quality attribution, we determine that IDG, not AS, is the solution to the saturation problem. This is further exemplified by the improvement seen in IDG with AS, reinforcing the idea that AS gives IDG access to better gradients, but IDG is the solution to the saturation problem. ## References ## References
2309.10244
UPL-SFDA: Uncertainty-aware Pseudo Label Guided Source-Free Domain Adaptation for Medical Image Segmentation
Domain Adaptation (DA) is important for deep learning-based medical image segmentation models to deal with testing images from a new target domain. As the source-domain data are usually unavailable when a trained model is deployed at a new center, Source-Free Domain Adaptation (SFDA) is appealing for data and annotation-efficient adaptation to the target domain. However, existing SFDA methods have a limited performance due to lack of sufficient supervision with source-domain images unavailable and target-domain images unlabeled. We propose a novel Uncertainty-aware Pseudo Label guided (UPL) SFDA method for medical image segmentation. Specifically, we propose Target Domain Growing (TDG) to enhance the diversity of predictions in the target domain by duplicating the pre-trained model's prediction head multiple times with perturbations. The different predictions in these duplicated heads are used to obtain pseudo labels for unlabeled target-domain images and their uncertainty to identify reliable pseudo labels. We also propose a Twice Forward pass Supervision (TFS) strategy that uses reliable pseudo labels obtained in one forward pass to supervise predictions in the next forward pass. The adaptation is further regularized by a mean prediction-based entropy minimization term that encourages confident and consistent results in different prediction heads. UPL-SFDA was validated with a multi-site heart MRI segmentation dataset, a cross-modality fetal brain segmentation dataset, and a 3D fetal tissue segmentation dataset. It improved the average Dice by 5.54, 5.01 and 6.89 percentage points for the three tasks compared with the baseline, respectively, and outperformed several state-of-the-art SFDA methods.
Jianghao Wu, Guotai Wang, Ran Gu, Tao Lu, Yinan Chen, Wentao Zhu, Tom Vercauteren, Sébastien Ourselin, Shaoting Zhang
2023-09-19T01:52:37Z
http://arxiv.org/abs/2309.10244v1
UPL-SFDA: Uncertainty-aware Pseudo Label Guided Source-Free Domain Adaptation for Medical Image Segmentation ###### Abstract Domain Adaptation (DA) is important for deep learning-based medical image segmentation models to deal with testing images from a new target domain. As the source-domain data are usually unavailable when a trained model is deployed at a new center, Source-Free Domain Adaptation (SFDA) is appealing for data and annotation-efficient adaptation to the target domain. However, existing SFDA methods have a limited performance due to lack of sufficient supervision with source-domain images unavailable and target-domain images unlabeled. We propose a novel Uncertainty-aware Pseudo Label guided (UPL) SFDA method for medical image segmentation. Specifically, we propose Target Domain Growing (TDG) to enhance the diversity of predictions in the target domain by duplicating the pre-trained model's prediction head multiple times with perturbations. The different predictions in these duplicated heads are used to obtain pseudo labels for unlabeled target-domain images and their uncertainty to identify reliable pseudo labels. We also propose a Twice Forward pass Supervision (TFS) strategy that uses reliable pseudo labels obtained in one forward pass to supervise predictions in the next forward pass. The adaptation is further regularized by a mean prediction-based entropy minimization term that encourages confident and consistent results in different prediction heads. UPL-SFDA was validated with a multi-site heart MRI segmentation dataset, a cross-modality fetal brain segmentation dataset, and a 3D fetal tissue segmentation dataset. It improved the average Dice by 5.54, 5.01 and 6.89 percentage points for the three tasks compared with the baseline, respectively, and outperformed several state-of-the-art SFDA methods. Source-free domain adaptation, self-training, fetal MRI, heart MRI, entropy minimization. ## I Introduction Deep learning has achieved excellent performance in medical image segmentation tasks in recent years [1, 2]. Its current success is highly dependent on the assumption that training and testing images are from the same distribution. However, in practice, a model trained with images from one certain source domain may be used to deal with images in an unseen target domain with different image appearances, which is usually caused by different scanning devices, imaging protocols, patient groups or image qualities, etc. Failing to deal with the gap between the source and target domains will lead to a dramatic performance decrease [3]. As it is impossible to collect images from all the potential target domains during training, it is essential to make the model adapted to images in the unseen target domain after deployment. Domain Adaptation (DA), which aims to solve the domain gap between training and testing data, has been attracting increasing attention recently [4]. Though collecting a set of annotated images in the target domain to fine-tune the pre-trained model can make it adapted to the target domain, the annotations are expensive to obtain and usually unavailable in the target domain for model deployment. Therefore, many researchers have investigated Unsupervised Domain Adaptation (UDA) [4] that uses unannotated images in the target domain for adaptation. Most existing UDA methods require access to source-domain and target-domain images simultaneously for training [5, 6].
However, due to concerns on privacy, bandwidth and other issues, it is not always possible to access source-domain data and target-domain data simultaneously. Source-Free Domain Adaptation (SFDA) [7, 8, 9] aims to adapt a model pre-trained with source-domain images to fit the target data distribution without access to the source data. Due to the absence of annotations in the target domain, the main challenge for SFDA is the lack of sufficient supervision for the model in the target domain. To deal with this problem, some existing works designed auxiliary tasks such as rotation prediction [9], image normalization [10] and auto-encoder-based image reconstruction [11] to assist adaptation in the target domain. However, these works introduce an extra sub-network for the auxiliary task that needs to be trained in the source domain in advance, which makes these SFDA methods only work for a model pre-trained in a specified way in the source domain and cannot be applied to models pre-trained in other manners, e.g., standard supervised learning without auxiliary tasks. In this work, we explore a more flexible approach for SFDA, where only a pre-trained segmentation model and unannotated images are available in the target domain, without restrictions on how the model has been pre-trained in the source domain, and we call it fully SFDA. Note that fully SFDA is independent of the pre-training process, and is more general than the auxiliary task-based methods [9, 10, 11] that require special pre-training strategies and network structures. To deal with unannotated images in the target domain for fully SFDA, several researchers have investigated some regularization methods, such as entropy minimization for the predictions in the target domain [12, 13], which are inspired by entropy minimization in the UDA [14, 15, 16] and semi-supervised learning tasks [17, 18, 19]. However, only using entropy minimization as supervision cannot provide sufficient constraints, which makes the model tend to give high-confidence but incorrect predictions in the target domain. To deal with this problem, some researchers also proposed self-training, which fine-tunes the pre-trained model using its predictions on the target-domain images as pseudo labels [20, 21, 22]. However, due to the change in the target domain distribution, it is hard to obtain accurate pseudo labels, which brings challenges to achieving good performance [23]. To overcome these problems, we propose a novel Uncertainty-aware Pseudo Label guided Source-Free Domain Adaptation (UPL-SFDA) framework for medical image segmentation. Differently from many existing methods that require a special pre-training strategy in the source domain [9, 10, 11], our method is agnostic to the training stage and has a minimal requirement on the network structure, which is applicable in wider scenarios. Given a pre-trained network, we propose Target Domain Growing (TDG) that duplicates the prediction head \(K\) times in the target domain, and add random perturbations (e.g., dropout, spatial transformation) to obtain \(K\) different segmentation predictions. The ensemble of these predictions leads to more robust pseudo labels with efficient uncertainty estimation, which helps to distinguish reliable pseudo labels from unreliable ones. To avoid model degradation commonly faced by self-training, we introduce Twice Forward pass Supervision (TFS) that uses reliable pseudo labels obtained in one forward pass to supervise predictions in a following forward pass. 
In addition, unlike existing works imposing entropy minimization on each single prediction head [12, 21], we impose entropy minimization on the mean prediction across the \(K\) heads instead, which additionally introduces an implicit multi-head consistency regularization to obtain more robust results. Our contributions are summarized as follows: * We propose a Source-Free Domain Adaptation method based on uncertainty-aware pseudo labels for medical image segmentation, which adapts a model to the target domain without specific requirements on the pre-training strategy and network structure in the source domain. * We introduce Target Domain Growing (TDG) to expand a pre-trained model with perturbed multiple prediction heads in the target domain, which increases the quality of pseudo labels and obtains uncertainty estimation efficiently. * A Twice Forward pass Supervision (TFS) is introduced for self-training, which is combined with a mean prediction-based entropy minimization to robustly learn from pseudo labels in SFDA. Extensive experiments on three applications (multi-site heart MRI segmentation, cross-modality fetal brain segmentation, and fetal tissue segmentation) showed that our method can effectively adapt the model from a source domain to one or multiple target domains. It outperformed several existing SFDA methods for medical image segmentation, and was comparable and even better than supervised training in the target domain. ## 2 Related Works ### Unsupervised Domain Adaption UDA aims to transfer the knowledge learned from labeled source-domain data to an unlabeled target domain. Current UDA methods mainly adapt the model to the target domain in three aspects. The first is image appearance alignment that translates a target-domain image into a source-domain-like image [24, 25, 26, 27], so that the domain gap is alleviated. The second is feature alignment that minimizes the distance of feature distribution between the source and target domains to learn domain-invariant representations [28]. For example, for cardiac image segmentation, Wu et. [29] used Variational Auto-Encoders (VAEs) to align the features in the source and target domains, and Chen et al. [30] used Generative Adversarial networks (GANs) to align the features. The third is output alignment, i.e., using the source model to generate pseudo labels in the target domain for adaptation [6]. However, even relying on unpaired and unsupervised domain translation techniques, these UDA methods require access to source domain images, which is hardly guaranteed at a testing site due to the concerns on privacy, computational cost and bandwidth. Therefore, source-free DA is highly desirable in practice. ### Source-Free Domain Adaption Source-Free Domain Adaption (SFDA) deals with domain adaption without access to source-domain data [7, 9, 21, 31]. Yang et al. [31] proposed a Fourier-style mining-guided framework, which comprises a generation stage and an adaptation stage for adapting the source model to the target domain using paired source-like and target images. Sun et al. [9] introduced an auxiliary branch to predict the rotation angle in the target domain. Karani et al. [10] introduced a shallow image normalization network before the segmentation model, and fine-tuned the normalization network in the target domain based on predictions refined by a Denoising Auto-Encoder (DAE). 
However, these methods require the segmentation model's structure to be modified in advance to support the auxiliary task and pre-trained with a specified strategy, which is inapplicable to general segmentation models that are unaware of the adaptation process during pre-training. Recently, some methods [12, 32] avoid the coupling between training in the source and target domains, so that the adaptation process does not set a prerequisite for training methods in the source domain, which makes them more generally applicable to arbitrary pre-trained models. Wen et al. [7] proposed a selectively updated Mean Teacher for SFDA, where predictions from a teacher model based on an exponential moving average are used to supervise the student. Nado et al. [32] proposed Prediction-Time Batch Normalization (PTBN) that recalculates statistics of batch normalization layers according to the images in the target domain. TENT [12] updates the parameters in batch normalization layers to minimize the entropy of predictions in the target domain. In addition to entropy minimization [12], other loss functions, such as regional nuclear-norm loss with contour regularization [33] and consistency regularization [34], have been proposed for this setting. However, due to the lack of annotations, achieving good performance for SFDA methods is still challenging. ## 3 Method Fig. 1 shows an overview of our proposed Uncertainty-aware Pseudo Label guided Source-Free Domain Adaptation (UPL-SFDA). It is independent of the pre-training stage in the source domain, so it can deal with a model pre-trained with an arbitrary strategy. In UPL-SFDA, we introduce Target Domain Growing (TDG) to extend the source model into a multi-head prediction structure by duplicating the pre-trained prediction head \(K\) times, and then get pseudo labels based on an ensemble of the prediction heads with perturbations using dropout and spatial transformation. Pseudo labels obtained in one forward pass are used to supervise the prediction of the next forward pass, which acts as a consistency regularization between the two forward passes, and they are weighted by the reliability (confidence). For unreliable pixels, we use a mean prediction-based entropy minimization regularization that improves confidence of the predictions and inter-head consistency. ### Pre-trained Model from the Source Domain Let \(S\) and \(T\) be the source and target domains, respectively. Let \(\mathbf{X}_{S}\) = \(\{(\mathbf{x}_{i}^{s},y_{i}^{s}),i=1,...,N_{s}\}\) be the training images and their labels in the source domain, and \(\mathbf{X}_{T}\) = \(\{(\mathbf{x}_{i}^{t}),i=1,...,N_{t}\}\) represent unlabeled images in the target domain for adaptation, where \(N_{s}\) and \(N_{t}\) are the number of samples in the two domains, respectively. Note that the data distributions in \(S\) and \(T\) are different, and we assume that the label has the same distribution across the two domains, i.e., the same type of structure for segmentation. A general CNN-based segmentation model has a feature extractor \(g\) and a prediction head \(h\), and the parameters of the segmentation model are denoted as \(\{\theta_{g},\theta_{h}\}\), where \(\theta_{g}\) and \(\theta_{h}\) denote the parameters of \(g\) and \(h\), respectively. Figure 1: Overview of our proposed Uncertainty-aware Pseudo Label guided Source-Free Domain Adaptation (UPL-SFDA) framework. In the pre-training stage, the model can be trained in the source domain with an arbitrary strategy. We use Target Domain Growing (TDG) to extend the pre-trained model with multiple prediction heads with perturbations in the target domain.
Note that the pseudo label and reliability map obtained in one forward pass are used to supervise the predictions in the next forward pass in the Twice Forward pass Supervision (TFS) loss. As encoder-decoder networks are widely used for medical image segmentation [35, 36], we consider \(g\) as an encoder and \(h\) as a decoder in this work, respectively. The model is pre-trained in the source domain via: \[\theta_{g}^{0},\theta_{h}^{0}=\arg\min_{\theta_{g},\theta_{h}}\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}L_{s}\Big(h\big(g(\mathbf{x}_{i}^{s})\big),y_{i}^{s}\Big) \tag{1}\] where \(L_{s}\) denotes a certain type of supervision loss in the source domain, which might be implemented by fully supervised learning, semi-supervised learning and weakly supervised learning, etc., based on the type of the available labels in the source domain. \(\theta_{g}^{0}\) and \(\theta_{h}^{0}\) denote the optimized parameter values in the source domain, and they are used as initial parameters for the adaptation process in the target domain. ### Target Domain Growing in the Target Domain With the pre-trained feature extractor \(g\) and prediction head \(h\), the model can be applied to a target-domain image to obtain a prediction as the pseudo label. However, due to the gap between source and target domains, directly applying the pre-trained model will lead to a very low quality of pseudo labels. To improve the quality of pseudo labels for a higher adaptation performance, we propose Target Domain Growing (TDG) to extend the source model, i.e., we duplicate the prediction head (i.e., decoder) \(h\) \(K\) times in the target domain, and the copies are initialized with the parameter values \(\theta_{h}^{0}\) of the pre-trained prediction head. These prediction heads are connected to the same pre-trained feature extractor \(g\) in parallel, as shown in Fig. 1. Let \(h^{k}\) denote the \(k\)-th prediction head in the target domain. As they have the same initial parameter values with the same architecture, their outputs will be the same for a given input. To obtain diversity, we introduce perturbations for the prediction heads so that they produce different results for a more robust ensemble. Specifically, we use random spatial transformation and dropout to improve the diversity of predictions. First, for an input image \(\mathbf{x}\in\mathcal{R}^{H\times W}\) in the target domain, where \(H\) and \(W\) are the height and width, respectively, we send it into the network \(K\) times, each time with a random spatial transformation and for a different prediction head \(h^{k}\). The segmentation prediction result for the \(k\)-th head is: \[\mathbf{p}^{k}=\mathcal{T}_{k}^{-1}\circ h^{k}\big(g(\mathcal{T}_{k}\circ\mathbf{x})\big) \tag{2}\] where \(\mathcal{T}_{k}\) is a random spatial transformation and \(\mathcal{T}_{k}^{-1}\) is the corresponding inverse transformation. \(\mathbf{p}^{k}\in\mathcal{R}^{C\times H\times W}\) is the output segmentation probability map with \(C\) channels obtained by Softmax, where \(C\) is the class number for segmentation. In this paper, we set \(\mathcal{T}_{k}\) as random flipping and random rotation by \(\pi/2\), \(\pi\) and \(3\pi/2\) for efficient implementation. Second, we add a dropout layer before each of the prediction heads \(h^{k}\), so that the prediction heads take different random subsets of the features as input.
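To make the TDG forward pass of Eq. (2) concrete, the following is a minimal PyTorch-style sketch rather than the authors' released implementation: it assumes the encoder returns a single feature tensor (a real U-Net decoder would also consume skip connections), and names such as `tdg_forward`, `decoders` and `p_drop` are illustrative.

```python
import torch
import torch.nn as nn

def tdg_forward(encoder, decoders, x, p_drop=0.5):
    """Sketch of a TDG forward pass: one random spatial transformation T_k
    and a dropout perturbation per duplicated head h^k (Eq. (2)), followed
    by averaging the K probability maps for the ensemble."""
    preds = []
    for head in decoders:                         # K duplicated prediction heads
        n_rot = int(torch.randint(0, 4, (1,)))    # rotation by multiples of pi/2
        do_flip = bool(torch.rand(1) < 0.5)       # random flipping
        xt = torch.rot90(x, n_rot, dims=(-2, -1))
        if do_flip:
            xt = torch.flip(xt, dims=(-1,))

        feats = encoder(xt)                       # shared feature extractor g
        feats = nn.functional.dropout(feats, p=p_drop, training=True)
        prob = torch.softmax(head(feats), dim=1)

        if do_flip:                               # apply the inverse T_k^{-1}
            prob = torch.flip(prob, dims=(-1,))
        prob = torch.rot90(prob, -n_rot, dims=(-2, -1))
        preds.append(prob)

    preds = torch.stack(preds, dim=0)             # (K, B, C, H, W)
    return preds, preds.mean(dim=0)               # per-head maps and their mean
```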
Due to the image-level and feature-level perturbations, the \(K\) predictions are different for an input image. We then average across the \(K\) predicted segmentation probability maps for the ensemble: \[\bar{\mathbf{p}}=\frac{1}{K}\sum_{k=1}^{K}\mathbf{p}^{k} \tag{3}\] ### Twice Forward Pass Supervision with Reliable Pseudo Labels With the average probability prediction \(\bar{\mathbf{p}}\), we take an argmax to obtain the pseudo label for the input \(\mathbf{x}\). To reduce noise, we post-process it by only keeping the largest component for each foreground class in segmentation tasks where each foreground class has only one component (e.g., heart structure and fetal brain segmentation in this work). Then the post-processed pseudo label is converted into a one-hot representation, which is denoted as \(\tilde{y}\in\{0,1\}^{C\times H\times W}\). As the domain gap may limit the quality of the pseudo label \(\tilde{y}\), directly using \(\tilde{y}\) to supervise the network will lead to a limited performance. To deal with this problem, we use the uncertainty information in \(\bar{\mathbf{p}}\) to identify pixels with reliable pseudo labels and only use the reliable region to supervise the network. To achieve this, we define a binary reliability map \(M\in\{0,1\}^{H\times W}\) for \(\tilde{y}\), and each element in \(M\) is defined as: \[M_{n}=\begin{cases}1&\text{if }\bar{\mathbf{p}}_{c^{*},n}>\tau\\ 0&\text{otherwise}\end{cases} \tag{4}\] where \(n=1,2,...,HW\) is the pixel index, \(c^{*}=\arg\max_{c}(\bar{\mathbf{p}}_{c,n})\) is the class with the highest probability for pixel \(n\), and \(\bar{\mathbf{p}}_{c^{*},n}\) represents the confidence for the pseudo label at that pixel. \(\tau\in(1/C,1.0)\) is a confidence threshold. For pseudo label-based self-training, the model may be biased towards its own prediction in each iteration. To avoid this problem, Chen et al. [37] introduced cross supervision where two networks with different predictions guide each other to reduce the bias. However, the use of two networks would increase the computational and memory cost, and it is not suitable for SFDA where only one pre-trained model is provided. Inspired by Chen et al. [37] and to improve the robustness of pseudo label-based SFDA, we introduce Twice Forward pass Supervision (TFS) for robust adaptation. Specifically, for a batch of data in the training set, before each gradient back-propagation, we perform two consecutive forward passes. We employ the pseudo label \(\tilde{y}\) and its associated reliability map \(M\) obtained in the first forward pass to supervise the prediction heads in the second forward pass. Let \(\mathbf{p}^{\prime k}\) denote the output of the \(k\)-th prediction head in the second forward pass. Due to the use of random spatial transformation and dropout as mentioned above, the outputs of the two forward passes are different despite the same parameter values. Using \(\tilde{y}\) to supervise \(\mathbf{p}^{\prime k}\) can introduce a consistency regularization under perturbations, which improves the robustness of the network. The TFS loss is: \[\mathcal{L}_{TFS}=\frac{1}{K}\sum_{k=1}^{K}\mathcal{L}_{w-dice}(\mathbf{p}^{\prime k},\tilde{y},M) \tag{5}\] where \(\mathcal{L}_{w-dice}\) is the reliability map-weighted Dice loss for a single head. Here we use a Dice-based loss for pseudo label supervision, as Dice loss can better deal with class imbalance in segmentation tasks than cross entropy [38], and the segmentation performance is usually evaluated by Dice.
\[\mathcal{L}_{w-dice}=1-\frac{1}{C}\sum_{c=1}^{C}\frac{\sum_{n}2M_{n}\mathbf{p}_{c,n}^{ h}\tilde{y}_{c,n}}{\sum_{n}M_{n}(\mathbf{p}_{c,n}^{h}+\tilde{y}_{c,n})+\eta} \tag{6}\] where \(n\) is the pixel index and \(\eta=10^{-5}\) is a small number for numeric stability. ### Mean Prediction-based Entropy Minimization Entropy minimization is widely used for regularization in semi-supervised learning [19] and SFDA [13; 39; 40], which improves the model's confidence by minimizing the entropy of the class distribution in a prediction output. However, existing entropy minimization methods for SFDA are applied to networks with a single prediction head. For our method with multiple prediction heads, enforcing entropy minimization for each prediction head respectively may lead to sub-optimal results when different predication heads obtain opposite results with high confidence. For example, for binary segmentation, when \(h^{k}\) and \(h^{k+1}\) predict one pixel as being the foreground with probability of 0.0 and 1.0 respectively, both branches have the lowest entropy, but their average has a high entropy. To overcome this problem, we apply entropy minimization to the mean prediction across the \(K\) heads: \[\mathcal{L}_{ment}=-\frac{1}{HW}\sum_{n=1}^{HW}\sum_{c=1}^{C}\bar{\mathbf{p}}_{c,n }^{\prime}log(\bar{\mathbf{p}}_{c,n}^{\prime}) \tag{7}\] where \(\bar{\mathbf{p}}^{\prime}\) is the mean probability prediction obtained by the \(K\) heads in the second forward of TFS. Compared with minimizing the entropy of each prediction head respectively, minimizing the entropy of their mean prediction \(\bar{\mathbf{p}}^{\prime}\) can not only reduce the uncertainty of each head, but also encourage the consistency between them. Thus, it helps to improve the robustness of the network on samples in the target domain. ### Adaptation by Self-training Our adaptation method adopts a self-training process on unlabeled images in the target domain. Based on the pseudo labels obtained by TDG, the overall loss function for tuning the network with TFS in the target domain is: \[\mathcal{L}=\mathcal{L}_{TFS}+\lambda\mathcal{L}_{ment} \tag{8}\] where \(\lambda\) is a hyper-parameter to control the weight of \(\mathcal{L}_{ment}\). Note that there are two forward passes for each parameter update step, where the first forward pass obtains pseudo labels, and the loss function is calculated in the second pass for parameter update with back-propagation. ## 4 Experiments ### Datasets and Implementation We used three datasets for experiments: 1) the public Multi-centre, multi-vendor and multi-disease cardiac image segmentation (M&MS) dataset [41], where the images were acquired by devices with four different vendors, 2) an in-house Fetal Brain (FB) segmentation dataset that contains two different MRI sequences, and 3) a public Fetal Tissue Annotation (FeTA) dataset that contains two different super-resolution methods [42]. A summary of these three datasets is listed in Table 1. #### 4.4.1 Cardiac Image Segmentation Dataset (M&MS) The M&MS dataset [41] consists of 345 cardiac MRI volumes collected from six different hospitals, using four different scanner vendors, namely Siemens, Philips, General Electric, and Canon. The imaging devices were MAGNETOM Avanto for hospital 1, Achieva for hospital 2 and 3, Signa Excite, Vantage Orian, and MAGNETOM Skyra for hospital 4, 5 and 6, respectively. 
Following [41], we divide the dataset into four domains: Domain A for Siemens, comprising data from hospitals 1 and 6; Domain B for Philips, comprising data from hospitals 2 and 3; Domain C for General Electric, comprising data from hospital 4; and Domain D for Canon, comprising data from hospital 5. The slice number per volume varied from 10 to 13. The in-plane resolution ranged from 0.85 to 1.45 mm with slice thickness 9.2-10 mm. Following the setting in [40], we used domain A as the source domain, and B, C and D as the target domains. The target tissues for segmentation are the Left Ventricle (LV), Right Ventricle (RV) and Myocardium (MYO). We randomly split images in each domain into 70%, 10% and 20% for training, validation and testing, respectively, and abandoned labels for the training sets in the target domains. #### 4.2.2 Fetal Brain (FB) Segmentation Dataset The FB dataset had fetal MRI with two imaging protocols acquired from a single center, including 68 volumes of half-Fourier acquired single turbo spin-echo (HASTE) and 44 volumes of true fast imaging with steady state precession (TrueFISP). The slice number for each volume varied from 11 to 22, and the gestational age ranged in 21-33 weeks. The two sequences had an in-plane resolution of 0.64 to 0.70 mm and 0.67 to 1.12 mm respectively, with slice-thickness 6.5 - 7.15 mm and 6.5 mm, respectively. HASTE and TrueFISP were used as the source and target domains, respectively. We randomly split the images in each domain into 70%, 10% and 20% for training, validation and testing, respectively, and abandoned the labels of training images in the target domain. #### Iii-A3 Fetal Tissue Annotation (FeTA) Challenge Dataset The FeTA Dataset [42] used in this study was from the FeTA2022 challenge1 that aims to segment seven different tissues, namely External Cerebrospinal Fluid (ECF), Grey Matter (GM), White Matter (WM), Ventricles (Ven), Cerebellum (Cer), Deep Grey Matter (DGM), and Brain Stem (BS). The official dataset has 120 samples, but only 80 samples are publicly available after the challenge, and they were acquired from the University Children's Hospital Zurich (Kispi) using 1.5T and 3T clinical GE whole-body scanners. T2-weighted single-shot Fast Spin Echo sequences were acquired with an in-plane resolution of 0.5mm \(\times\) 0.5mm and a slice thickness of 3 to 5 mm. To obtain high-resolution fetal brain reconstructions, the mialSR super-resolution (SR) method [43] was used for 40 cases, while the Simple IRTK method [44] was used for the other 40 cases. We used the 40 cases reconstructed by Simple IRTK as the source domain, and the other 40 cases reconstructed by mialSR as the target domain. For each domain, the 3D SR volumes were divided into training, validation, and testing sets in the ratio of 70%, 10% and 20%, respectively. Footnote 1: [https://feta.grand-challenge.org/](https://feta.grand-challenge.org/) #### Iii-A4 Implementation Details All the experiments were implemented with PyTorch, using an NVIDIA GeForce RTX 2080Ti GPU. Our code is made available online2. For M&MS dataset and FB datasets that have a large slice thickness, we selected the widely used 2D UNet [35] to demonstrate the effectiveness of our method, as most medical image segmentation models are based on UNet-like structures [1]. The image intensity was clipped by the 1\({}^{st}\) and 99-th percentiles, and linearly normalized to [-1,1]. 
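As a small illustration of the intensity preprocessing described above (clipping to the 1st and 99th percentiles, linear normalization to [-1, 1], and center cropping), here is a minimal NumPy sketch; the function name and the omission of padding for slices smaller than the crop size are simplifying assumptions.

```python
import numpy as np

def preprocess_slice(img, lo_pct=1, hi_pct=99, out_size=256):
    """Clip to the [1st, 99th] percentile range, rescale linearly to [-1, 1],
    then center-crop to out_size x out_size (padding of small slices omitted)."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    img = np.clip(img, lo, hi)
    img = 2.0 * (img - lo) / (hi - lo + 1e-8) - 1.0   # linear map to [-1, 1]

    h, w = img.shape
    top, left = (h - out_size) // 2, (w - out_size) // 2
    return img[top:top + out_size, left:left + out_size]
```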
Each slice in the M&MS dataset was center cropped to 256\(\times\)256, and the slices in the FB dataset were resized to 256\(\times\)256. For the FeTA dataset, we cropped the 3D volumes based on the brain region during preprocessing, and used the 3D U-Net architecture [45] for implementation. Due to memory limitation, we cropped the images to a patch size of [32, 64, 64]. In the inference stage, we applied a sliding window using the same patch size with a stride of 50% to obtain the final segmentation results. During pre-training in the source domain, we trained the source model for 400 epochs with Dice loss, Adam optimizer and an initial learning rate of 0.01 that was decayed to 90% every 4 epochs. The model parameters with the best performance on the validation set in the source domain were used for adaptation. For adaptation in each target domain, we duplicated the decoder \(K\) times, and updated all the model parameters for 20 epochs with Adam optimizer and a fixed learning rate of \(10^{-4}\). Footnote 2: [https://github.com/HiLab-gi/UPL-SFDA](https://github.com/HiLab-gi/UPL-SFDA) The hyper-parameter setting was determined based on the labeled validation set of the target domain. Specifically, \(K=4\) and \(\lambda=1.0\). \(\tau\) was set to 0.95 for the M&MS and FeTA datasets, and 0.9 for the FB dataset, respectively. In the adaptation stage, for the M&MS and FB datasets, we set all the slices in a single volume as a batch, and for the FeTA dataset, the batch size was set to 4. After training, we used the checkpoint with the best performance on the validation set for inference. Fig. 2 shows the evolution of validation Dice, \(\mathcal{L}_{ment}\) and \(\mathcal{L}_{TFS}\). It can be observed that the loss functions converge in 20 epochs, and the best checkpoint was obtained at epoch 6 for M&MS B and C, 4 for M&MS D and 6 for the FB dataset, respectively. Fig. 2: Evolution of validation Dice, \(\mathcal{L}_{ment}\) and \(\mathcal{L}_{TFS}\) during adaptation. The black squares mark the epoch with the highest validation Dice. For quantitative evaluation of the volumetric segmentation results, we adopted the commonly used 3D Dice score and Average Symmetric Surface Distance (ASSD). As the slice thickness was large (6-10 mm) in the M&MS and FB datasets, we calculated ASSD values in units of pixels. ### Comparison with State-of-the-art Methods To verify the effectiveness of our proposed UPL-SFDA, we compared it with four state-of-the-art SFDA methods: 1) **PTBN**[32] that updates batch normalization statistics based on unlabeled training images in the target domain without loss functions for optimization; 2) **TENT**[12] that only updates the parameters of batch normalization layers by minimizing the entropy of model predictions in the target domain; 3) **TTT**[9] that uses an auxiliary decoder to predict the rotation angle of target-domain images, and the auxiliary task's loss is used to update the model parameters; and 4) **URMA**[21] that uses pseudo labels generated by a frozen main decoder to supervise auxiliary decoders.
We also compared our method with four naive methods: 1) **Source only**, where the model pre-trained in the source domain is directly used for inference in the target domain, which serves as the lower bound; 2) **Target only**, which uses training images and their labels in the target domain to train a model directly, without pre-training in the source domain; and 3) **Fine-tune train** and 4) **Fine-tune valid**, which mean that the model pre-trained in the source domain is fine-tuned with the annotated training and validation sets in the target domain based on fully supervised learning, respectively. In order to investigate the impact of ensembling, we conducted two additional experiments: 1) **Source only-Esb**, which refers to ensembling based on spatial transformations of input images for inference with the pre-trained source model; 2) **Ours w/o Esb**, where our method did not utilize any spatial transformations and made predictions using only one decoder. We implemented all the compared methods with the same backbone, i.e., UNet [35] for the M&MS and FB datasets, and 3D UNet [45] for the FeTA dataset, for a fair comparison. #### 4.2.1 Results for Cardiac Image Segmentation For the M&MS dataset, we used domain A as the source domain, and adapted the pre-trained model to domains B, C and D, respectively. Tables 2 and 3 show the quantitative comparison between the compared methods in terms of Dice and ASSD, respectively. It can be observed that "Target only" outperformed "Source only" substantially, showing the large domain gap between the source and target domains. For example, in target domain B, "Source only" achieved an average Dice of 87.54%, 75.50% and 81.50% for LV, MYO and RV, respectively, and the corresponding Dice values obtained by "Target only" were 91.13%, 84.37% and 87.27%, respectively. The second sections in Tables 2 and 3 show that all the compared methods outperformed "Source only". PTBN [32], TENT [12] and TTT [9] obtained a moderate improvement over "Source only". For example, in target domain B, they improved the average Dice for LV from 87.54% to 89.62%, 89.03% and 89.41%, respectively. URMA [21] obtained a higher Dice (90.38%) than these three methods, but it was inferior to our method (91.02%). The average Dice across the three target structures obtained by our method was 87.04%, 87.46% and 85.43% in the three target domains, respectively, compared with the corresponding values of 81.51%, 81.49% and 80.56% achieved by "Source only", showing that our method improved the average Dice scores by 5.53, 5.97 and 4.87 percentage points in the three target domains, respectively. In terms of average Dice values, our method outperformed "Fine-tune valid" and was close to "Target only" (\(p\)-value \(>\) 0.05) in target domain B, and better than "Fine-tune train", "Fine-tune valid" and "Target only" in target domain C. In target domain D, our method also outperformed "Target only". Note that "Target only" and "Fine-tune train" require annotations in the training set of the target domain, while our adaptation method could achieve a similar performance without the annotations. We also analyzed the effectiveness of the ensemble of multiple prediction heads with spatial transformations. Taking M&MS B as an example, "Source only-Esb" performed better than "Source only", indicating the positive effect of additional data augmentations for inference. In addition, "Ours w/o Esb" exhibited a decreased performance compared with our complete method.
This suggests that ensembling during inference plays a beneficial role in our approach. A visual comparison between different SFDA methods is shown in Fig. 3. Fig. 3: Qualitative comparison of different SFDA methods. The top three rows are from domains B, C and D of the M&MS dataset, respectively. The last two rows are from the target domains of the FB and FeTA datasets, respectively. Note that "Source only" achieved a poor performance, and the results of our method were closer to the ground truth than those of the other methods. #### 4.2.2 Results for Fetal Brain Segmentation We further investigated the performance of the compared methods on the FB dataset, with HASTE and TrueFISP as the source and target domains, respectively. The quantitative evaluation results are shown in Table 4. It can be observed that "Source only" and "Target only" achieved an average Dice of 84.09% and 88.85%, respectively, showing the large gap between the two domains. "Fine-tune train" outperformed "Target only", achieving an average Dice of 89.71%. The existing methods only achieved a slight improvement compared with "Source only", with the Dice values ranging from 84.12% to 85.84%. In contrast, our method largely improved it to 89.10%, which outperformed "Target only" and was close to "Fine-tune train" (\(p\)-value \(>\) 0.05). Our method achieved an average ASSD of 1.08 pixels, which was lower than those of the other SFDA methods. The qualitative comparison in the penultimate row of Fig. 3 shows that the existing methods tend to achieve under-segmentation of the fetal brain, while our method can successfully segment the entire fetal brain region with high accuracy. #### 4.2.3 Results for 3D Fetal Tissue Segmentation Quantitative evaluation results on the FeTA dataset in terms of Dice are shown in Table 5. It shows that "Source only" and "Target only" achieved an average Dice of 68.30% and 81.27%, respectively, indicating the large gap between the two domains. Our method increased the average Dice by 6.89 percentage points compared with "Source only", reaching 75.19%. In contrast, the existing methods had a lower performance than ours. The average Dice obtained by PTBN [32], TENT [12], TTT [9] and URMA [21] was 69.53%, 72.64%, 70.90% and 73.79%, respectively. The qualitative comparison in the last row of Fig. 3 demonstrates that our method outperformed the other methods in terms of segmentation performance. ### Ablation Analysis of Our UPL-SFDA #### 4.3.1 Effect of Hyper-parameters There are three important hyper-parameters specific to our method: the number of duplicated prediction heads \(K\), the confidence threshold \(\tau\) to select reliable pseudo labels for supervision, and the loss weight \(\lambda\). We first investigated the effect of \(K\) by setting it to 1 to 5, and the performance on the validation sets of the two datasets is shown in Fig. 4(a). It can be observed that \(K=1\) performed worse than larger \(K\) values, showing the superiority of using TDG. The performance on both datasets improved when \(K\) changed from 1 to 4, and \(K=5\) did not bring further performance improvement. Therefore, we set \(K=4\) for our method. Then we investigated how \(\tau\) affected the pseudo labels and the SFDA performance. Fig. 5 shows some examples of reliable pseudo labels with different \(\tau\) values. We found that a higher threshold \(\tau\) will lead to smaller reliable regions for each class, which helps to avoid the model being affected by unreliable regions of the pseudo labels.
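For concreteness, here is a compact sketch of how the reliability map of Eq. (4), the weighted Dice loss of Eq. (6) and the mean-prediction entropy of Eq. (7) can be computed; the tensor shapes, the handling of the batch dimension and the small stabilizing constants are illustrative assumptions rather than the authors' exact implementation.

```python
import torch

def reliability_map(p_bar, tau=0.95):
    # Eq. (4): a pixel is reliable if the probability of its argmax class
    # in the mean prediction exceeds the confidence threshold tau.
    conf, _ = p_bar.max(dim=1)                 # (B, H, W)
    return (conf > tau).float()

def weighted_dice_loss(p, y_onehot, M, eta=1e-5):
    # Eq. (6): reliability-map-weighted Dice loss for one prediction head.
    # p, y_onehot: (B, C, H, W) probabilities / one-hot pseudo labels; M: (B, H, W).
    Mc = M.unsqueeze(1)                        # broadcast the map over classes
    inter = (2.0 * Mc * p * y_onehot).sum(dim=(0, 2, 3))
    denom = (Mc * (p + y_onehot)).sum(dim=(0, 2, 3)) + eta
    return 1.0 - (inter / denom).mean()

def mean_entropy_loss(p_bar, eps=1e-8):
    # Eq. (7): entropy of the mean prediction across the K heads,
    # averaged over all pixels.
    return -(p_bar * torch.log(p_bar + eps)).sum(dim=1).mean()
```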
Quantitative comparison between different \(\tau\) values is demonstrated in Fig. 4(b), which shows that the performance on the M&MS dataset was relatively stable with different \(\tau\) values, and \(\tau=0.95\) performed slightly better than the other values on average. The best \(\tau\) value on the FB dataset was 0.90 based on performance on the validation set. Therefore, we set \(\tau\) to 0.95 and 0.9 for the two datasets, respectively. The performance on the validation set with different \(\lambda\) values is shown in Fig. 4(c). It demonstrates that the best \(\lambda\) was 1.0 for the different datasets. Fig. 6 shows the reliable pseudo labels obtained at different training epochs in the target domains. It can be observed that the pseudo labels are updated during the self-training process, and their quality gradually improves at different training epochs. In addition, the confidence of the pseudo labels also improves with the increase of training epochs. #### 4.3.2 Ablation Study of Each Component To evaluate the effectiveness of each of the proposed components in our UPL-SFDA, we set the baseline as updating the source model based on self-training, where the network was supervised by its own prediction and an entropy minimization loss. The quantitative results obtained by different variants of our method are shown in Table 6, where \(M\) means using the binary reliability map to weight pseudo labels, TDG means using target domain growing with dropout before each prediction head, and \(\mathcal{T}\) means using random spatial transformation for each prediction head. \(\mathcal{L}_{ment}\) means minimizing entropy of the mean prediction across the \(K\) heads, rather than minimizing entropy of each head respectively. Table 6 shows that each component of our method led to a performance improvement. Taking the performance on domain C of the M&MS dataset as an example, the average Dice score obtained by "Source only" was 81.47%. The baseline obtained an average Dice of 84.20%, and introducing reliability map weighting for pseudo labels improved it to 85.09%. For TDG, only using dropout for perturbations obtained an average Dice of 85.22%, and additionally using spatial transformation for the prediction heads improved it to 86.16%, showing that the spatial transformation plays an important role in our method. Then, using our Twice Forward pass Supervision (TFS) loss improved it to 86.79%, and our proposed method combining all these modules with \(\mathcal{L}_{ment}\) obtained the highest Dice score of 87.46%. Note that by removing the spatial transformation for the prediction heads in our method, the average Dice decreased to 85.43%. We also tried to only combine the \(\mathcal{L}_{ment}\) loss with TDG using the spatial transformations (i.e., removing the TFS loss), and the average Dice dropped to 86.94%. In addition, Table 6 shows that our method outperformed "Target only" on domains C and D in the M&MS dataset and the target domain of the FB dataset in terms of average Dice score. ## V Discussion The ensemble of multiple perturbed prediction heads introduced by TDG provides efficient uncertainty estimation, which prevents the model from being corrupted by unreliable pseudo labels. Using entropy minimization on the average prediction across the multiple heads can encourage a consistency between them, which also improves the robustness of our method. The pseudo label-based supervision loss \(\mathcal{L}_{w-dice}\) and the unsupervised regularization loss \(\mathcal{L}_{ment}\) have two similarities. First, both of them are based on multi-head agreement.
\(\mathcal{L}_{w-dice}\) uses the relatively consistent (consensus) regions of the \(K\) prediction heads as pseudo labels, and \(\mathcal{L}_{ment}\) encourages the \(K\) prediction heads to obtain consensus results by minimizing the uncertainty in the average prediction. Second, the two terms will increase the confidence of the predictions. \(\mathcal{L}_{w-dice}\) drives the predictions to be closer to the hard pseudo labels, while \(\mathcal{L}_{ment}\) directly minimizes the entropy, and both of them will reduce uncertain predictions. However, they also have several important differences. First, \(\mathcal{L}_{w-dice}\) encourages consistency across two different forward passes with feature perturbations, while \(\mathcal{L}_{ment}\) is for consistency across prediction heads. Second, \(\mathcal{L}_{w-dice}\) is applied to high-confidence pixels (with a threshold of \(\tau\)), while \(\mathcal{L}_{ment}\) is applied to the entire image region. Third, \(\mathcal{L}_{w-dice}\) is a pseudo label-based supervision loss, while \(\mathcal{L}_{ment}\) is an unsupervised loss for regularization. Therefore, the two terms are complementary to each other. Introducing perturbations to the \(K\) prediction heads in TDG is important for achieving good performance. Without perturbation, the \(K\) prediction heads will obtain the same result, which degrades to just using the pre-trained model with a single prediction head. With perturbations, the \(K\) prediction results are different and their ensemble is more robust, which can overcome the bias in each prediction head and lead to uncertainty estimation. Fig. 5: Effect of confidence threshold \(\tau\) on reliable pseudo labels. The first three rows are from domains B, C and D of the M&MS dataset, respectively, and the last row is from the target domain of the FB dataset. (c) shows pseudo labels obtained by argmax, and (d)-(h) are reliable pseudo labels with different \(\tau\) values, where uncolored regions are pixels with unreliable pseudo labels. In addition, we implemented our TDG with an encoder-decoder structure because most state-of-the-art CNNs for medical image segmentation have an encoder-decoder structure [35, 36]. It may also be applied to other networks [46] by duplicating the prediction head multiple times with perturbations in the target domain. In our experiments, a validation set with annotations in the target domain is used to select hyper-parameters for the compared methods. The advantage of using the labeled validation set is that it allows finding the optimal hyper-parameters, such as the learning rate and the weights of loss terms, for each compared method. In addition, it allows early stopping and checkpoint selection to avoid over-fitting on the training set in the target domain, which ensures a fair comparison between the different methods. One may also use the validation set to update the model weights by fine-tuning, which could provide more supervision signal directly to the model for parameter optimization. However, it may lead the model to over-fit the validation set, which is usually small. In addition, using the validation set for hyper-parameter selection rather than model learning is standard practice in the machine learning community. However, in some cases, the labeled validation set may not be available, making it less practical to use the validation set to fine-tune the pre-trained model. This work still has some limitations.
First, our method involves performing two forward passes for each gradient back-propagation, which takes more time than using a single forward pass. The training time consumption of our method is slightly higher than that of TENT [12], but lower than that of URMA [21]. For instance, in M&MS B, our method takes an average of 0.661s per case to train one epoch, while TENT and URMA require 0.342s and 0.944s on average, respectively. The average inference time of our method is 0.342s per case, which is slightly higher than TENT's 0.269s. Second, we have employed a labeled validation set in the target domain to select the optimal hyper-parameters. However, in practical applications, acquiring a validation set could be challenging, making it hard to determine hyper-parameters. Additionally, TDG with multiple prediction heads increases the memory cost, which does not allow a large patch size or batch size for dealing with 3D medical images and may limit the performance. Fig. 6: Pseudo labels at different training steps in self-training. \(\epsilon\) means the epoch number with the highest performance on the validation set. The first three rows are from domains B, C and D of the M&MS dataset, respectively, and the bottom row is from the target domain of the FB dataset. In (c)-(g), only reliable pseudo labels are encoded by colors, and pixels without encoded colors will be ignored in the calculation of the TFS loss. ## VI Conclusion In conclusion, we propose a novel uncertainty-aware pseudo label-guided approach for Source-Free Domain Adaptation (UPL-SFDA) in medical image segmentation, which uses target domain growing to generate multiple predictions for an input to obtain reliable pseudo labels with a weight map based on uncertainty estimation. The network is supervised by the weighted pseudo labels and by minimizing the entropy of the average of the multiple predictions. A twice forward pass supervision strategy is also proposed to avoid the network being biased towards its own predictions in self-training. Experimental results on multi-site heart MRI segmentation and cross-modality fetal brain segmentation showed that our method outperformed existing SFDA methods, and it was comparable to and even better than supervised training in the target domain. In the future, it is of interest to apply our method to other segmentation tasks.
2309.00057
Aspects of Machian Gravity (II): Testing Theory against Rotation Curves of 175 SPARC Galaxies
Machian Gravity (MG) presents a mathematical framework that captures the essence of Mach's principle. It was formulated to address the limitations of general relativity and provide a gravity theory founded on robust logical principles. Unlike the approach of modifying existing theories by introducing extra scalar and vector degrees of freedom to account for observational data, MG offers a more coherent alternative. Previous investigations have revealed MG's potential to explain diverse phenomena, such as galactic velocity patterns, galaxy cluster mass distribution, and cosmic expansion, without requiring additional dark components in the universe. This study applies the MG acceleration law to a wide array of galaxies sourced from the SPARC galactic database. Through meticulous analysis, we have determined the optimal parameters of the Machian gravity model for each individual SPARC galaxy, consequently fitting their distinctive rotational profiles. Similar to the Modified Newtonian Dynamics (MOND), our results suggest the presence of an acceleration scale linked to galaxies, governing their rotational behavior near the outer regions. Importantly, this acceleration scale exhibits variability across different galaxies, albeit typically remaining around the order of $10^{-8} {\rm cm/s^2}$.
Santanu Das
2023-08-31T18:02:14Z
http://arxiv.org/abs/2309.00057v1
# Aspects of Machian Gravity (II): Testing Theory against Rotation Curves of 175 SPARC Galaxies ###### Abstract Machian Gravity (MG) presents a mathematical framework that captures the essence of Mach's principle. It was formulated to address the limitations of general relativity and provide a gravity theory founded on robust logical principles. Unlike the approach of modifying existing theories by introducing extra scalar and vector degrees of freedom to account for observational data, MG offers a more coherent alternative. Previous investigations have revealed MG's potential to explain diverse phenomena, such as galactic velocity patterns, galaxy cluster mass distribution, and cosmic expansion, without requiring additional dark components in the universe. This study applies the MG acceleration law to a wide array of galaxies sourced from the SPARC galactic database. Through meticulous analysis, we have determined the optimal parameters of the Machian gravity model for each individual SPARC galaxy, consequently fitting their distinctive rotational profiles. Similar to the Modified Newtonian Dynamics (MOND), our results suggest the presence of an acceleration scale linked to galaxies, governing their rotational behavior near the outer regions. Importantly, this acceleration scale exhibits variability across different galaxies, albeit typically remaining around the order of \(10^{-8}\)cm/s\({}^{2}\). ## 1 Introduction According to Newton's gravitational theory, the rotation curves of galaxies should exhibit a Keplerian fall-off in the orbital rotational speed \(v\) at the outer edge of the galaxy, given by \(v^{2}\propto M(r)/r\), where \(M(r)\) represents the mass enclosed within the radius \(r\). However, observations tell a different story. Instead of the anticipated decline, rotation curves tend to flatten out [1; 2; 3; 4]. Contrary to expectations, velocities often increase towards the center of galaxies and then stabilize at around \(v\sim 200-300\,\mathrm{km/s}\). On occasion, the velocity may fall off or increase at large radii while still not complying with the Newtonian predictions. This inconsistency leads to a dynamic mass for galaxies that significantly surpasses their luminous mass. This behavior of galactic rotation curves can be accounted for by introducing additional invisible matter, or dark matter. This dark matter is presumed to exist in a spherical halo enveloping galaxies. The general consensus is that it consists of a cold, pressureless fluid. Its interaction with baryonic matter is extremely small but expected to be nonzero. Numerous candidates have been proposed to explain dark matter, with supersymmetric particles being a prominent contender [5]. However, the lack of experimental confirmation of such particles, especially from the Large Hadron Collider (LHC), strengthens alternative propositions such as axions and ultra-light scalar field dark matter, etc. [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. While dark matter may explain some of the velocity profiles, the evidence for dark matter relies on indirect observations. The observations only show that the baryonic matter present in a galaxy in the form of stars, neutral and high-temperature gas, etc., does not provide sufficient gravity to explain the observed accelerations in these systems, provided the calculations are made using the Newtonian law. Nevertheless, it is plausible that the Newtonian mechanics that holds in laboratories and the solar system might not be directly applicable to galaxies.
Adjusting the equations could potentially account for observed accelerations without invoking dark matter. In recent years, a range of theories has emerged in attempts to explain dark matter. Empirical theories like Modified Newtonian Dynamics (MOND) have successfully matched galactic velocity profiles, though it violates the momentum conservation laws [17; 18; 19; 20]. Consequently, a mathematically sound theory that can replicate the empirical achievements of MOND, may offer a plausible explanation for dark matter. Bekenstein proposed AQUAL to provide a mathematical foundation to MOND [21; 22; 23]. Other theories, such as Modified Gravity STVG [28], Tensor-Vector-Scalar (TeVeS) theory [24; 25; 26; 27], and Massive Gravity [29; 30; 31; 32], have also emerged to reproduce galactic velocity profiles without requiring dark matter. Higher-dimensional concepts like Induced Matter Theory has also been proposed by researchers [33; 34; 35; 36; 37]. However, all of these theories are proposed to explain observed data rather than being derived from a foundational logical principle like general relativity. A robust theory should ideally be grounded in a strong logical or mathematical foundation. The inception of the general theory of relativity (GR) was initially intended to provide a mathematical formulation for Mach's hypothesis. However, it became evident that GR did not align with Mach's principle. Despite this, as GR successfully accounted for various observations on the scale of the solar system, Einstein did not further attempt to reconcile this discrepancy or incorporate Mach's argument into the theory. In our previous work, we proposed a theory based on Mach's principle to address the shortcomings of GR and provide an explanation for the inertia of objects. The theory, named as Machian Gravity (MG) is a five-dimensional theory. It can be derived from an action principle, thereby ensuring compliance with conservation principles. The fifth dimension, which we refer to as the background dimension, is intricately linked to the distribution of distant matter across the universe and plays a role in the inertia of particles. It has been demonstrated in [38] that MG converges towards GR in the context of the solar system, assuming all other matter is significantly distant. However, it deviates from GR on the galactic scale. Our earlier studies also illustrated how MG can provide insights into spiral galactic velocity profiles, galaxy cluster mass, and cosmic expansion history without necessitating extra dark components in the universe [39, 40, 38]. This paper undertakes a comprehensive analysis of the SPARC dataset using the Machian gravity model. The Spitzer Photometry and Accurate Rotation Curves (SPARC) dataset comprises observational data from 175 spiral galaxies, furnishing detailed insights into their rotation curves and other characteristics. Within this study, we employ MG to demonstrate how galactic rotational profiles can be accurately expounded using Machian gravity alone, obviating the necessity for additional nonbaryonic dark matter. The structure of this paper is organized as follows. The second section briefly introduces the Machian gravity model and outlines its applicability in calculating galactic velocities. The third section provides a quick overview of the SPARC dataset. In the fourth section, we present our analysis of MG applied to the SPARC dataset. The final section encapsulates our conclusions and further discussion. 
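Before turning to the model, here is a minimal numerical illustration of the Keplerian expectation \(v^{2}\propto M(r)/r\) discussed in the introduction; the enclosed mass and radii below are purely illustrative and are not fits to any SPARC galaxy.

```python
import numpy as np

G = 4.30091e-6          # Newton's constant in kpc (km/s)^2 / M_sun
M_baryon = 1.0e11       # illustrative enclosed luminous mass in solar masses

radii = np.linspace(2.0, 30.0, 8)            # galactocentric radii in kpc
v_newton = np.sqrt(G * M_baryon / radii)     # Keplerian prediction v = sqrt(G M / r)

for r, v in zip(radii, v_newton):
    print(f"r = {r:5.1f} kpc   v_Newton = {v:6.1f} km/s")
# The predicted speed falls off as 1/sqrt(r) once the enclosed mass stops
# growing, whereas observed rotation curves typically flatten at roughly
# 200-300 km/s out to large radii.
```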
## 2 A brief discussion of Machian Gravity The laws of physics should remain unchanged regardless of the choice of the coordinate system. While the general theory of relativity was proposed to satisfy this postulate, it ultimately diverges from this principle. To illustrate the challenge posed by the general theory of relativity, let's consider a thought experiment. The velocity and acceleration of a particle are relative quantities, necessitating a reference frame from which to measure them. Imagine there are only two point masses in the universe, with one being significantly heavier than the other. Let's designate the heavier mass as A and the smaller mass as B, orbiting A in a circular path. We'll examine two coordinate systems, both centered at A, with the z-axis perpendicular to B's orbital plane. Suppose one of these coordinate systems is rotating with respect to the other at an angular velocity \(\omega_{z}\). Since there's no distant object in the universe to establish the inertial coordinate system, we face ambiguity in distinguishing between the inertial and non-inertial coordinate systems. If we assume that the gravitational force is balanced by B's centripetal acceleration, we need to know the angular velocity of B, which differs between the two reference frames. Consequently, the centripetal acceleration of B varies between these frames, rendering the measurement of gravitational acceleration impossible as there is no way to know the centrifugal acceleration. In fact, if we align the x-axis of a reference frame with particle B, B will have no centripetal acceleration in that reference frame, leading to zero/undefined gravitational force on B in that frame. Since there are no distant matter ( stars, or galaxies ) in the universe except these two particles, the reference frames are equivalent. Therefore, the laws of physics should remain independent of the chosen reference frame, and the absence of centrifugal force in one frame should imply its absence in the other. Einstein's general theory of relativity explains acceleration using the curvature of the coordinate system. In cases like this, it becomes impossible to discern the acceleration or predict which coordinate will experience curvature and to what extent. Consider the example of Newton's bucket, where two buckets filled with water rotate with respect to each other in the absence of other particles. Which bucket's water surface will curve? If we attach coordinate frames to both buckets, one frame rotates concerning the other, and both coordinate systems are equivalent in the absence of distant matter. If we assume curvature in one bucket's water surface, it implies different laws of physics in distinct reference frames, which is nonsensical. Thus, we must assume that the laws of physics apply only in the presence of distant objects. The motion of these distant objects (stars, galaxies, etc.) creates a gravitational field in accelerating objects, giving rise to inertial forces such as centrifugal, Coriolis, and Euler forces etc. This concept was initially mathematically derived by Sciama and then studied by various authors [41, 42, 43]. In a 5-dimensional coordinate system, as we have demonstrated in [38], all these forces can be attributed to the motion of background stars and galaxies. The fifth dimension, denoted as the background dimension, is responsible for particle inertia. 
### Static, Spherically symmetric, Vacuum solution for weak gravitation field The field equation describing MG is analogous to GR, but it operates in a five-dimensional space instead of the standard four dimensions. The five-dimensional line element can be represented as \(ds^{2}=\widetilde{g}_{AB}dx^{A}dx^{B}\), where \(\widetilde{g}_{AB}\) denotes the five-dimensional metric. Here, the indices \(A\), \(B\), etc., refer to the coordinates in the five-dimensional system, while Greek letters like \(\alpha\), \(\beta\), etc., are reserved for the four-dimensional system. The notation \(\widetilde{\cdot}\) signifies quantities in the five-dimensional context. When considering a vacuum, the field equation for MG becomes \(\widetilde{G}_{AB}=0\), which upon manipulation can be expressed as \(\widetilde{R}_{AB}=0\), where \(\widetilde{R}_{AB}\) denotes the Ricci tensor. In the context of a weak gravitational field, the metric can be approximated as a perturbation over the Minkowski metric, written as \(\widetilde{g}_{AB}=\widetilde{\eta}_{AB}+\widetilde{\gamma}_{AB}\). Here, \(\widetilde{\eta}_{AB}\) is the Minkowski metric defined by \(\widetilde{\eta}_{AB}=\text{diag}(c^{2},-1,-1,-1,-\frac{\hbar^{2}}{4})\), and \(\widetilde{\gamma}_{AB}\) represents the metric perturbation. For weak gravitational fields, the dominant component is the time component. Consequently, the 00 component of the Ricci tensor, \(\widetilde{R}_{00}=\widetilde{R}_{0C0}^{C}\), simplifies to a form involving the Riemann tensor. The Riemann tensor can be expressed as: \[\widetilde{R}_{0A0}^{B}=\partial_{A}\widetilde{\Gamma}_{00}^{B}-\partial_{0} \widetilde{\Gamma}_{A0}^{B}+\widetilde{\Gamma}_{AC}^{B}\widetilde{\Gamma}_{0 0}^{C}-\widetilde{\Gamma}_{0C}^{B}\widetilde{\Gamma}_{A0}^{C}\;. \tag{1}\] The second term here is a time derivative, which vanishes for static fields. The third and fourth terms are of the form \((\widetilde{\Gamma})^{2}\), and since \(\widetilde{\Gamma}\) is first-order in the metric perturbation, these contribute only at second order and can be neglected, giving \[\widetilde{R}_{00}=\widetilde{R}_{0A0}^{A}=\partial_{A}\left(\frac{1}{2} \widetilde{g}^{AC}\left(\partial_{0}\widetilde{g}_{C0}+\partial_{0}\widetilde{ g}_{0C}-\partial_{C}\widetilde{g}_{00}\right)\right)=-\frac{1}{2}\widetilde{g}^{ AB}\partial_{A}\partial_{B}\widetilde{\gamma}_{00}\;. \tag{2}\] For the static solution, the time derivative also vanishes, and the equation becomes \[\partial_{\zeta}^{2}\widetilde{\gamma}_{00}+\partial_{x}^{2}\widetilde{\gamma }_{00}+\partial_{y}^{2}\widetilde{\gamma}_{00}+\partial_{z}^{2}\widetilde{\gamma }_{00}=0\,. \tag{3}\] Here \(\zeta\) is the fifth dimension, which we also sometimes refer to as the background dimension. It is somehow related to the matter distribution in the entire universe and is responsible for the inertial properties of matter [38]. Under the assumption of spherical symmetry of the special part, it can be written as \[\partial_{\zeta}^{2}(r\widetilde{\gamma}_{00})+\partial_{r}^{2}(r\widetilde{ \gamma}_{00})=0\,. \tag{4}\] Using'separation of variables' and considering \((r\widetilde{\gamma}_{00})=R(r)\chi(\zeta)\), we get \[\frac{1}{R}\frac{\partial^{2}R}{\partial r^{2}}=-\frac{1}{\chi}\frac{\partial^ {2}\chi}{\partial\zeta^{2}}=\lambda^{2}\,, \tag{5}\] where, \(\lambda\) is a constant. This gives \[R=P_{1}e^{\lambda r}+P_{2}e^{-\lambda r}\,,\qquad\qquad\chi=Q_{1}\cos(\lambda \zeta)+Q_{2}\sin(\lambda\zeta)\,, \tag{6}\] where, \(P_{1}\), \(P_{2}\), \(Q_{1}\) and \(Q_{2}\) are constants. 
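As a quick consistency check on this separation-of-variables step, the following minimal sketch (assuming SymPy is available) verifies symbolically that the factors in Eq. (6) satisfy Eq. (5), so that their product solves Eq. (4).

```python
import sympy as sp

r, zeta, lam = sp.symbols('r zeta lam', positive=True)
P1, P2, Q1, Q2 = sp.symbols('P1 P2 Q1 Q2')

# Radial and background factors of Eq. (6)
R = P1*sp.exp(lam*r) + P2*sp.exp(-lam*r)
chi = Q1*sp.cos(lam*zeta) + Q2*sp.sin(lam*zeta)

# Eq. (5): R''/R = lam**2  and  -chi''/chi = lam**2
print(sp.simplify(sp.diff(R, r, 2) - lam**2 * R))         # -> 0
print(sp.simplify(sp.diff(chi, zeta, 2) + lam**2 * chi))  # -> 0

# The product R*chi therefore satisfies Eq. (4):
# d^2(r*gamma_00)/dzeta^2 + d^2(r*gamma_00)/dr^2 = 0
rg = R * chi
print(sp.simplify(sp.diff(rg, zeta, 2) + sp.diff(rg, r, 2)))  # -> 0
```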
The term \(\tilde{\gamma}_{00}\), under the weak-field approximation, is a first-order perturbation, and it cannot increase exponentially with distance. Therefore, taking \(P_{1}=0\), we can get \[(r\widetilde{\gamma}_{00})=S+P_{2}e^{-\lambda r}\left(Q_{1}\cos(\lambda\zeta)+ Q_{2}\sin(\lambda\zeta)\right)\,. \tag{7}\] \(S\) is a constant, coming from the complementary function of the differential equation. If we consider that over a region (a scale of the order of a galaxy) the background is almost uniform, then the change in \(\zeta\) is really small. Therefore, here we may just take \(\lambda\zeta\sim 0\). There is a constant factor of \(\hbar\) multiplied, which is also very small. So, in this limit \(\cos(\lambda\zeta)\to 1\) and \(\sin(\lambda\zeta)\to 0\). Relating it with Newtonian gravity, we get \(\tilde{\gamma}_{00}=2\Phi\), where \(\Phi\) is the Newtonian potential of the gravitational field. Replacing these limiting values in Eq.(7), substituting \(P_{2}Q_{1}=2KM\) and \(S=2(1+K)M\), and replacing \(\tilde{\gamma}_{00}=2\Phi\), we can get the potential as \[\Phi=\frac{GM}{r}\left[1+K\left(1-e^{-\lambda r}\right)\right]\,. \tag{8}\] Here, \(M\) is the mass at the center and \(G\) is Newton's gravitational constant. \(\lambda\) and \(K\) are background-dependent quantities. They may depend on the mass \(M\) but are independent of \(r\). Observations of galactic velocity profiles indicate that \(\lambda^{-1}\) typically falls within the order of a few kpc. For small values of \(r\), the exponential term \(e^{-\lambda r}\) approaches unity. Consequently, the potential \(\Phi\) assumes the shape of the Newtonian potential, given by \(\Phi=\frac{GM}{r}\). This alignment with the Newtonian potential is particularly significant on the scale of the solar system. In the asymptotic limit of \(r\to\infty\), the exponential term goes to 0. Hence, for large values of \(r\), it becomes \((1+K)\) times the Newtonian potential and can provide additional gravitational force in large gravitationally balanced systems, such as galaxies, galaxy clusters, etc. A similar form of potential has previously been used by other groups to explain the galactic velocity profile correctly [24, 25, 26, 27, 44]. It is important to emphasize that both \(K\) and \(\lambda\) should remain independent of the mass of the gravitating objects, as any such dependence would lead to a violation of symmetry. To illustrate this point, consider a scenario where two particles mutually exert gravitational attraction on each other. If \(K\) were to rely on the mass of just one of these particles, the equation governing gravitational energy would become asymmetric with respect to the masses of both particles. Consequently, in order to preserve symmetry, \(K\) and \(\lambda\) are not influenced by the masses of the interacting entities. Instead, \(K\) and \(\lambda\) are shaped by the combined mass distribution of all other particles that exist nearby. Specifically, if there were only two particles in the universe, then \(K\) should be 0. However, upon introducing a third particle, its presence alters the gravitational interaction between the initial two particles through the \(K\) and \(\lambda\) terms. This also aligns with the fundamental concepts of Mach's principle, which proposes that local physical phenomena are interconnected with the distribution of matter throughout the universe. In essence, the influence of neighboring masses comes into the gravitational energy through \(K\) and \(\lambda\). 
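To make these two limits of Eq. (8) concrete, the short sketch below evaluates the ratio of the Machian potential to the Newtonian one; the numerical values of \(M\), \(K\) and \(\lambda\) are illustrative placeholders, not fitted quantities.

```python
import numpy as np

G = 4.30091e-6    # gravitational constant in kpc (km/s)^2 / M_sun
M = 5.0e10        # illustrative central (baryonic) mass in M_sun
K = 15.0          # illustrative enhancement factor
lam = 1.0 / 5.0   # illustrative lambda, with lambda^-1 = 5 kpc

def phi_newton(r):
    return G * M / r

def phi_mg(r):
    # Machian potential of Eq. (8)
    return (G * M / r) * (1.0 + K * (1.0 - np.exp(-lam * r)))

for r in [0.01, 0.1, 30.0, 300.0]:   # radii in kpc
    print(f"r = {r:7.2f} kpc,  Phi_MG / Phi_Newton = {phi_mg(r) / phi_newton(r):6.3f}")
# For r << lambda^-1 the ratio stays close to 1 (Newtonian limit);
# for r >> lambda^-1 it approaches 1 + K.
```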
As the potential due to a static spherically symmetric gravitational field is given by Eq.(8), we can calculate the acceleration due to the gravitational field as \(\frac{\partial\Phi}{\partial r}\). If a particle orbits the mass \(M\) in a circular orbit of radius \(r\), and its orbital velocity is \(v\), then we can calculate \(v\) by equating the centripetal force with the gravitational field, giving \[v^{2}=\frac{GM}{r}\left[1+K\left(1-e^{-\lambda r}\left(1+\lambda r \right)\right)\right]=\frac{GM(1+K)\lambda}{\lambda r}\left[1-\alpha e^{- \lambda r}\left(1+\lambda r\right)\right]\,. \tag{9}\] where \(\alpha=\frac{K}{1+K}\). The velocity has an interesting property. For \(\alpha\in(0.92,0.95)\) and \(\lambda r\in(0.4,2.5)\) the velocity becomes almost independent of \(r\). This can be seen in Fig. 1.

Figure 1: For different \(\alpha\) there is a range of \(\lambda r\) for which the curve simply flattens out.

From the range of \(\alpha\) we can derive the range of \(K\) to be \((11,19)\). The velocity in the outer part of spiral galaxies (rotationally bound systems) does not decrease with increasing radius, as suggested by the Keplerian velocity. In fact, it is almost independent of the radius \(r\). Therefore, Eq. 9 can be used to explain the velocity profile of spiral galaxies. This was first explained in [45, 46]. For this particular range of \(r\), the velocity of the test particle behaves as \(v^{2}\sim GM(1+K)\lambda\). However, according to the Tully-Fisher relation, the mass of a spiral galaxy is linked to its asymptotic velocity as \(M\sim v^{\gamma}\), where \(\gamma\in(3.5,4)\). If we assume that \(M\sim v^{4}\), then we can take \[(1+K)\propto\frac{1}{\sqrt{M}}\qquad\Longrightarrow\qquad K= \sqrt{\frac{M_{c}}{M}}-1 \tag{10}\] Here \(M_{c}\) is some constant mass. Putting everything together, the expression for the final velocity becomes \[v^{2}=\frac{GM}{r}\left[1+\left(\sqrt{\frac{M_{c}}{M}}-1\right) \left(1-e^{-\lambda r}\left(1+\lambda r\right)\right)\right]\,. \tag{11}\] This equation follows the Newtonian velocity for a particle in orbit for \(\lambda r\ll 1\). For \(\lambda r\in(0.4,2.5)\), the velocity becomes constant and follows the Tully-Fisher relation, i.e., \(v^{4}\sim M\). Finally, for \(\lambda r\gg 2.5\), it behaves as \(v^{2}\sim\frac{\sqrt{M}}{r}\). At this point, we would also like to point out that \(K\) is a background-dependent quantity. That \(K\) follows the relation in Eq. 10 for spiral galaxies does not imply that \(K\) should follow a similar expression for any kind of mass distribution. For other kinds of mass distribution, the form of \(K\) may vary. ## 3 A brief overview of SPARC galaxy database

Figure 2: The graph illustrates the measured velocities at the outer edges of 175 SPARC galaxies against the masses of these galaxies. Additionally, a blue straight line represents the optimal fit through these data points, accompanied by the equation characterizing this linear fit. The graph provides \(M\sim v^{3.85}\), aligning with the expectations of the Tully-Fisher relation.

An ideal galaxy sample should include all the galaxies within a sufficiently extensive volume of the universe [47]. However, in practice, such a sample can never exist as there will be limitations on the 
SPARC (Spitzer Photometry and Accurate Rotation Curves) database provides a sample of 175 nearby galaxies with new surface photometry at 3.6\(\mu\)m and high-quality rotation curves from previous HI/H\(\alpha\) studies [49]. SPARC rotation curves are drawn from multiple papers. The rotation curves are generally smooth but can show large-scale features with a direct correspondence in the surface brightness profile, in agreement with Renzo's rule: "For any feature in the luminosity profile, there is a corresponding feature in the rotation curve and vice versa" [50, 51]. The baryonic velocity is divided into three components, the gas velocity \(v_{gas}\), disk velocity \(v_{disc}\), and the bulge velocity \(v_{bul}\). The total baryonic velocity is defined as \[v_{bar}=\sqrt{\epsilon_{gas}v_{gas}^{2}+\epsilon_{disc}\gamma_{disc}v_{disc}^ {2}+\epsilon_{bul}\gamma_{bul}v_{bul}^{2}}\;. \tag{1}\] \(\gamma_{disc}\) and \(\gamma_{bul}\) are the mass to light ratio of the stars in the disc and the bulge. \(\epsilon_{...}\) represents the signature of different components of the velocities, which in some cases can be negative. Most importantly \(V_{gas}\) can sometimes be negative in the innermost regions: this occurs when the gas distribution has a significant central depression and the material in the outer regions exerts a stronger gravitational force than that in the inner parts [49]. The values of \(v_{disc}\) and \(v_{bul}\) are provided for \(\gamma_{disc},\gamma_{bul}=1\). Ideally \(v_{obs}>v_{bar}\). However, in the SPARC dataset there are some galaxies for which \(v_{bar}/v_{obs}>1\), mostly at the low radius. There are many such galaxies however, the issue is severe for about 21 galaxies. This may be due to the fixed \(\gamma\) value as described by [49]. For our calculations, we have used \(v_{bar}\) given in the SPARC data set and then we obtain \(M(r)\) inside a radius \(r\) using \(M(r)=v_{bar}^{2}r/G\). In Fig. 2, we have plotted the velocity at the outermost data point of all 175 galaxies against mass of the galaxies. The best linear fit is given by Figure 3: The figure illustrates the relationship between mass discrepancy to four variables: \(r\), \(v/r\), \(a_{N}\), and \(\lambda r\). These plots are based on a dataset comprising 2385 data points from the SPARC galaxies. None of the plots exhibit a very prominent correlation. Specifically, the top two plots display relatively weaker correlations and appear more dispersed, whereas the bottom two plots show relatively stronger correlations. \[\log(v)=0.26\log(M)-0.5989\,. \tag{10}\] This is equivalent to \(M\sim v^{3.85}\), which is in agreement with the Tully-Fisher relation [52]. ## 4 Testing the theory against observations The core objective of this article is to investigate whether Machian gravity can offer an explanation for observed galactic phenomena. We use Eq. 11 to fit it with 175 SPARC galaxies. The primary parameters in our analysis are \(M_{c}\) and \(\lambda^{-1}\), which are determined for each of the galaxies using MCMC analysis. The results have been shown in Table 1. Fig. 3 displays the mass discrepancy of the galaxies against various parameters. This discrepancy is quantified through the ratio of observed acceleration to the Newtonian acceleration, computed from the luminous mass of the galaxy. Mathematically, this can be expressed as [53]: \[\frac{a}{a_{N}}=\frac{v_{obs}^{2}}{v_{bar}^{2}}\;. 
\tag{11}\] In our investigation, we exclude 21 galaxies with \(v_{bar}/v_{obs}>1\), as these are likely a consequence of fixed mass-to-light ratio selections, as discussed in [49]. Among the remaining 154 galaxies, several data points exhibit \(v_{bar}/v_{obs}>1\) at smaller radii, although the errors may not be severe. Overall, we plot a total of 2385 data points. Different galaxies have distinct rotation curves. Given the substantial dataset encompassing numerous galaxies, it is expected that the resulting plot would exhibit a scattered pattern. This is indeed evident in the first plot (top-left) of Fig. 3, which illustrates mass discrepancies in relation to radius. We can see that there are galaxies where the mass discrepancy is not apparent at low radii while there are other galaxies where the mass discrepancy kicks in even at a very small radius. The lack of a fixed radius where mass discrepancy consistently occurs differs from what one might anticipate if a purely baryon-independent cold dark matter governed the galaxy's dynamics. A similar scenario emerges in the subsequent plot (top-right), illustrating mass discrepancy against orbital angular velocity. The plot shows a slightly higher correlation with the mass discrepancy. However, the scatter plot highlights the absence of a high correlation between the two. In contrast, the lower-left plot depicting mass discrepancy against Newtonian acceleration reveals a noticeable correlation. Generally, the mass deficiency becomes apparent beyond an acceleration of \(a=10^{-8}\,\mathrm{cm/s^{2}}\). Consequently, it establishes a connection between the Newtonian acceleration and mass deficiency. Various authors have explored this linkage in previous works, including [53, 54, 55]. The last (bottom-right) plot depicts the mass deficiency against \(\lambda r\), where the best-fit \(\lambda\) value has been individually computed for each galaxy. Once again, a robust correlation emerges between mass discrepancy and \(\lambda r\). Mass discrepancy becomes apparent for \(\lambda r>0.3\), and this pattern of discrepancy is prevalent in nearly all galaxies within the range of \(\lambda r\in(0.3,5)\). This range aligns with our earlier expectations from the preceding section. This observation bears significant implications, particularly concerning theories such as MoG and TeVeS, which introduce additional massive scalar and vector fields. However, the fixed mass of these fields renders them unable to account for this observed relationship. These trends can be explained through mathematical analysis. Referring to Eq. 11, it is evident that the equation gives a radius-independent velocity as \(\lambda r\to 1\). Therefore, expanding the velocity in a Taylor series centered around \(\lambda r\to 1\) we can get: \[v^{2} = \frac{GM}{r}\left[1+\left(\sqrt{\frac{M_{c}}{M}}-1\right)\left(1-e^{ -1}\left(3-\lambda r\right)\right)\right]\] \[= \frac{GM}{r}\left[3e^{-1}+\left(\sqrt{\frac{M_{c}}{M}}\lambda re^ {-1}-\lambda re^{-1}\right)+\left(1-3e^{-1}\right)\sqrt{\frac{M_{c}}{M}}\right]\] \[\frac{v^{2}}{r} \approx \frac{GM}{r^{2}}\left[1.1+0.37\sqrt{\frac{\frac{GM_{c}}{\lambda^ {-2}}}{\frac{GM}{r^{2}}}}-0.37\lambda r-0.1\sqrt{\frac{M_{c}}{M}}\right] \tag{10}\] Now provided \(\lambda r>0.27\) the second term will dominate over the 4th term. Also, if we assume that \(\frac{M_{c}}{M}\sim 10-50\), then the third term is also small. 
Therefore, if we ignore these terms, we can write the equation as \[a\approx a_{N}\left[0.73+0.37\sqrt{\frac{a_{0}}{a_{N}}}\right]=a_{N}\mu\left( \frac{a_{0}}{a_{N}}\right)\,. \tag{11}\] Here, \(a_{N}=\frac{GM}{r^{2}}\) is the Newtonian acceleration, while \(a_{0}=\frac{GM_{c}}{\lambda^{-2}}\) is a characteristic acceleration. \(\mu\) is a function. Consequently, near the galaxy's periphery or when \(\lambda r\to 1\), our equation behaves as MOND. Therefore, the mass discrepancy is predominantly linked to acceleration, with a slight dependence on \(\lambda r\). This \(\lambda r\to 1\) dependence is consistent with the findings in Fig. 3. As the mass discrepancy is influenced by \(a_{0}=\frac{GM_{c}}{\lambda^{-2}}\), it also shows that \(M_{c}\) and \(\lambda\) are not entirely independent. Instead, we should expect a strong correlation between \(M_{c}\) and \(\lambda^{-2}\). In Fig. 4, we present the distributions derived from our MCMC analysis for two randomly selected galaxies, NGC0247 and UGC12732. The first two plots in the upper row show the distributions for \(\lambda^{-1}\) and \(M_{c}\), respectively, while the third plot depicts their correlation. Evidently, a robust correlation emerges between \(\lambda^{-2}\) and \(M_{c}\), showing the influence of \(a_{0}=\frac{GM_{c}}{\lambda^{-2}}\) on gravitational field. The second row displays the \(\chi^{2}\) distribution against \(\lambda^{-1}\), \(M_{c}\), and \(a_{0}\) respectively. Strikingly, the \(\chi^{2}\) distributions consistently exhibit a single minimum across all these parameters. This observation bears significance. If the gravitational equation were governed by MOND-like formulations where the gravitational force exclusively depends on a characteristic \(a_{0}\), we would not get such singular minima in the \(\chi^{2}\) plots when plotted against \(\lambda^{-1}\) or \(M_{c}\). This is because if \(a_{0}\) constituted the sole parameter of the theory, there would be an infinite number of possible combinations of \(\lambda\) and \(M_{c}\) resulting in the same \(a_{0}\) value. While we show only two galaxies, the analysis is carried out on all 175 galaxies. Remarkably, apart from a few exceptions, all galaxies exhibited a singular minimum \(\chi^{2}\) value, further reinforcing the consistency of the findings. To sum up, in Fig. 5, we have shown the rotational curves of the 154 SPARC galaxies. The red dots with green errorbars represent the observed velocities from the SPARC dataset. The blue dotted line shows the expected velocity profile from the baryonic matter calculated using standard Newtonian mechanics. The red line represents Machian Gravity fit to the velocity profile. It is interesting to see that many galaxies have unique velocity profiles. Importantly, each feature within the observed velocity profiles corresponds to a corresponding feature in the velocity profiles calculated from the baryonic matter using Newtonian dynamics. Such correspondence poses a challenge for explanations invoking dark matter, which lacks strong coupling to baryonic matter. Intriguingly, the Machian Gravity model is remarkably effective in explaining the velocity profiles, entirely avoiding the need for dark matter. Fig. 6 depicts the rotational profile for the remaining 21 galaxies. A visual inspection of these plots suggests potential issues with the mass model for these galaxies. 
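For reference, a minimal sketch of how such a per-galaxy fit of Eq. 11 can be set up is given below. It assumes the arrays r, v_obs, v_err and v_bar for one galaxy have already been read from the SPARC tables, and it uses a simple least-squares optimizer as a stand-in for the MCMC analysis employed in this work, so it only illustrates how \(M_{c}\) and \(\lambda^{-1}\) enter the model.

```python
import numpy as np
from scipy.optimize import least_squares

G = 4.30091e-6  # kpc (km/s)^2 / M_sun

def v_model(r, v_bar, Mc, lam_inv):
    """Rotation velocity of Eq. 11, built from the observed baryonic curve.

    r       : radii in kpc
    v_bar   : baryonic velocity in km/s, so that M(r) = v_bar^2 r / G
    Mc      : characteristic mass in M_sun
    lam_inv : length scale lambda^-1 in kpc
    """
    M = v_bar**2 * r / G
    lam_r = r / lam_inv
    K = np.sqrt(Mc / M) - 1.0
    v2 = (G * M / r) * (1.0 + K * (1.0 - np.exp(-lam_r) * (1.0 + lam_r)))
    return np.sqrt(v2)

def fit_galaxy(r, v_obs, v_err, v_bar):
    """Least-squares fit of (Mc, lambda^-1) for one galaxy."""
    def residuals(p):
        log_Mc, lam_inv = p
        return (v_model(r, v_bar, 10.0**log_Mc, lam_inv) - v_obs) / v_err
    sol = least_squares(residuals, x0=[11.0, 5.0],
                        bounds=([8.0, 0.1], [14.0, 100.0]))
    log_Mc, lam_inv = sol.x
    a0 = G * 10.0**log_Mc / lam_inv**2   # characteristic acceleration in these units
    return 10.0**log_Mc, lam_inv, a0
```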
In these cases, the velocity derived from the baryonic mass using Newtonian mechanics significantly exceeds the observed velocities, hinting at potential inconsistencies between the model and the data. Table 1 provides a summary of our findings, showcasing the best-fit values for \(M_{c}\), \(\lambda^{-1}\), and \(a_{0}\) across various galaxies. These values are accompanied by other galaxy-specific details like baryonic mass and radius. Notably, we observe that while \(\lambda^{-1}\) varies significantly from galaxy to galaxy, for 147 out of 175 galaxies \(\lambda r\) falls within the range \((0.1,5)\). This suggests that using a fixed value of \(\lambda\), as is often done, might not fully capture the complexity of all velocity profiles [25, 56]. Moreover, we notice a considerable variation in the acceleration parameter \(a_{0}\) among different galaxies. Even though these values might roughly be on the order of \(10^{-8}\)cm/s\({}^{2}\), the extent of this variation indicates that a single acceleration value may not be sufficient to comprehensively explain all galaxies, which contrasts with claims made by MOND theories.

Figure 4: The illustration portrays the distributions of \(\lambda^{-1}\) and \(M_{c}\), their correlation, and the relationship of \(\chi^{2}\) with these parameters, alongside \(a_{0}=\frac{GM_{c}}{\lambda^{-2}}\). In the figure, \(\lambda^{-1}\) is scaled in kiloparsecs (kpc) and \(M_{c}\) is scaled in solar masses (\(M_{\odot}\)). Notably, the value of \(a_{0}\) for the best-fit \(\chi^{2}\) is approximately on the order of \(10^{-8}\) cm/s\({}^{2}\). The value of \(R_{max}\) corresponds to the radius of the last velocity data point for each galaxy. Importantly, \(\lambda^{-1}\) aligns roughly with the order of \(R_{max}\).

## 5 Discussion and Conclusion This study shows how the Machian gravity (MG) model, as proposed in [38, 39, 40, 57], can explain spiral galactic velocity profiles using the SPARC database. Spiral galaxies, being rotationally bound systems, require dark matter to account for their velocity profiles when analyzed with Newtonian dynamics. Over time, various modified gravity theories have been put forth to explain these velocity profiles in spiral galaxies empirically. Our investigation demonstrates how Machian gravity aligns with these empirical formulations. The velocity profile for spiral galaxies in MG is characterized by Eq. 11. It has two parameters: a characteristic mass scale \(M_{c}\) and a length scale \(\lambda^{-1}\). Our analysis reveals that the length scale \(\lambda^{-1}\) is comparable with the radius of the galaxy and varies across different galaxies. Consequently, the conventional approach of adopting a fixed \(\lambda\), as postulated by some prior researchers [25], may not comprehensively capture the intricacies of galactic velocity profiles. Furthermore, we establish that near the edge of a galaxy, where \(\lambda r\to 1\), an acceleration scale emerges, \(a_{0}=\frac{GM_{c}}{\lambda^{-2}}\). In this region, the stellar acceleration tends to a MOND-like behavior dictated by this acceleration scale. However, our analysis also shows that this acceleration scale diverges for different galaxies, depending on their individual structures. 
Although these accelerations approximately measure around \(10^{-8}\)cm/sec\({}^{2}\), their variability across galaxies suggests that a single acceleration scale might not completely account for the entire dataset.Our findings also reveal that beyond acceleration, a galaxy's mass discrepancy also hinges on \(\lambda r\), a factor not previously explored in existing research. Importantly, while several other modified gravity has been designed to explain observational phenomena, the Machian gravity model stands apart from these theories as it emerges from a purely mathematical quest to formulate Mach's principle. Notably, this model adeptly explains the galactic velocity profiles of 154 SPARC galaxies, demonstrating remarkable agreement. For the remaining 21 galaxies, discrepancies in the mass model are apparent, suggesting potential inaccuracies. We are confident that refining the mass model for these galaxies through parameter adjustments, such as varying the mass-to-light ratio, could potentially align their velocities with the predictions of the Machian gravity model.
2309.09926
Exponential approximation space reconstruction WENO scheme for dispersive PDEs
In this work, we construct a fifth-order weighted essentially non-oscillatory (WENO) scheme with an exponential approximation space for solving dispersive equations. A conservative third-order derivative formulation is developed directly using the WENO spatial reconstruction procedure, and a third-order TVD Runge-Kutta scheme is used for the evaluation of the time derivative. This exponential approximation space contains a tension parameter that may be optimized to fit the specific features of the characteristic data, yielding better results without spurious oscillations compared to the polynomial approximation space. A detailed formulation is presented for the construction of the conservative flux approximation, smoothness indicators, and nonlinear weights, and it is verified that the proposed scheme provides the required fifth order of convergence. One- and two-dimensional numerical examples are presented to support the theoretical claims.
Lavanya V Salian, Samala Rathan
2023-09-18T16:42:14Z
http://arxiv.org/abs/2309.09926v1
# Exponential approximation space reconstruction WENO scheme for dispersive PDEs ###### Abstract. In this work, we construct a fifth-order weighted essentially non-oscillatory (WENO) scheme with an exponential approximation space for solving dispersive equations. A conservative third-order derivative formulation is developed directly using the WENO spatial reconstruction procedure, and a third-order TVD Runge-Kutta scheme is used for the evaluation of the time derivative. This exponential approximation space contains a tension parameter that may be optimized to fit the specific features of the characteristic data, yielding better results without spurious oscillations compared to the polynomial approximation space. A detailed formulation is presented for the construction of the conservative flux approximation, smoothness indicators, and nonlinear weights, and it is verified that the proposed scheme provides the required fifth order of convergence. One- and two-dimensional numerical examples are presented to support the theoretical claims. \({}^{\S}\)Department of Humanities and Sciences, Indian Institute of Petroleum and Energy-Visakhapatnam, India-530003, ([email protected]) \({}^{\dagger}\)Department of Humanities and Sciences, Indian Institute of Petroleum and Energy-Visakhapatnam, India-530003 ([email protected]). due to their pronounced significance. This is owing to the genesis of solitons from the intricate equilibrium between weak nonlinearity and dispersion. Later, a class of solitary waves with compact support, termed _compactons_, was discovered by Rosenau and Hyman [3]. The general form of the Rosenau-Hyman equation, also known as the \(K(n,n)\) equation, is of the form \[u_{t}+(u^{n})_{x}+(u^{n})_{xxx}=0,\quad n>1, \tag{1.3}\] where \(u(x,t)\) is the wave amplitude as a function of the spatial variable \(x\) and time \(t\). Compactons represent a distinct subclass of solitons, distinguished by their finite wavelength, absence of exponential tails, lack of infinite wings, and resilient adherence to soliton-like behavioral patterns. Compactons and soliton solutions of the KdV equation exhibit shared attributes. For instance, the velocity of an individual compacton is directly proportional to its amplitude. During the movement and interaction of multiple compactons with varying velocities, nonlinear effects come into play, resulting in an altered phase upon exit. In contrast to conventional KdV solitons, the width of a compacton remains constant regardless of its amplitude. The \(K(n,n)\) equation diverges from the typical energy conservation laws held by the KdV equation and can only be derived from a first-order Lagrangian for \(n=1\). While solitons are analytical solutions, compactons are non-analytical solutions distinguished by non-analytical points at their edges. These points of non-analyticity align with genuine nonlinearity points within the differential equations. The resolution of the Rosenau-Hyman equation is challenging due to the concurrent interplay of dispersion effects and nonlinearity. Although the extensively employed pseudo-spectral method [4] in spatial domains effectively preserves solution positivity and incorporates high-pass filters to induce artificial dissipation, the post-compacton collision can lead to sign alterations within the solution. A variety of alternate methodologies have been explored in this context. 
These include the finite difference method with Pade approximation [5], the local discontinuous Galerkin method [6], the second-order finite difference approach [7], the adaptive mesh refinement-based line method [8], and and the direct WENO scheme employing polynomial bases for dispersion-type equations [9]. The governing equation dictating dispersive waves, as denoted by (1.1), bears significant resemblances to hyperbolic conservation laws. Conspicuously, both equation classes are susceptible to sharp fronts and wavefronts propagating at finite velocities. Consequently, an avenue of inquiry involves extending numerical techniques originally devised for resolving hyperbolic conservation laws, such as the WENO technique, to contend with the intricacies inherent to the dispersive wave equation. However, this extension mandates a meticulous adaptation of the WENO procedure to ensure the preservation of conservation, accuracy, and nonoscillatory behavior. In 1994, Liu et al.[10] introduced the original version of the WENO scheme within the context of a finite volume framework, employed for the resolution of one-dimensional conservation law equations. Subsequently, in 1996, Jiang and Shu [11] introduced an enhanced rendition of the WENO scheme within the finite differences framework, exhibiting greater efficiency compared to the method presented by Liu et al.[10] for addressing both one-dimensional and two-dimensional conservation law equations. Despite the evident enhancements introduced by Jiang and Shu [11], their methodology exhibited certain limitations, primarily manifesting in instances where gradients of higher order became negligible, thereby resulting in a diminution of accuracy order. Numerous researchers have endeavored to mitigate the numerical dissipation near discontinuities and to optimize the computational efficiency of the conventional WENO-JS scheme through various adaptations. The authors of [16], Henrick et. al., proposed a fifth-order WENO method named WENO-M. Notably, WENO-M employs a mapping technique to minimize the deviation of nonlinear weights utilized in the convex combination of stencils from the optimal weights, barring instances involving pronounced discontinuities. A distinct variation of the fifth-order WENO scheme, known as WENO-Z, was proposed by Borges et al. in their work [17]. The weighting formulation employed in this context diverges slightly from that of the WENO-JS formulation. In [20], Ha et al. presented a new smoothness indicator, referred to as WENO-NS, which utilizes the \(L^{1}\)-sum of generalized undivided differences to approximate the derivatives of flux functions. This indicator allows for achieving fifth-order convergence even in smooth regions, including critical points where the first derivatives are zero. Further, Nathan and Naga Raju enhanced the accuracy of WENO scheme at critical points for fifth and seventh order schemes [21, 22, 23]. Various researchers have proposed different high-order WENO schemes with the aim of enhancing the efficiency of these numerical schemes for solving hyperbolic conservation laws [18, 19, 24]. In 2016, Ha et al.[24] presented a novel WENO scheme utilizing exponential polynomials for solving hyperbolic conservation laws. The comparative analysis of numerical outcomes between methodologies based upon exponential, trigonometric and algebraic polynomial constructions for hyperbolic conservation laws [24, 25, 26, 27] and non-linear degenerate parabolic equations [28, 29, 30] has been studied in literature. 
In instances involving interpolation of data manifesting rapid gradients or high oscillations, the utilization of exponential or trigonometric polynomial bases confers a superior degree of efficiency than algebraic polynomial bases. These alternative polynomial bases prove to be more suitable for accurately capturing and representing such complex features in the data. The primary objective of this study is to propose a novel fifth-order Weighted Essentially Non-Oscillatory scheme named WENO-E, which employs exponential polynomials for solving the dispersion equation. The key design strategy of the WENO-E scheme is to attain the highest possible approximation order in smooth regions while ensuring accuracy is maintained even at critical points. By utilizing exponential polynomials, the scheme aims to enhance the accuracy and robustness of the numerical solution, enabling more effective and reliable computations for the dispersion equation. A global smoothness indicator based on generalized undivided differences is introduced to aid in the design of the nonlinear weights that play a crucial role in WENO reconstructions. Numerical experiments are conducted and compared with the polynomial-based scheme (WENO-Z) to demonstrate the WENO-E scheme's ability to accurately approximate solutions near singularities. This work is the first of its kind for solving prototype dispersive equations (1.1) with non polynomial approximation space, as per the authors' best knowledge. The paper's organization is delineated as follows: Section 2 elaborates on a broad framework for the finite differences WENO scheme and introduces the concept of approximating numerical fluxes with exponential polynomials. This approach is utilized to effectively address the third-order derivative term inherent in the prototype dispersion equation. In Section 3, we introduce an innovative technique that leverages exponential polynomials to construct numerical fluxes, both for the large stencil and its sub-stencils. We also discuss the associated ideal weights. In Section 4, the paper outlines the use of \(L^{1}\)-norm smoothness indicators to construct non-linear weights and provides an analysis of their accuracy in both smooth regions and at critical points. To support our theoretical claim, we present few one and two dimensional numerical results in Section 5. In Section 6, we provide brief concluding remarks. ## 2. Finite difference WENO scheme In this section, we describe a general framework of finite difference WENO schemes based on exponential polynomials to solve prototype dispersion equations. Without loss of generality, we shall focus on the one-dimensional prototype dispersion equations of the form \[u_{t}+f(u)_{x}+g(u)_{xxx} =0,\quad(x,t)\in\Omega\times(0,T],\] \[u(x,0) =u_{0}(x), \tag{2.1}\] along with periodic boundary conditions. To extend the algorithm to higher dimensions, the 1-D algorithm is applied along each coordinate direction. Assume the uniform spatial mesh as follows: \[\left\{\begin{array}{cc}x\in[x_{l},x_{r}],\quad x_{i}=x_{l}+(i-1)\Delta x, \quad i=1:N,\quad,x_{1}=x_{l},\quad x_{N}=x_{r},\\ I_{i}=[x_{i-\frac{1}{2}},x_{i+\frac{1}{2}}],\quad x_{i+\frac{1}{2}}=\frac{(x_{ i+1}+x_{i})}{2},\end{array}\right. \tag{2.2}\] where \(\Delta x\) is the spatial step size and \(u_{i}^{n}\) is defined as a nodal point value \(u(x_{i},t^{n})\). 
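For orientation, the sketch below shows the method-of-lines skeleton assumed in what follows: nodal values on the uniform mesh (2.2) advanced in time with the standard third-order TVD Runge-Kutta scheme mentioned in the abstract, with the spatial operator \(L\) standing in for the WENO flux differences constructed in the next subsections.

```python
import numpy as np

def tvd_rk3_step(u, dt, L):
    """One step of the third-order TVD Runge-Kutta scheme of Shu and Osher.

    u  : nodal values u_i^n on the uniform mesh (2.2)
    L  : callable returning the semi-discrete right-hand side, i.e. minus the
         WENO flux differences approximating f(u)_x + g(u)_xxx
    """
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

# Illustrative uniform mesh of (2.2) with periodic boundary conditions
xl, xr, N = 0.0, 2.0 * np.pi, 201
dx = (xr - xl) / (N - 1)
x = xl + dx * np.arange(N)
```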
### Formulation of WENO scheme We use a conservative finite difference scheme in a method of lines (MOL) approach to writing \[\frac{du_{i}}{dt}=-\frac{1}{\Delta x}(\hat{F}_{i+\frac{1}{2}}-\hat{F}_{i-\frac {1}{2}})-\frac{1}{\Delta x^{3}}(\hat{G}_{i+\frac{1}{2}}-\hat{G}_{i-\frac{1}{2} }),\] where \(\hat{F}\) and \(\hat{G}\) are the numerical flux for convection and dispersion, respectively. The detailed construction procedure for convection flux is given in [11]. For the dispersion term, the \(k^{\text{th}}\)-order accurate conservative finite difference scheme is as follows: \[\frac{1}{\Delta x^{3}}(\hat{G}_{i+\frac{1}{2}}-\hat{G}_{i-\frac{1}{2}})\approx g (u)_{xxx}\big{|}_{x=x_{i}}+\mathcal{O}(\Delta x^{k}), \tag{2.3}\] where \(\hat{G}_{i+\frac{1}{2}}\) is a numerical dispersion flux at the cell boundary \(x_{i+\frac{1}{2}}\), \(\hat{G}_{i+\frac{1}{2}}=\hat{G}(u_{i-r},\ldots,u_{i+s})\) of \((r+s)\) variables. It has to satisfy Lipschitz continuity in each of its arguments and is consistent with the physical flux \(\hat{G}(u,\ldots,u)=g(u)\). Ahmat et al. in [9] implicitly considered a function \(h(x)\) as follows to guarantee the conservative property \[g(u)=\frac{1}{\Delta x^{3}}\int_{x-\frac{\Delta x}{2}}^{x+\frac{\Delta x}{2}} \int_{\eta-\frac{\Delta x}{2}}^{\eta+\frac{\Delta x}{2}}\int_{\zeta-\frac{ \Delta x}{2}}^{\zeta+\frac{\Delta x}{2}}h(\theta)\,d\theta d\zeta d\eta, \tag{2.4}\] then by triple derivation, we have \[g(u)_{xxx}=\frac{h(x+\frac{3}{2}\Delta x)-3h(x+\frac{1}{2}\Delta x)+3h(x- \frac{1}{2}\Delta x)-h(x-\frac{3}{2}\Delta x)}{\Delta x^{3}}. \tag{2.5}\] If we define a function \(G(x)\), such that \[G(x)=h(x+\Delta x)-2h(x)+h(x-\Delta x), \tag{2.6}\] then we have \[g(u)_{xxx}\big{|}_{x=x_{i}}=\frac{G(x_{i+\frac{1}{2}})-G(x_{i-\frac{1}{2}})}{ \Delta x^{3}}. \tag{2.7}\] If we have numerical flux \(\hat{G}_{i+\frac{1}{2}}\) which is an approximation to \(G(x_{i+\frac{1}{2}})\) up to \(k^{\text{th}}\)-order, then we have equation (2.3). We now derive the specific form of the numerical flux \(\hat{G}(u)\) for WENO5. We first split the flux into positive and negative parts, that is \(g(u)=g^{+}(u)+g^{-}(u)\) with \(\frac{\partial g^{+}(u)}{\partial u}\geq 0\) and \(\frac{\partial g^{-}(u)}{\partial u}\leq 0\), and \(G^{+}(x)\) and \(G^{-}(x)\) are defined by (2.6) according to \(g^{+}(u)\) and \(g^{-}(u)\), respectively. The reconstructed numerical fluxes \(\hat{G}^{+}_{i+\frac{1}{2}}\) and \(\hat{G}^{-}_{i+\frac{1}{2}}\) to approximate \(G^{+}(x_{i+\frac{1}{2}})\) and \(G^{-}(x_{i+\frac{1}{2}})\) up to \(k^{\text{th}}\)-order, respectively. Finally, we define numerical fluxes \(\hat{G}_{i+\frac{1}{2}}\) as \[\hat{G}_{i+\frac{1}{2}}=\hat{G}^{+}_{i+\frac{1}{2}}+\hat{G}^{-}_{i+\frac{1}{2}}, \tag{2.8}\] and \(\hat{G}_{i+\frac{1}{2}}\) is an approximation to \(G(x_{i+\frac{1}{2}})\) up to \(k^{\text{th}}\)-order. The reconstruction procedure for \(\hat{G}^{+}_{i+\frac{1}{2}}\) is given below and procedure for \(\hat{G}^{-}_{i+\frac{1}{2}}\) is mirror symmetric respect to \(x_{i+\frac{1}{2}}\). ### Approximation using exponential polynomials The WENO scheme for solving the dispersion type equation involves approximating the value of \(G(x_{i+\frac{1}{2}})\) using equation (2.7). The reconstruction process must balance achieving optimal accuracy in smooth regions while maintaining essential non-oscillatory behaviour in non smooth regions. Polynomials are widely used to construct numerical fluxes. 
However, the limitation of using polynomials is that the approximation space is shift-and-scale invariant. Thus, it cannot be tailored to suit the specific characteristics of the given data. This limitation can result in significant numerical dissipation when interpolating data with rapid gradients, which hinders the ability to produce sharp edges. To overcome this limitation, researchers [24, 25, 26, 27, 28] have explored the use of other types of basis functions, such as exponential polynomials, which have been shown to yield better results in terms of producing sharp edges and reducing numerical dissipation. The general form of exponential polynomials can be written as: \[\Phi(x)=x^{n}e^{\lambda x}, \tag{2.9}\] where \(n\) is a non-negative integer and \(\lambda\in\mathbb{R}\) or \(\lambda\in\iota\mathbb{R}\) (\(\iota^{2}=-1\)). Let \(\{\varPhi_{1},\ldots,\varPhi_{r}\}\) be a set of exponential polynomials of the form in (2.9) and let \(\varGamma_{r}\) be the space defined by \[\varGamma_{r}:=\mathrm{span}\{\varPhi_{1},\ldots,\varPhi_{r}\}. \tag{2.10}\] A necessary condition for the space \(\varGamma_{r}\) is that the set \(\{\varPhi_{1},\ldots,\varPhi_{r}\}\) is linearly independent so that the determinants of the Wronskian matrix related to them are non-zero, i.e., \[\det(\varPhi_{n}(s_{i}):i,n=1,\ldots,r)\neq 0, \tag{2.11}\] for any \(r\)-point stencil \(\{s_{i}:i=1,\ldots,r\}\). The space \(\varGamma_{r}\) needs to satisfy the following basic requirements for the practical computation of the proposed interpolation: * Shift-invariant. The space \(\varGamma_{r}\) should be shift-invariant in the sense that for any \(\alpha\in\mathbb{R}\), \(f\in\varGamma_{r}\) implies \(f(\cdot-\alpha)\in\varGamma_{r}\). This ensures that the interpolation kernel is invariant under the shifting of the evaluation location and stencil. It allows for a set of interpolation kernels to be precomputed for a fixed point and then applied to every evaluation position based on the chosen stencil at a given cell boundary. The space \(\varGamma_{r}\) defined in (2.11) meets this requirement since it is shift-invariant. * Symmetry. The space \(\varGamma_{r}\) should be symmetry means that if a function \(f\) is in the space \(\varGamma_{r}\), then the reflected function \(f(-\cdot)\) should also belong to the same space. Including the polynomial \(\varPhi(x)=1\) in the space \(\varGamma_{r}\) ensures that the sum of the interpolation weights over all basis functions is equal to \(1\), which is necessary for the interpolation kernel to satisfy the partition of unity property. In this paper, we choose \[\varGamma_{7}:=\mathrm{span}\{1,x,x^{2},e^{\lambda x},e^{-\lambda x},\cos \lambda x,\sin\lambda x\} \tag{2.12}\] as the basis functions for global stencil \(S_{7}:=\{x_{i-2},\ldots,x_{i+4}\}\) and similarly, \[\varGamma_{5}=\mathrm{span}\{1,x,x^{2},e^{\lambda x},e^{-\lambda x}\} \tag{2.13}\] for the five-point substencils \(S_{m}:=\{x_{i-2+m},\ldots,x_{i+2+m}\}\), \(m=0,1,2\). Here, \(\varGamma_{7}\) and \(\varGamma_{5}\) constitute an _extended Tchebysheff systems_ on \(\mathbb{R}\) so that the non-singularity of the interpolation matrices in (2.11) is guaranteed as in [31, 24]. ## 3. Fifth-order WENO-E scheme Conservative numerical schemes are developed by approximating the function \(G(x)\) in (2.7). This approximation, represented as \(\hat{G}(x)\), is constructed using a exponential polynomial form with unspecified coefficients. When this polynomial is substituted into (2.4), it results in a system of equations. 
In this system, the flux is known at the nodes surrounding the relevant interface, enabling the determination of a distinct set of coefficients. After obtaining \(\hat{G}(x)\), the approximation of the spatial derivative in (2.3) is as follows \[g(u)_{xxx}\big{|}_{x=x_{i}}\approx\frac{1}{\Delta x^{3}}(\hat{G}_{i+\frac{1}{2}} -\hat{G}_{i-\frac{1}{2}}). \tag{3.1}\] We will focus on two orders of convergence. The primary concern is the order at which (3.1) is satisfied, as it dictates the spatial convergence rate of the overall scheme. Additionally, we consider the order of individual approximations to the numerical flux, \(\hat{G}_{i\pm\frac{1}{2}}\), which is significant in establishing criteria for the acceptance of non-oscillatory weights. ### Constructions of numerical flux \(\hat{G}_{i+\frac{1}{2}}^{+}\) The approximations of \(\hat{G}_{i+\frac{1}{2}}^{+}\) are denoted by \(p(x)\). We first consider an exponential polynomial approximation to \(h(x)\) on the 7-point stencil \(S_{7}\) \[h(x)\approx q(x)=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}e^{\lambda x}+a_{4}e^{-\lambda x }+a_{5}\cos\lambda x+a_{6}\sin\lambda x, \tag{3.2}\] with undetermined coefficients \(a_{k}\) with \(k=0,\ldots,6\). Substituting (3.2) into (2.4) \[g^{+}(u(x))=\frac{1}{\Delta x^{3}}\int_{x-\frac{\Delta x}{2}}^{x+\frac{ \Delta x}{2}}\int_{\eta-\frac{\Delta x}{2}}^{\eta+\frac{\Delta x}{2}}\int_{ \zeta-\frac{\Delta x}{2}}^{\zeta+\frac{\Delta x}{2}}q(\theta)\,d\theta d \zeta d\eta, \tag{3.3}\] and performing the integration gives \[\begin{split} g^{+}(u(x))&=a_{0}+a_{1}x+a_{2}\bigg{(} \frac{\Delta x^{2}}{4}+x^{2}\bigg{)}+a_{3}\bigg{(}-\frac{4e^{\lambda x}\sinh \big{[}\frac{\lambda\Delta x}{2}\big{]}}{\lambda^{3}\Delta x^{3}}+\frac{4e^{ \lambda x}\cosh[\lambda\Delta x]\sinh\big{[}\frac{\lambda\Delta x}{2}\big{]} }{\lambda^{3}\Delta x^{3}}\bigg{)}\\ &+a_{4}\bigg{(}\frac{e^{\frac{3}{2}(\Delta x-2x)}(-1+e^{\lambda \Delta x})}{\lambda^{3}\Delta x^{3}}+\frac{e^{-\frac{3\lambda x}{2}-\lambda x }(-1+e^{\lambda\Delta x})}{\lambda^{3}\Delta x^{3}}-\frac{4e^{-\lambda x} \sinh\big{[}\frac{\lambda\Delta x}{2}\big{]}}{\lambda^{3}\Delta x^{3}}\bigg{)} \\ &+a_{5}\bigg{(}\frac{8\cos[\lambda x]\sin\big{[}\frac{\lambda \Delta x}{2}\big{]}^{3}}{\lambda^{3}\Delta x^{3}}\bigg{)}+a_{6}\bigg{(}\frac{8 \sin[\lambda x]\sin\big{[}\frac{\lambda\Delta x}{2}\big{]}^{3}}{\lambda^{3} \Delta x^{3}}\bigg{)}.\end{split} \tag{3.4}\] To determine the coefficients \(a_{0},\ldots,a_{6}\), one can consider (3.4) as \(g^{+}(u(x_{i}-2\Delta x))=g^{+}(u_{i-2}),\ldots,g^{+}(u(x_{i}+4\Delta x))=g^{ +}(u_{i+4})\) with \(x_{i}=0\) and solve the resulting \(7\times 7\) system \(AX=B\) are specified in Appendix:B. Substituting the coefficients of vector \(X\) into (3.2) and then calculating \(p(x)=q(x+\Delta x)-2q(x)+q(x-\Delta x)\) at \(x=x_{i+\frac{1}{2}}\), we get \[\hat{G}_{i+\frac{1}{2}}^{+}=C_{0}g^{+}(u_{i-2})+C_{1}g^{+}(u_{i-1})+C_{2}g^{+} (u_{i})+C_{3}g^{+}(u_{i+1})+C_{4}g^{+}(u_{i+2})+C_{5}g^{+}(u_{i+3})+C_{6}g^{+} (u_{i+4}). 
\tag{3.5}\] After applying the Taylor series to the coefficients \(C_{j},0\leq j\leq 6\) of equation (3.5) is given by \[\begin{split} C_{0}&=-\frac{1}{15}-\frac{229\lambda ^{4}\Delta x^{4}}{226800}+\mathcal{O}(\Delta x^{8}),\quad C_{1}=\frac{21}{40}+ \frac{1493\lambda^{4}\Delta x^{4}}{75600}+\mathcal{O}(\Delta x^{8}),\\ C_{2}&=\frac{1}{8}-\frac{587\lambda^{4}\Delta x^{4}} {15120}+\mathcal{O}(\Delta x^{8}),\qquad C_{3}=-\frac{23}{12}-\frac{65\lambda ^{4}\Delta x^{4}}{9072}+\mathcal{O}(\Delta x^{8}),\\ C_{4}&=\frac{7}{4}+\frac{11\lambda^{4}\Delta x^{4}} {378}+\mathcal{O}(\Delta x^{8}),\qquad C_{5}=-\frac{19}{40}-\frac{323\lambda ^{4}\Delta x^{4}}{18900}+\mathcal{O}(\Delta x^{8}),\\ C_{6}&=-\frac{7}{120}+\frac{103\lambda^{4}\Delta x^{4}} {113400}+\mathcal{O}(\Delta x^{8}).\end{split} \tag{3.6}\] Thus, the equation (3.5) becomes \[\begin{split}\hat{G}^{+}_{i+\frac{1}{2}}=&\bigg{(}- \frac{1}{15}-\frac{229\lambda^{4}\varDelta x^{4}}{226800}\bigg{)}g^{+}(u_{i-2} )+\bigg{(}\frac{21}{40}+\frac{1493\lambda^{4}\varDelta x^{4}}{75600}\bigg{)}g^ {+}(u_{i-1})+\bigg{(}\frac{1}{8}-\frac{587\lambda^{4}\varDelta x^{4}}{15120} \bigg{)}g^{+}(u_{i})\\ &+\bigg{(}-\frac{23}{12}-\frac{65\lambda^{4}\varDelta x^{4}}{9072} \bigg{)}g^{+}(u_{i+1})+\bigg{(}\frac{7}{4}+\frac{11\lambda^{4}\varDelta x^{4}}{ 378}\bigg{)}g^{+}(u_{i+2})\\ &+\bigg{(}-\frac{19}{40}-\frac{323\lambda^{4}\varDelta x^{4}}{189 00}\bigg{)}g^{+}(u_{i+3})+\bigg{(}-\frac{7}{120}+\frac{103\lambda^{4}\varDelta x ^{4}}{113400}\bigg{)}g^{+}(u_{i+4}).\end{split}\] From the definition of (2.4) we also know that \[\begin{split}\hat{G}^{+}_{i+\frac{1}{2}}&=\frac{1} {\varDelta x^{3}}\Bigg{[}\bigg{(}-\frac{1}{15}-\frac{229\lambda^{4}\varDelta x ^{4}}{226800}\bigg{)}\bigg{(}\int_{x_{i-\frac{5}{2}}}^{x_{i-\frac{5}{2}}}\int _{\eta-\frac{\varDelta x}{2}}^{\eta+\frac{\varDelta x}{2}}\int_{\zeta-\frac {\varDelta x}{2}}^{\zeta+\frac{\varDelta x}{2}}h(\theta)\,d\theta d\zeta d \eta\bigg{)}\\ &+\bigg{(}\frac{21}{40}+\frac{1493\lambda^{4}\varDelta x^{4}}{756 00}\bigg{)}\bigg{(}\int_{x_{i-\frac{3}{2}}}^{x_{i-\frac{3}{2}}}\int_{\eta- \frac{\varDelta x}{2}}^{\eta+\frac{\varDelta x}{2}}\int_{\zeta-\frac{ \varDelta x}{2}}^{\zeta+\frac{\varDelta x}{2}}h(\theta)\,d\theta d\zeta d\eta \bigg{)}\\ &+\bigg{(}\frac{1}{8}-\frac{587\lambda^{4}\varDelta x^{4}}{15120} \bigg{)}\bigg{(}\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}\int_{\eta- \frac{\varDelta x}{2}}^{\eta+\frac{\varDelta x}{2}}\int_{\zeta-\frac{ \varDelta x}{2}}^{\zeta+\frac{\varDelta x}{2}}h(\theta)\,d\theta d\zeta d \eta\bigg{)}\\ &+\bigg{(}-\frac{23}{12}-\frac{65\lambda^{4}\varDelta x^{4}}{9072 }\bigg{)}\bigg{(}\int_{x_{i+\frac{3}{2}}}^{x_{i+\frac{3}{2}}}\int_{\eta- \frac{\varDelta x}{2}}^{\eta+\frac{\varDelta x}{2}}\int_{\zeta-\frac{ \varDelta x}{2}}^{\zeta+\frac{\varDelta x}{2}}h(\theta)\,d\theta d\zeta d\eta \bigg{)}\\ &+\bigg{(}\frac{7}{4}+\frac{11\lambda^{4}\varDelta x^{4}}{378} \bigg{)}\bigg{(}\int_{x_{i+\frac{3}{2}}}^{x_{i+\frac{5}{2}}}\int_{\eta- \frac{\varDelta x}{2}}^{\eta+\frac{\varDelta x}{2}}\int_{\zeta-\frac{ \varDelta x}{2}}^{\zeta+\frac{\varDelta x}{2}}h(\theta)\,d\theta d\zeta d\eta \bigg{)}\\ &+\bigg{(}-\frac{19}{40}-\frac{323\lambda^{4}\varDelta x^{4}}{189 00}\bigg{)}\bigg{(}\int_{x_{i+\frac{5}{2}}}^{x_{i+\frac{5}{2}}}\int_{\eta- \frac{\varDelta x}{2}}^{\eta+\frac{\varDelta x}{2}}\int_{\zeta-\frac{ \varDelta x}{2}}^{\zeta+\frac{\varDelta x}{2}}h(\theta)\,d\theta d\zeta d\eta \bigg{)}\\ &+\bigg{(}-\frac{7}{120}+\frac{103\lambda^{4}\varDelta x^{4}}{1134 
00}\bigg{)}\bigg{(}\int_{x_{i+\frac{5}{2}}}^{x_{i+\frac{9}{2}}}\int_{\eta- \frac{\varDelta x}{2}}^{\eta+\frac{\varDelta x}{2}}\int_{\zeta-\frac{ \varDelta x}{2}}^{\zeta+\frac{\varDelta x}{2}}h(\theta)\,d\theta d\zeta d\eta \bigg{)}\Bigg{]}.\end{split}\] To analyze the accuracy of the approximation to the third derivative for smooth solutions, we assume \(h(x)\) has sufficient local regularity. Substituting the Taylor series expansion at the grid point \(x_{i}\), \[h(\theta)=h(x_{i})+\sum_{j=1}^{7}\frac{(\theta-x_{i})^{j}}{j!}\frac{d^{j}h}{dx^ {j}}\bigg{|}_{x=x_{i}}+\mathcal{O}(\varDelta x^{8}),\] and performing the integration, we have \[\begin{split}\hat{G}^{+}_{i+\frac{1}{2}}&=\frac{d^{2}h }{dx^{2}}\bigg{|}_{x=x_{i}}\varDelta x^{2}+\frac{1}{2}\frac{d^{3}h}{dx^{3}} \bigg{|}_{x=x_{i}}\varDelta x^{3}+\frac{5}{24}\frac{d^{4}h}{dx^{4}}\bigg{|}_{x= x_{i}}\varDelta x^{4}+\frac{1}{16}\frac{d^{5}h}{dx^{5}}\bigg{|}_{x=x_{i}}\varDelta x ^{5}+\frac{91}{5760}\frac{d^{6}h}{dx^{6}}\bigg{|}_{x=x_{i}}\varDelta x^{6}\\ &+\bigg{(}-\frac{7}{240}\lambda^{4}\frac{d^{3}h}{dx^{3}}\bigg{|}_{ x=x_{i}}+\frac{25}{768}\frac{d^{7}h}{dx^{7}}\bigg{|}_{x=x_{i}}\bigg{)}\varDelta x ^{7}+\mathcal{O}(\varDelta x^{8}).\end{split} \tag{3.7}\] Comparing (3.7) with Taylor series expansion \[\begin{split} G(x_{i+\frac{1}{2}})=& h(x_{i+\frac{3}{2}})-2h (x_{i+\frac{1}{2}})+h(x_{i-\frac{1}{2}})\\ =&\frac{d^{2}h}{dx^{2}}\bigg{|}_{x=x_{i}}\varDelta x ^{2}+\frac{1}{2}\frac{d^{3}h}{dx^{3}}\bigg{|}_{x=x_{i}}\varDelta x^{3}+\frac{5}{24 }\frac{d^{4}h}{dx^{4}}\bigg{|}_{x=x_{i}}\varDelta x^{4}+\frac{1}{16}\frac{d^{5}h} {dx^{5}}\bigg{|}_{x=x_{i}}\varDelta x^{5}\\ &+\frac{91}{5760}\frac{d^{6}h}{dx^{6}}\bigg{|}_{x=x_{i}}\varDelta x ^{6}+\frac{13}{3840}\frac{d^{7}h}{dx^{7}}\bigg{|}_{x=x_{i}}\varDelta x^{7}+ \mathcal{O}(\varDelta x^{8}),\end{split} \tag{3.8}\] we obtain \[\hat{G}^{+}_{i+\frac{1}{2}} =G(x_{i+\frac{1}{2}})+\biggl{(}-\frac{7}{240}\lambda^{4}\frac{d^{3} h}{dx^{3}}\biggr{|}_{x=x_{i}}+\frac{7}{240}\frac{d^{7}h}{dx^{7}}\biggr{|}_{x=x_{i}} \biggr{)}\Delta x^{7}+\mathcal{O}(\Delta x^{8}),\] \[=G(x_{i+\frac{1}{2}})+\mathcal{A}^{+}\Delta x^{7}+\mathcal{O}( \Delta x^{8}). \tag{3.9}\] Similarly, we calculate the numerical flux \(\hat{G}^{+}_{i-\frac{1}{2}}\) \[\hat{G}^{+}_{i-\frac{1}{2}} =\biggl{(}-\frac{1}{15}-\frac{229\lambda^{4}\Delta x^{4}}{226800} \biggr{)}g^{+}(u_{i-3})+\biggl{(}\frac{21}{40}+\frac{1493\lambda^{4}\Delta x^ {4}}{75600}\biggr{)}g^{+}(u_{i-2})+\biggl{(}\frac{1}{8}-\frac{587\lambda^{4} \Delta x^{4}}{15120}\biggr{)}g^{+}(u_{i-1})\] \[+\biggl{(}-\frac{23}{12}-\frac{65\lambda^{4}\Delta x^{4}}{9072} \biggr{)}g^{+}(u_{i})+\biggl{(}\frac{7}{4}+\frac{11\lambda^{4}\Delta x^{4}}{3 78}\biggr{)}g^{+}(u_{i+1})+\biggl{(}-\frac{19}{40}-\frac{323\lambda^{4}\Delta x ^{4}}{18900}\biggr{)}g^{+}(u_{i+2})\] \[+\biggl{(}-\frac{7}{120}+\frac{103\lambda^{4}\Delta x^{4}}{113400 }\biggr{)}g^{+}(u_{i+3})\] \[\hat{G}^{+}_{i-\frac{1}{2}} =G(x_{i-\frac{1}{2}})+\biggl{(}-\frac{7}{240}\lambda^{4}\frac{d^{ 3}h}{dx^{3}}\biggr{|}_{x=x_{i}}+\frac{7}{240}\frac{d^{7}h}{dx^{7}}\biggr{|}_{x =x_{i}}\biggr{)}\Delta x^{7}+\mathcal{O}(\Delta x^{8})=G(x_{i-\frac{1}{2}})+ \mathcal{A}^{-}\Delta x^{7}+\mathcal{O}(\Delta x^{8}), \tag{3.10}\] where the values \(\mathcal{A}^{\pm}\) are independent of \(\Delta x\), but dependent on \(\lambda\). Therefore, substituting (3.9) and (3.10) into (2.7), we have the fifth order approximation \[g(u)_{xxx}\bigr{|}_{x=x_{i}}=\frac{\hat{G}^{+}_{i+\frac{1}{2}}-\hat{G}^{+}_{i- \frac{1}{2}}}{\Delta x^{3}}+\mathcal{O}(\Delta x^{5}). 
\tag{3.11}\]

### Construction of numerical fluxes \(\hat{G}^{(m)}_{i+\frac{1}{2}}\) for \(m=0,1,2\)

When the stencil \(S_{7}\) contains a discontinuity, the numerical flux \(\hat{G}^{+}_{i+\frac{1}{2}}\) built on the big stencil \(S_{7}\) can produce oscillations, because all seven nodes are affected. To mitigate this issue, the WENO procedure considers the smaller stencils \(S_{m}=\{x_{i-2+m},\ldots,x_{i+2+m}\}\), \(m=0,1,2\), of the big stencil \(S_{7}\). Following a similar argument, on each small stencil \(S_{m}\), \(m=0,1,2\), we approximate \(h(x)\) by \[h(x)\approx q_{m}(x)=a_{0}^{m}+a_{1}^{m}x+a_{2}^{m}x^{2}+a_{3}^{m}e^{\lambda x}+a_{4}^{m}e^{-\lambda x}, \tag{3.12}\] with undetermined coefficients \(a_{k}^{m}\), \(k=0,\ldots,4\), \(m=0,1,2\). Substituting (3.12) into (2.4), \[\frac{1}{\Delta x^{3}}\int_{x_{j}-\frac{\Delta x}{2}}^{x_{j}+\frac{\Delta x}{2}}\int_{\eta-\frac{\Delta x}{2}}^{\eta+\frac{\Delta x}{2}}\int_{\zeta-\frac{\Delta x}{2}}^{\zeta+\frac{\Delta x}{2}}q_{m}(\theta)\,d\theta d\zeta d\eta=g^{+}(u_{j}),\quad j=i-2+m,\ldots,i+2+m,\quad m=0,1,2, \tag{3.13}\] and performing the integration gives \[g^{+}(u(x))=a_{0}^{m}+a_{1}^{m}x+a_{2}^{m}\biggl{(}\frac{\Delta x^{2}}{4}+x^{2}\biggr{)}+a_{3}^{m}\biggl{(}\frac{e^{\frac{-3\lambda\Delta x}{2}+x\lambda}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\biggr{)}+a_{4}^{m}\biggl{(}\frac{e^{\frac{-3\lambda\Delta x}{2}-\lambda x}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\biggr{)}. \tag{3.14}\] To determine the coefficients \(a_{0}^{m},\ldots,a_{4}^{m}\), we solve the resulting \(5\times 5\) systems \(A_{m}X_{m}=B_{m}\), given by:

1. For \(m=0\), \[A_{0}=\begin{bmatrix}1&-2\Delta x&\frac{17}{4}\Delta x^{2}&\frac{e^{-\frac{7\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{\frac{\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\\ 1&-\Delta x&\frac{5}{4}\Delta x^{2}&\frac{e^{-\frac{5\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\\ 1&0&\frac{1}{4}\Delta x^{2}&\frac{e^{-\frac{3\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{3\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\\ 1&\Delta x&\frac{5}{4}\Delta x^{2}&\frac{e^{-\frac{\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{5\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\\ 1&2\Delta x&\frac{17}{4}\Delta x^{2}&\frac{e^{\frac{\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{7\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\end{bmatrix},\] \[B_{0}=\left[g^{+}(u_{i-2})\quad g^{+}(u_{i-1})\quad g^{+}(u_{i})\quad g^{+}(u_{i+1})\quad g^{+}(u_{i+2})\right]^{T},\] \[X_{0}=\left[a_{0}^{0}\quad a_{1}^{0}\quad a_{2}^{0}\quad a_{3}^{0}\quad a_{4}^{0}\right]^{T}.\]
2. For \(m=1\), \[A_{1}=\begin{bmatrix}1&-\Delta x&\frac{5}{4}\Delta x^{2}&\frac{e^{-\frac{5\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\\ 1&0&\frac{1}{4}\Delta x^{2}&\frac{e^{-\frac{3\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{3\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\\ 1&\Delta x&\frac{5}{4}\Delta x^{2}&\frac{e^{-\frac{\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{5\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\\ 1&2\Delta x&\frac{17}{4}\Delta x^{2}&\frac{e^{\frac{\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{7\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\\ 1&3\Delta x&\frac{37}{4}\Delta x^{2}&\frac{e^{\frac{3\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{9\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\end{bmatrix},\] \[B_{1}=\left[g^{+}(u_{i-1})\quad g^{+}(u_{i})\quad g^{+}(u_{i+1})\quad g^{+}(u_{i+2})\quad g^{+}(u_{i+3})\right]^{T},\] \[X_{1}=\left[a_{0}^{1}\quad a_{1}^{1}\quad a_{2}^{1}\quad a_{3}^{1}\quad a_{4}^{1}\right]^{T}.\]

3. For \(m=2\), \[A_{2}=\begin{bmatrix}1&0&\frac{1}{4}\Delta x^{2}&\frac{e^{-\frac{3\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{3\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\\ 1&\Delta x&\frac{5}{4}\Delta x^{2}&\frac{e^{-\frac{\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{5\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\\ 1&2\Delta x&\frac{17}{4}\Delta x^{2}&\frac{e^{\frac{\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{7\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\\ 1&3\Delta x&\frac{37}{4}\Delta x^{2}&\frac{e^{\frac{3\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{9\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\\ 1&4\Delta x&\frac{65}{4}\Delta x^{2}&\frac{e^{\frac{5\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}&\frac{e^{-\frac{11\lambda\Delta x}{2}}(-1+e^{\lambda\Delta x})^{3}}{\lambda^{3}\Delta x^{3}}\end{bmatrix},\] \[B_{2}=\left[g^{+}(u_{i})\quad g^{+}(u_{i+1})\quad g^{+}(u_{i+2})\quad g^{+}(u_{i+3})\quad g^{+}(u_{i+4})\right]^{T},\] \[X_{2}=\left[a_{0}^{2}\quad a_{1}^{2}\quad a_{2}^{2}\quad a_{3}^{2}\quad a_{4}^{2}\right]^{T}.\]

Substituting the obtained coefficients of \(X_{0}\), \(X_{1}\) and \(X_{2}\) into (3.12) and then calculating \(p_{m}(x)=q_{m}(x+\Delta x)-2q_{m}(x)+q_{m}(x-\Delta x)\) at \(x=x_{i+\frac{1}{2}}\), we get \[\hat{G}^{(0)}_{i+\frac{1}{2}} =C_{0}^{0}g^{+}(u_{i-2})+C_{1}^{0}g^{+}(u_{i-1})+C_{2}^{0}g^{+}(u_{i})+C_{3}^{0}g^{+}(u_{i+1})+C_{4}^{0}g^{+}(u_{i+2}),\] \[\hat{G}^{(1)}_{i+\frac{1}{2}} =C_{0}^{1}g^{+}(u_{i-1})+C_{1}^{1}g^{+}(u_{i})+C_{2}^{1}g^{+}(u_{i+1})+C_{3}^{1}g^{+}(u_{i+2})+C_{4}^{1}g^{+}(u_{i+3}),\] \[\hat{G}^{(2)}_{i+\frac{1}{2}} =C_{0}^{2}g^{+}(u_{i})+C_{1}^{2}g^{+}(u_{i+1})+C_{2}^{2}g^{+}(u_{i+2})+C_{3}^{2}g^{+}(u_{i+3})+C_{4}^{2}g^{+}(u_{i+4}).
\tag{3.15}\] After applying the Taylor series to the coefficients \(C_{j}^{m},0\leq j\leq 4,0\leq m\leq 2\) of equation (3.15), we get \[\hat{G}_{i+\frac{1}{2}}^{(0)} =\bigg{(}-\frac{1}{4}+\frac{7\lambda^{2}\Delta x^{2}}{120}-\frac{ 263\lambda^{4}\Delta x^{4}}{30240}\bigg{)}g^{+}(u_{i-2})+\bigg{(}\frac{3}{2}- \frac{13\lambda^{2}\Delta x^{2}}{120}+\frac{97\lambda^{4}\Delta x^{4}}{6048} \bigg{)}g^{+}(u_{i-1})\] \[+\bigg{(}-2-\frac{\lambda^{2}\Delta x^{2}}{40}+\frac{41\lambda^{4 }\Delta x^{4}}{10080}\bigg{)}g^{+}(u_{i})+\bigg{(}\frac{1}{2}+\frac{17\lambda^ {2}\Delta x^{2}}{120}-\frac{649\lambda^{4}\Delta x^{4}}{30240}\bigg{)}g^{+}(u_ {i+1})\] \[+\bigg{(}\frac{1}{4}-\frac{\lambda^{2}\Delta x^{2}}{15}+\frac{19 \lambda^{4}\Delta x^{4}}{1890}\bigg{)}g^{+}(u_{i+2}),\] \[\hat{G}_{i+\frac{1}{2}}^{(1)} =\bigg{(}\frac{1}{4}-\frac{\lambda^{2}\Delta x^{2}}{15}+\frac{19 \lambda^{4}\Delta x^{4}}{1890}\bigg{)}g^{+}(u_{i-1})+\bigg{(}\frac{1}{2}+\frac {17\lambda^{2}\Delta x^{2}}{120}-\frac{649\lambda^{4}\Delta x^{4}}{30240} \bigg{)}g^{+}(u_{i})\] \[+\bigg{(}-2-\frac{\lambda^{2}\Delta x^{2}}{40}+\frac{41\lambda^{4 }\Delta x^{4}}{10080}\bigg{)}g^{+}(u_{i+1})+\bigg{(}\frac{3}{2}-\frac{13 \lambda^{2}\Delta x^{2}}{120}+\frac{97\lambda^{4}\Delta\Delta x^{4}}{6048} \bigg{)}g^{+}(u_{i+2})\] \[+\bigg{(}-\frac{1}{4}+\frac{7\lambda^{2}h^{2}}{120}-\frac{263 \lambda^{4}\Delta x^{4}}{30240}\bigg{)}g^{+}(u_{i+3}),\] \[\hat{G}_{i+\frac{1}{2}}^{(2)} =\bigg{(}\frac{7}{4}+\frac{7\lambda^{2}\Delta x^{2}}{120}-\frac{ 103\lambda^{4}\Delta x^{4}}{6048}\bigg{)}g^{+}(u_{i})+\bigg{(}-\frac{9}{2}- \frac{13\lambda^{2}\Delta x^{2}}{120}+\frac{1241\lambda^{4}\Delta x^{4}}{3024 0}\bigg{)}g^{+}(u_{i+1})\] \[+\bigg{(}4-\frac{\lambda^{2}\Delta x^{2}}{40}-\frac{211\lambda^{4 }\Delta x^{4}}{10080}\bigg{)}g^{+}(u_{i+2})+\bigg{(}-\frac{3}{2}+\frac{17 \lambda^{2}\Delta x^{2}}{120}-\frac{397\lambda^{4}\Delta x^{4}}{30240}\bigg{)} g^{+}(u_{i+3})\] \[+\bigg{(}\frac{1}{4}-\frac{\lambda^{2}\Delta x^{2}}{15}+\frac{19 \lambda^{4}\Delta x^{4}}{1890}\bigg{)}g^{+}(u_{i+4}). \tag{3.16}\] By shifting each index by -1, we obtain the flux \(\hat{G}_{i-\frac{1}{2}}^{(m)}\). Hence the Taylor series expansions of (3.16) gives \[\hat{G}_{i\pm\frac{1}{2}}^{(0)} =G(x_{i\pm\frac{1}{2}})+\bigg{(}-\frac{1}{8}\lambda^{2}\frac{d^{3} h}{dx^{3}}\bigg{|}_{x=x_{i}}+\frac{1}{8}\frac{d^{5}h}{dx^{5}}\bigg{|}_{x=x_{i}} \bigg{)}\Delta x^{5}+\mathcal{O}(\Delta x^{6}),\] \[\hat{G}_{i\pm\frac{1}{2}}^{(1)} =G(x_{i\pm\frac{1}{2}})+\bigg{(}\frac{1}{8}\lambda^{2}\frac{d^{3} h}{dx^{3}}\bigg{|}_{x=x_{i}}-\frac{1}{8}\frac{d^{5}h}{dx^{5}}\bigg{|}_{x=x_{i}} \bigg{)}\Delta x^{5}+\mathcal{O}(\Delta x^{6}),\] \[\hat{G}_{i\pm\frac{1}{2}}^{(2)} =G(x_{i\pm\frac{1}{2}})+\bigg{(}-\frac{1}{8}\lambda^{2}\frac{d^{ 3}h}{dx^{3}}\bigg{|}_{x=x_{i}}+\frac{1}{8}\frac{d^{5}h}{dx^{5}}\bigg{|}_{x=x_{ i}}\bigg{)}\Delta x^{5}+\mathcal{O}(\Delta x^{6}),\] \[\hat{G}_{i\pm\frac{1}{2}}^{(m)} =G(x_{i\pm\frac{1}{2}})+\mathcal{B}_{m}\Delta x^{5}+\mathcal{O}( \Delta x^{6}), \tag{3.17}\] where the values \(\mathcal{B}_{m}\) are independent of \(\Delta x\), but dependent on \(\lambda\). 
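The \(\lambda\)-dependent coefficients \(C_{j}^{m}\) above can also be generated numerically instead of from the closed-form expansions: assemble \(A_{m}\) from (3.14), apply the second-difference operator \(p_{m}\) to each basis function at \(x_{i+\frac{1}{2}}\), and solve one small linear system per substencil. The following Python sketch illustrates this construction; it assumes \(x_{i}=0\) and is meant as an illustration only, not as the implementation used for the reported results.

```python
import numpy as np

def substencil_flux_coeffs(m, lam, dx):
    """Coefficients C_j^m such that Ghat^(m)_{i+1/2} = sum_j C_j^m g+(u_{i-2+m+j}),
    obtained from the 5x5 system A_m X_m = B_m of (3.13)-(3.14), taking x_i = 0."""
    s = np.arange(m - 2, m + 3)                     # cell offsets of the stencil S_m
    E = (np.exp(lam * dx) - 1.0) ** 3 / (lam ** 3 * dx ** 3)
    A = np.column_stack([
        np.ones(5),                                 # basis 1
        s * dx,                                     # basis x
        dx ** 2 / 4 + (s * dx) ** 2,                # sliding average of x^2
        np.exp(lam * (s * dx - 1.5 * dx)) * E,      # sliding average of e^{+lam x}
        np.exp(-lam * (s * dx + 1.5 * dx)) * E,     # sliding average of e^{-lam x}
    ])
    # p_m(x) = q_m(x+dx) - 2 q_m(x) + q_m(x-dx), applied to each basis function
    # and evaluated at x_{i+1/2} = dx/2:
    d2 = np.exp(lam * dx) - 2.0 + np.exp(-lam * dx)
    w = np.array([0.0, 0.0, 2.0 * dx ** 2,
                  np.exp(lam * dx / 2) * d2, np.exp(-lam * dx / 2) * d2])
    # Ghat^(m) = w . A_m^{-1} B_m, so the stencil coefficients are w . A_m^{-1}.
    return np.linalg.solve(A.T, w)

for m in range(3):
    print(m, substencil_flux_coeffs(m, lam=1.0, dx=0.1))
```

For \(\lambda\Delta x\to 0\) the printed coefficients approach the polynomial values \((-\frac{1}{4},\frac{3}{2},-2,\frac{1}{2},\frac{1}{4})\), \((\frac{1}{4},\frac{1}{2},-2,\frac{3}{2},-\frac{1}{4})\) and \((\frac{7}{4},-\frac{9}{2},4,-\frac{3}{2},\frac{1}{4})\), in agreement with the leading terms of (3.16).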
Hence, we have the third-order approximations \[g(u)_{xxx}\big{|}_{x=x_{i}} =\frac{\hat{G}_{i+\frac{1}{2}}^{(0)}-\hat{G}_{i-\frac{1}{2}}^{(0)}}{\Delta x^{3}}+\mathcal{O}(\Delta x^{3}),\] \[g(u)_{xxx}\big{|}_{x=x_{i}} =\frac{\hat{G}_{i+\frac{1}{2}}^{(1)}-\hat{G}_{i-\frac{1}{2}}^{(1)}}{\Delta x^{3}}+\mathcal{O}(\Delta x^{3}),\] \[g(u)_{xxx}\big{|}_{x=x_{i}} =\frac{\hat{G}_{i+\frac{1}{2}}^{(2)}-\hat{G}_{i-\frac{1}{2}}^{(2)}}{\Delta x^{3}}+\mathcal{O}(\Delta x^{3}). \tag{3.18}\]

### Ideal weights based on exponential polynomials

The final WENO-E approximation is defined by a convex combination of the local fluxes with non-linear weights \(\omega_{m}\): \[\hat{G}_{i\pm\frac{1}{2}}=\sum_{m=0}^{2}\omega_{m}^{\pm}\hat{G}_{i\pm\frac{1}{2}}^{(m)}. \tag{3.19}\] To derive the non-linear weights \(\omega_{m}\), we first determine the \(d_{m}\), referred to as ideal (or optimal) weights. These \(d_{m}\) are chosen such that the linear combination of the \(\hat{G}_{i+\frac{1}{2}}^{(m)}\) retains the fifth order of convergence to \(G(x_{i+\frac{1}{2}})\). That is, \[\hat{G}_{i+\frac{1}{2}}^{+}=\sum_{m=0}^{2}d_{m}\hat{G}_{i+\frac{1}{2}}^{(m)}, \tag{3.20}\] satisfying \(\sum_{m=0}^{2}d_{m}=1\). The ideal weights \(d_{m}\) for the proposed WENO-E scheme can be obtained as \[d_{0}=\frac{C_{0}}{C_{0}^{0}},\quad d_{1}=\frac{C_{1}-d_{0}C_{1}^{0}}{C_{0}^{1}},\quad d_{2}=\frac{C_{2}-d_{0}C_{2}^{0}-d_{1}C_{1}^{1}}{C_{0}^{2}}. \tag{3.21}\] In contrast to the classical WENO scheme, the optimal weights \(d_{m}\) in the proposed method may vary with the choice of the parameter \(\lambda\), but they converge towards the original ideal weights as \(\Delta x\to 0\). Adding and subtracting \(\sum_{m=0}^{2}d_{m}\hat{G}_{i\pm\frac{1}{2}}^{(m)}\) in (3.19) gives \[\hat{G}_{i\pm\frac{1}{2}}=\sum_{m=0}^{2}d_{m}\hat{G}_{i\pm\frac{1}{2}}^{(m)}+\sum_{m=0}^{2}(\omega_{m}^{\pm}-d_{m})\hat{G}_{i\pm\frac{1}{2}}^{(m)}=\big{[}G(x_{i\pm\frac{1}{2}})+\mathcal{A}^{\pm}\Delta x^{7}+\mathcal{O}(\Delta x^{8})\big{]}+\sum_{m=0}^{2}(\omega_{m}^{\pm}-d_{m})\hat{G}_{i\pm\frac{1}{2}}^{(m)}. \tag{3.22}\] (The superscripts \(\pm\) correspond to the \(\pm\) in \(\hat{G}_{i\pm\frac{1}{2}}^{(m)}\).)
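As a quick sanity check of (3.21), the ideal weights can be evaluated from the leading (\(\lambda\)-independent) parts of the coefficients in (3.6) and (3.16); doing so in exact rational arithmetic recovers \(d_{0}=\frac{4}{15}\), \(d_{1}=\frac{1}{2}\), \(d_{2}=\frac{7}{30}\) with \(d_{0}+d_{1}+d_{2}=1\). A minimal sketch (illustrative only; for the WENO-E scheme the \(d_{m}\) carry additional \(\lambda\)-dependent corrections):

```python
from fractions import Fraction as F

# Leading (lambda-independent) parts of the coefficients in (3.6) and (3.16).
C  = [F(-1, 15), F(21, 40), F(1, 8)]                  # big stencil: C_0, C_1, C_2
C0 = [F(-1, 4), F(3, 2), F(-2), F(1, 2), F(1, 4)]     # substencil m = 0
C1 = [F(1, 4), F(1, 2), F(-2), F(3, 2), F(-1, 4)]     # substencil m = 1
C2 = [F(7, 4), F(-9, 2), F(4), F(-3, 2), F(1, 4)]     # substencil m = 2

# (3.21): match the big-stencil coefficients of g+(u_{i-2}), g+(u_{i-1}), g+(u_i).
d0 = C[0] / C0[0]
d1 = (C[1] - d0 * C0[1]) / C1[0]
d2 = (C[2] - d0 * C0[2] - d1 * C1[1]) / C2[0]
print(d0, d1, d2, d0 + d1 + d2)                       # 4/15 1/2 7/30 1
```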
Expanding the second term with the help of (3.17) we obtain \[\begin{split}\sum_{m=0}^{2}(\omega_{m}^{\pm}-d_{m})\hat{G}_{i\pm \frac{1}{2}}^{(m)}&=\sum_{m=0}^{2}(\omega_{m}^{\pm}-d_{m})\bigg{(} G(x_{i\pm\frac{1}{2}})+\mathcal{B}_{m}\Delta x^{5}+\mathcal{O}(\Delta x^{6}) \bigg{)}\\ &=G(x_{i\pm\frac{1}{2}})\sum_{m=0}^{2}(\omega_{m}^{\pm}-d_{m})+ \Delta x^{5}\sum_{m=0}^{2}\mathcal{B}_{m}(\omega_{m}^{\pm}-d_{m})+\sum_{m=0}^ {2}(\omega_{m}^{\pm}-d_{m})\mathcal{O}(\Delta x^{6})\end{split} \tag{3.23}\] Substituting the result above at a finite difference formula for the exponential polynomial approximation \(\hat{G}_{i\pm\frac{1}{2}}\): \[\begin{split}\frac{\hat{G}_{i+\frac{1}{2}}-\hat{G}_{i-\frac{1}{2 }}}{\Delta x^{3}}&=\frac{G(x_{i+\frac{1}{2}})-G(x_{i-\frac{1}{2}} )}{\Delta x^{3}}+\mathcal{O}(\Delta x^{5})+\Bigg{[}\frac{\sum_{m=0}^{2}(\omega_ {m}^{+}-d_{m})\hat{G}_{i+\frac{1}{2}}^{(m)}-\sum_{m=0}^{2}(\omega_{m}^{-}-d_{ m})\hat{G}_{i-\frac{1}{2}}^{(m)}}{\Delta x^{3}}\Bigg{]},\\ &=g(u)_{xxx}\big{|}_{x=x_{i}}+\mathcal{O}(\Delta x^{5})+\Bigg{[} \frac{G(x_{i+\frac{1}{2}})\sum_{m=0}^{2}(\omega_{m}^{+}-d_{m})-G(x_{i-\frac{1 }{2}})\sum_{m=0}^{2}(\omega_{m}^{-}-d_{m})}{\Delta x^{3}}\Bigg{]}\\ &+\Delta x^{2}\sum_{m=0}^{2}\mathcal{B}_{m}(\omega_{m}^{+}-\omega_ {m}^{-})+\Bigg{[}\sum_{m=0}^{2}(\omega_{m}^{+}-d_{m})-\sum_{m=0}^{2}(\omega_{m }^{-}-d_{m})\Bigg{]}\mathcal{O}(\Delta x^{3}).\end{split} \tag{3.24}\] The \(\mathcal{O}(\Delta x^{5})\) term remains after division by \(\Delta x^{3}\) because \(\mathcal{A}^{+}=\mathcal{A}^{-}\) in (3.22). Thus, necessary and sufficient conditions for fifth-order convergence in (2.7) are given by \[\begin{split}\sum_{m=0}^{2}(\omega_{m}^{\pm}-d_{m})& =\mathcal{O}(\Delta x^{8}),\\ \sum_{m=0}^{2}\mathcal{B}_{m}(\omega_{m}^{+}-\omega_{m}^{-})& =\mathcal{O}(\Delta x^{3}),\\ \omega_{m}^{\pm}-d_{m}&=\mathcal{O}(\Delta x^{2}).\end{split} \tag{3.25}\] Note that the first constraint is always satisfied due to normalization \(\sum_{m=0}^{2}\omega_{m}^{\pm}=\sum_{m=0}^{2}d_{m}\) and, from (3.23), we see that a sufficient condition for fifth-order of convergence is given by \[\omega_{m}^{\pm}-d_{m}=\mathcal{O}(\varDelta x^{3}). \tag{3.26}\] ## 4. Smoothness indicators and non-linear weights The smoothness indicator plays a pivotal role in WENO reconstruction, as it serves as a fundamental factor in determining the non-linear weights. These weights are derived by assessing the smoothness of the local solution within each sub-stencil \(S_{m}\). In this section, we introduce a novel set of non-linear weights that enhances existing fifth-order WENO schemes. We constructs the local and global smoothness indicators employing an \(L^{1}\)-norm approach [26]. This global indicator gauges the approximate magnitude of the derivatives of the local solution within each sub-stencil. Moreover, it is demonstrated that within smooth regions, the nonlinear weights closely approximate the linear weights at a rate of \(\mathcal{O}(\varDelta x^{3})\), even near the critical points. The proof presented in this section establishes the fulfilment of condition (3.26), thereby ensuring that the newly proposed scheme achieves fifth-order accuracy in smooth areas. 
### Development of global and local smoothness indicators For the construction of a smoothness indicator, we use \(n^{\text{th}}\)-order generalized undivided differences \(\mathcal{D}_{m}^{n}g(u)\) (\(n=3,4\)) of \(g(u)\) on the stencil \(S_{m}\), \(m=0,1,2\) is given by \[\mathcal{D}_{m}^{n}g(u_{i+\frac{1}{2}}):=\sum_{x_{j}\in S_{m}}a_{m,j}^{[n]}g( u(x_{j})). \tag{4.1}\] Let \(n_{m}\) denote the number of points inside the stencil \(S_{m}\) and define the coefficient vector \(\mathbf{a}_{m}^{[n]}:=(a_{m,j}^{[n]}:x_{j}\in S_{m})^{T}\) in (4.1) by solving the linear system \[\mathbf{V}\cdot\mathbf{a}_{m}^{[n]}=\mathbf{f}\mathbf{H}^{[n]},\] for the non-singular matrix \[\mathbf{V}:=\bigg{(}\frac{(x_{j}-x_{i+\frac{1}{2}})^{l}}{\varDelta x^{l}l!}:x _{j}\in S_{m},l=0,\ldots,n_{m}-1\bigg{)},\quad\text{and}\quad\mathbf{f} \mathbf{H}^{[n]}:=(\delta_{n,l}:l=0,\ldots,n_{m}-1)^{T}.\] Note that the coefficients in (4.1) are independent of \(\varDelta x\) and evaluation point \(x_{i+\frac{1}{2}}\). The operators \(\mathcal{D}_{m}^{3}g(u)\) and \(\mathcal{D}_{m}^{4}g(u)\) for \(m=0,1,2\), can be written as \[\begin{split}\mathcal{D}_{0}^{3}g(u)&=-g(u_{i-1})+3 g(u_{i})-3g(u_{i+1})+g(u_{i+2}),\\ \mathcal{D}_{1}^{3}g(u)&=-g(u_{i-1})+3g(u_{i})-3g(u_ {i+1})+g(u_{i+2}),\\ \mathcal{D}_{2}^{3}g(u)&=-2g(u_{i})+7g(u_{i+1})-9g(u _{i+2})+5g(u_{i+3})-g(u_{i+4}),\\ \mathcal{D}_{0}^{4}g(u)&=g(u_{i-2})-4g(u_{i-1})+6g(u _{i})-4g(u_{i+1})+g(u_{i+2}),\\ \mathcal{D}_{1}^{4}g(u)&=g(u_{i-1})-4g(u_{i})+6g(u _{i+1})-4g(u_{i+2})+g(u_{i+3}),\\ \mathcal{D}_{2}^{4}g(u)&=g(u_{i})-4g(u_{i+1})+6g(u _{i+2})-4g(u_{i+3})+g(u_{i+4}).\end{split} \tag{4.2}\] Then a simple calculation with Taylor expansion shows that they can approximate the derivatives with higher accuracy than classical undivided differences. **Theorem 4.1**.: _Let the stencil \(S_{m}\) be a stencil around \(x_{i+\frac{1}{2}}\) with \(\#S_{m}=n_{m}\), and assume that \(g\in\mathcal{C}^{n_{m}}(\varOmega)\), where \(\varOmega\) is an open interval containing \(S_{m}\). Then, the functional \(\mathcal{D}_{m}^{n}g(u_{i+\frac{1}{2}})\) in (4.1) has the convergence property_ \[\mathcal{D}_{m}^{n}g(u_{i+\frac{1}{2}})=\frac{d^{n}}{dx^{n}}g(u)\bigg{|}_{x=x_ {i+\frac{1}{2}}}\varDelta x^{n}+\mathcal{O}(\varDelta x^{5}),\quad n=3,4. \tag{4.3}\] We define the smoothness indicator \(\beta_{m}\) in each substencil by \[\beta_{m}=|\mathcal{D}_{m}^{3}g(u)|+|\mathcal{D}_{m}^{4}g(u)|,\quad m=0,1,2, \tag{4.4}\] and the global smoothness indicator \(\zeta\) is simply defined as the absolute difference between \(\beta_{0}\) and \(\beta_{2}\), i.e., \[\zeta=|\beta_{0}-\beta_{2}|. \tag{4.5}\] ### Construction of non-linear weights and analysis of convergence order The non-linear weights for the scheme are defined as [32] \[\omega_{m}=\frac{\alpha_{m}}{\sum_{j=0}^{2}\alpha_{j}},\quad\alpha_{m}=d_{m} \bigg{(}1+\frac{\zeta}{\beta_{m}+\Delta x^{2}}\bigg{)},m=0,1,2. \tag{4.6}\] To attain the optimal order approximation for \(G(x_{i+\frac{1}{2}})\) in smooth regions, the weights should converge appropriately towards the ideal weights as \(\Delta x\) approaches zero. Conversely, in regions where a discontinuity is present, the weights should effectively eliminate the contribution of stencils that contain the discontinuity. 
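For illustration, the quantities (4.2) and (4.4)–(4.6) are straightforward to evaluate from the seven point values \(g^{+}(u_{i-2}),\ldots,g^{+}(u_{i+4})\). The sketch below uses the \(\lambda\Delta x\to 0\) ideal weights \((\frac{4}{15},\frac{1}{2},\frac{7}{30})\) purely as placeholders, since in the WENO-E scheme the \(d_{m}\) depend on \(\lambda\); for smooth data the returned weights stay close to the ideal weights, consistent with (3.26).

```python
import numpy as np

def weno_e_weights(gp, d, dx):
    """Nonlinear weights (4.6) from gp = [g+(u_{i-2}), ..., g+(u_{i+4})]
    and ideal weights d = (d0, d1, d2)."""
    g = np.asarray(gp, dtype=float)
    # Generalized undivided differences (4.2); array index 0 corresponds to i-2.
    D3 = [-g[1] + 3*g[2] - 3*g[3] + g[4],
          -g[1] + 3*g[2] - 3*g[3] + g[4],
          -2*g[2] + 7*g[3] - 9*g[4] + 5*g[5] - g[6]]
    D4 = [g[0] - 4*g[1] + 6*g[2] - 4*g[3] + g[4],
          g[1] - 4*g[2] + 6*g[3] - 4*g[4] + g[5],
          g[2] - 4*g[3] + 6*g[4] - 4*g[5] + g[6]]
    beta = np.array([abs(a) + abs(b) for a, b in zip(D3, D4)])    # (4.4)
    zeta = abs(beta[0] - beta[2])                                 # (4.5)
    alpha = np.asarray(d) * (1.0 + zeta / (beta + dx ** 2))       # (4.6)
    return alpha / alpha.sum()

dx = 0.01
x = dx * np.arange(-2, 5)        # the seven nodes x_{i-2}, ..., x_{i+4} around x_i = 0
print(weno_e_weights(np.sin(1.0 + x), (4/15, 1/2, 7/30), dx))
```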
Using the concept of Taylor expansion, the operators \(\mathcal{D}_{m}^{3}g(u)\) and \(\mathcal{D}_{m}^{4}g(u)\) for \(m=0,1,2\) can be represented as follows \[\mathcal{D}_{0}^{3}g(u) =\Delta x^{3}g_{i+\frac{1}{2}}^{(3)}+\frac{1}{8}\Delta x^{5}g_{i +\frac{1}{2}}^{(5)}+\mathcal{O}(\Delta x^{6}),\] \[\mathcal{D}_{1}^{3}g(u) =\Delta x^{3}g_{i+\frac{1}{2}}^{(3)}+\frac{1}{8}\Delta x^{5}g_{i +\frac{1}{2}}^{(5)}+\mathcal{O}(\Delta x^{6}),\] \[\mathcal{D}_{2}^{3}g(u) =\Delta x^{3}g_{i+\frac{1}{2}}^{(3)}-\frac{7}{8}\Delta x^{5}g_{i +\frac{1}{2}}^{(5)}+\mathcal{O}(\Delta x^{6}),\] \[\mathcal{D}_{0}^{4}g(u) =\Delta x^{4}g_{i+\frac{1}{2}}^{(4)}-\frac{1}{2}\Delta x^{5}g_{i +\frac{1}{2}}^{(5)}+\mathcal{O}(\Delta x^{6}),\] \[\mathcal{D}_{1}^{4}g(u) =\Delta x^{4}g_{i+\frac{1}{2}}^{(4)}+\frac{1}{2}\Delta x^{5}g_{i +\frac{1}{2}}^{(5)}+\mathcal{O}(\Delta x^{6}),\] \[\mathcal{D}_{2}^{4}g(u) =\Delta x^{4}g_{i+\frac{1}{2}}^{(4)}+\frac{3}{2}\Delta x^{5}g_{i +\frac{1}{2}}^{(5)}+\mathcal{O}(\Delta x^{6}). \tag{4.7}\] By definition of each \(\beta_{m}\) with \(m=0,1,2\), it is straightforward that the truncation error of the local smooth indicators \(\beta_{m}\) is of the form \[\beta_{0} =|g_{i+\frac{1}{2}}^{(3)}|\Delta x^{3}+|g_{i+\frac{1}{2}}^{(4)}| \Delta x^{4}-\frac{3}{8}|g_{i+\frac{1}{2}}^{(5)}|\Delta x^{5}+\frac{7}{24}|g_{ i+\frac{1}{2}}^{(6)}|\Delta x^{6}-\frac{187}{1920}|g_{i+\frac{1}{2}}^{(7)}| \Delta x^{7}+\mathcal{O}(\Delta x^{8}),\] \[\beta_{1} =|g_{i+\frac{1}{2}}^{(3)}|\Delta x^{3}+|g_{i+\frac{1}{2}}^{(4)}| \Delta x^{4}+\frac{5}{8}|g_{i+\frac{1}{2}}^{(5)}|\Delta x^{5}+\frac{7}{24}|g_{ i+\frac{1}{2}}^{(6)}|\Delta x^{6}+\frac{71}{640}|g_{i+\frac{1}{2}}^{(7)}| \Delta x^{7}+\mathcal{O}(\Delta x^{8}),\] \[\beta_{2} =|g_{i+\frac{1}{2}}^{(3)}|\Delta x^{3}+|g_{i+\frac{1}{2}}^{(4)}| \Delta x^{4}+\frac{5}{8}|g_{i+\frac{1}{2}}^{(5)}|\Delta x^{5}+\frac{7}{24}|g_{ i+\frac{1}{2}}^{(6)}|\Delta x^{6}+\frac{71}{640}|g_{i+\frac{1}{2}}^{(7)}| \Delta x^{7}+\mathcal{O}(\Delta x^{8}).\] \[\zeta =|g_{i+\frac{1}{2}}^{(5)}|\Delta x^{5}+\frac{5}{24}|g_{i+\frac{1} {2}}^{(7)}|\Delta x^{7}+\mathcal{O}(\Delta x^{9}). \tag{4.8}\] It implies that \[\alpha_{m}=d_{m}\bigg{(}1+\frac{\zeta}{\beta_{m}+\Delta x^{2}}\bigg{)}=\begin{cases} d_{m}\big{(}1+\mathcal{O}(\Delta x^{3})\big{)},&\text{if}\quad g_{xxx}^{+}\neq 0,\\ d_{m}\big{(}1+\mathcal{O}(\Delta x^{3})\big{)},&\text{if}\quad g_{xxx}^{+}=0,g_{ xxxx}^{+}\neq 0.\end{cases}. \tag{4.9}\] Then, using the fact that \(\sum_{m=0}^{2}d_{m}=1\) and \(\omega_{m}=\frac{\alpha_{m}}{\sum_{j=0}^{2}\alpha_{j}}\) \[d_{m}=\frac{1}{1+\frac{\zeta}{\beta_{m}+\Delta x^{2}}}\omega_{m}\big{(}1+ \mathcal{O}(\Delta x^{3})\big{)}\sum_{m=0}^{2}d_{m}=\omega_{m}+\mathcal{O}( \Delta x^{3}). \tag{4.10}\] Thus, the non-linear weights satisfy the sufficient condition (3.26). **Theorem 4.2**.: _Assume that \(d_{m},m=0,1,2\), is the linear weights and that the function \(g(u)_{xxx}\) is smooth around the big stencil \(S_{7}\), so the nonlinear weights (4.6) satisfy the conditions (3.26) even in the presence of critical points where the third and higher derivatives vanish._ Now we summarise the proposed scheme in algorithmic fashion. We iteratively solve equation (2.1) until the final time \(t=T\). The numerical solution at the \(n^{\text{th}}\) time step \(t=t^{n}\) is denoted by \(\{u_{i}^{n}:=u^{n}(x_{i})\}\). Commencing with \(n=0\) and \(\{u_{i}^{0}\}\) being a given initial condition, the following steps are taken: **Algorithm 1** 1. Establishing a uniform mesh distribution. 1. 
Define the spatial domain as \(x\in[x_{l},x_{r}]\). 2. Set the number of grid points as \(N\). 3. Calculate the spatial step size as \(\Delta x=\frac{(x_{r}-x_{l})}{N}\). 2. Finite difference WENO scheme for the third derivatives. 1. The conservative form of the equation within the MOL framework is defined, employing the numerical fluxes \(\hat{F}\) and \(\hat{G}\) for convection and dispersion, correspondingly. 2. For the convection term, the flux construction procedure outlined in reference [11] is followed. 3. The approximation of fluxes for the dispersion term is achieved by applying the WENO reconstruction approach as detailed in equation (2.7). 3. Construct exponential WENO-E reconstruction at the cell interface \(x_{i+\frac{1}{2}}\). 1. Split the numerical flux \(\hat{G}_{i+\frac{1}{2}}\) into positive part \(\hat{G}_{i+\frac{1}{2}}^{+}\) and negative part \(\hat{G}_{i+\frac{1}{2}}^{-}\) as in (2.8). 2. Construct the approximations \(\hat{G}_{i+\frac{1}{2}}^{+}\) based on exponential polynomial (2.12) on stencil \(S_{7}\) and local approximations \(\hat{G}_{i+\frac{1}{2}}^{(m)}\) based on exponential polynomial (2.13) on each substencil \(S_{m}\) for \(m=0,1,2\). 3. Determine the ideal weights \(d_{m}\) and form the approximation given by (3.20). 4. Analogously, replication of steps 3(b) and 3(c) to obtain \(\hat{G}_{i+\frac{1}{2}}^{-}\). 4. Compute non-linear weights. 1. Calculate undivided differences of \(g(u)\) on the associated substencils : \(\mathcal{D}_{S_{m}^{+}}^{n}\), \(\mathcal{D}_{S_{m}^{-}}^{n}\) for \(n=3,4\), \(m=0,1,2\). 2. Evaluate the smoothness indicators \(\beta_{m}\) on each substencil \(S_{m}\) as in (4.4) and determine the value \(\zeta\) as in (4.5). 3. Compute the unnormalize weights \(\alpha_{m}\) for each \(m=0,1,2\) and using \(\zeta\) and \(\alpha_{m}\), compute the nonlinear weights \(\omega_{m}\) as in (4.6). 5. Repeat step 3 and 4, to construct numerical flux \(\hat{G}_{i-\frac{1}{2}}\) by shifting each index by -1. 6. Time discretization. 1. Update the time step from \(t^{n}\) to \(t^{n+1}=t^{n}+\Delta t^{n}\), by applying third order SSP Runge-Kutta scheme (5.2). 2. If \(t^{n+1}<T\), set the time step size \(\Delta t^{n+1}\) by (5.3) with updated wave speeds. In case \(t^{n+1}+\Delta t^{n+1}>T\), \(\Delta t^{n+1}\) is set \(T-t^{n+1}\). 3. Repeat steps beginning with step 3 for each time step until time \(T\). **Remark 4.3**.: When \(\lambda\Delta x\) is close to zero, the WENO-E scheme reduces to the WENO scheme based on algebraic polynomials as given in [9]. In the polynomial case, the numerical flux on the big stencil \(S_{7}\) is given by \[\hat{G}_{i+\frac{1}{2}}^{+}=\bigg{[}\frac{-1}{15}g^{+}(u_{i-2})+\frac{21}{40}g ^{+}(u_{i-1})+\frac{1}{8}g^{+}(u_{i})-\frac{23}{12}g^{+}(u_{i+1})+\frac{7}{4} g^{+}(u_{i+2})-\frac{19}{40}g^{+}(u_{i+3})+\frac{7}{120}g^{+}(u_{i+4})\bigg{]}. \tag{4.11}\] Also the numerical fluxes on the small stencils \(S^{m}=\{x_{i-2+m},\ldots,x_{i+2+m}\}\) with \(m=0,1,2\) are given by \[\hat{G}^{(0)}_{i+\frac{1}{2}} =-\frac{1}{4}g^{+}(u_{i-2})+\frac{3}{2}g^{+}(u_{i-1})-2g^{+}(u_{i} )+\frac{1}{2}g^{+}(u_{i+1})+\frac{1}{4}g^{+}(u_{i+2}),\] \[\hat{G}^{(1)}_{i+\frac{1}{2}} =\frac{1}{4}g^{+}(u_{i-1})+\frac{1}{2}g^{+}(u_{i})-2g^{+}(u_{i+1} )+\frac{3}{2}g^{+}(u_{i+2})-\frac{1}{4}g^{+}(u_{i+3}),\] \[\hat{G}^{(2)}_{i+\frac{1}{2}} =\frac{7}{4}g^{+}(u_{i})-\frac{9}{2}g^{+}(u_{i+1})+4g^{+}(u_{i+2} )-\frac{3}{2}g^{+}(u_{i+3})+\frac{1}{4}g^{+}(u_{i+4}). 
\tag{4.12}\] The smoothness indicator \(\beta_{m}\) of the interpolation polynomials on the smaller stencils \(S_{m}\) are given by \[\beta_{m}^{Z}=\sum_{\kappa=1}^{r}\varDelta x^{2\kappa-1}\int_{I_{j}}\biggl{(} \frac{d^{\kappa}}{dx^{\kappa}}\tilde{p}_{m}(x)\biggr{)}^{2}dx. \tag{4.13}\] In order to increase the accuracy of the nonlinear weights, an alternative to mapped nonlinear weights [9], we chose to use the WENO-Z nonlinear weights introduced in [17]. The nonlinear weights \(\omega_{m}^{Z}\) of WENO-Z are defined by \[\omega_{m}^{Z}=\frac{\alpha_{m}^{Z}}{\sum_{k=0}^{2}\alpha_{k}^{Z}},\quad \alpha_{m}^{Z}=d_{m}\biggl{(}1+\frac{\tau_{5}}{\beta_{m}^{Z}+\epsilon}\biggr{)},\quad\tau_{5}:=|\beta_{0}^{Z}-\beta_{2}^{Z}|,\quad m=0,1,2, \tag{4.14}\] where \(d_{0}=\frac{4}{15}\), \(d_{1}=\frac{1}{2}\), \(d_{2}=\frac{7}{30}\) are linear weights, \(\alpha_{m}^{Z}\) stands for unnormalized weights. **Remark 4.4**.: For the convection term, we use the fifth-order finite difference WENO scheme with WENO-Z nonlinear weights as (4.14) and its corresponding linear weights in [17]. **Remark 4.5**.: In later numerical experiments, we use \(\epsilon=\varDelta x^{2}\) to ensure the fifth-order convergence for WENO-Z and WENO-E schemes, even at critical points, while keeping ENO property near discontinuity [32, 33]. **Remark 4.6**.: The extension of the present scheme to the two-dimensional case can be done by discretizing the spatial dimensions one at a time. ## 5. Numerical results In this section, we illustrate several examples to test the proposed WENO-E-\(\lambda\varDelta x\) scheme (henceforth referred to as the WENO-E scheme) derived by the exponential approximation space for linear and nonlinear dispersion-type equations with various initial conditions. We compared the performance of the proposed scheme with the WENO-Z scheme, which is based on polynomial approximations. In the WENO-E scheme, the tension parameter \(\lambda\) plays a significant role. The numerical value for \(\lambda\) can be tuned according to the characteristics of the initial data; which however is time-consuming and resource intense. To simplify, the \(\lambda\) value is chosen based on the initial cell size such that \(0\leq\lambda\varDelta x\leq 0.1\). When \(\lambda\varDelta x\) is close to zero, the WENO-E scheme reduces to the standard WENO-Z scheme based on algebraic polynomials. On the other hand, when \(\lambda\varDelta x\) is large, the exponential polynomials can better capture the rapid variations in the solution near discontinuities, resulting in improved interpolation results. However, choosing a very large value of \(\lambda\varDelta x\) can lead to numerical instability and accuracy issues. Thus, an optimal value of \(\lambda\) is chosen depending on the given data feature and the required accuracy. For the time integration, we use the third-order explicit strong stability preserving (SSP) Runge-Kutta method [13]. For the ordinary differential equation of the form \[\frac{du}{dt}=RHS(u), \tag{5.1}\] the SSP Runge-Kutta method is given by \[\begin{split} u^{(1)}&=u^{n}+\Delta tRHS(u^{n}),\\ u^{(2)}&=\frac{3}{4}u^{n}+\frac{1}{4}u^{(1)}+\frac{1 }{4}\Delta tRHS(u^{(1)}),\\ u^{n+1}&=\frac{1}{3}u^{n}+\frac{2}{3}u^{(2)}+\frac{2 }{3}\Delta tRHS(u^{(2)}).\end{split} \tag{5.2}\] To ensure the \(CFL\) stability condition, the time step is given by \[\Delta t\leq\min\Biggl{(}\frac{CFL\cdot\Delta x^{\frac{5}{3}}}{\max(|f^{\prime }(u)|)},\frac{CFL\cdot\Delta x^{3}}{\max(|g^{\prime}(u)|)}\Biggr{)}. 
\tag{5.3}\] We have chosen \(CFL=0.3\) as obtained by Fourier analysis for the linear dispersion equation [9] for further analysis. **Example 5.1**.: Consider the linear KdV equations (Airy equation), \[\begin{cases}u_{t}+u_{xxx}=0,\quad(x,t)\in[0,2\pi]\times[0,T],\\ u_{0}(x)=\sin(x),\quad x\in[0,2\pi],\end{cases} \tag{5.4}\] with periodic boundary conditions and the exact solution is given by \(u(x,t)=\sin(x+t)\). We solve this linear equation up to the final time \(T=1\) by the WENO-E scheme and the WENO-Z scheme. The following error norms are used to compute the accuracy of schemes. \[L^{\infty}=\max_{0\leq i\leq N}|u_{e}-u_{a}|,\quad\ L^{1}=\frac{1}{N+1}\sum_{i =0}^{N}|u_{e}-u_{a}|,\] where \(u_{e}\) and \(u_{a}\) denote the exact and approximate solutions of the PDE. A comprehensive analysis of error norms, specifically \(L^{1}\) and \(L^{\infty}\) errors, is conducted across various combinations of the parameter \(\lambda\Delta x\) and grid resolution \(N\). The results are detailed in Appendix 1, where we have considered 15 distinct \(\lambda\Delta x\) values carefully chosen to span the entire error spectrum within the \([0,1]\) range. \(\lambda\Delta x\) can be categorized into three classes based on the observed outcomes. Firstly, there exists a category where \(\lambda\Delta x\) consistently yields precise and accurate results. Secondly, values in this category demonstrate reasonably accurate results for lower resolution and remains stagnant over larger values of \(N\). Lastly, there is a subset of \(\lambda\Delta x\) values significantly deviating from the optimal \(\lambda\Delta x\), resulting in inconsistent outcomes. In this example, we observe proximate convergence within the range of \(0.02\leq\lambda\Delta x\leq 0.04\), with \(\lambda\Delta x=0.02\) exhibiting the lowest errors among these values. Additionally, we deliberately included \(\lambda\Delta x\) values of \(0.06\) and \(0.1\) in our analysis to illustrate the substantial incongruity in results that arises when \(\lambda\Delta x\) values significantly diverge from the optimal choice. Figure 2 provides a comparison of errors between the WENO-Z scheme and WENO-E scheme with \(\lambda\Delta x\) values of \(0.02\), \(0.04\), \(0.06\), and \(0.1\) in terms of the \(L^{\infty}\)- and \(L^{1}\)-norms, respectively. Furthermore, Table 1 presents a summary of the \(L^{\infty}\)-error, \(L^{1}\)-error, and order of convergence for all schemes under consideration. **Example 5.3**.: Consider the nonlinear KdV equation \[\left\{\begin{aligned} u_{t}-3(u^{2})_{x}+u_{xxx}&=0, \quad x\in[-10,10],\quad t\geq 0,\\ u_{0}(x)&=-2\operatorname{sech}^{2}(x),\quad x\in[-10,1 0].\end{aligned}\right. 
\tag{5.6}\] \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \(N\) & \multicolumn{2}{c}{**WENO-Z**} & \multicolumn{2}{c}{**WENO-E-0.02**} & \multicolumn{2}{c}{**WENO-E-0.04**} & \multicolumn{2}{c}{**WENO-E-0.06**} & \multicolumn{2}{c}{**WENO-E-0.1**} \\ \cline{2-10} & \(L^{\infty}\)-error & **Rate** & \(L^{\infty}\)-error & **Rate** & \(L^{\infty}\)-error & **Rate** & \(L^{\infty}\)-error & **Rate** \\ \hline 10 & 2.6042e-03 & - & 2.5610e-03 & - & 2.5611e-03 & - & 2.5611e-03 & - & 2.5602e-03 & - \\ 20 & 8.704e-05 & 4.9029 & 8.7186e-05 & 4.8765 & 8.7170e-05 & 4.8768 & 8.7085e-05 & 4.8782 & 8.6335e-05 & 4.8902 \\ 40 & 2.7752e-06 & 4.9711 & 2.7735e-06 & 4.9743 & 2.7627e-06 & 4.9797 & 2.7154e-06 & 5.0032 & 2.3194e-06 & 5.2181 \\ 80 & 8.7052e-08 & 4.9946 & 8.6670e-08 & 5.0001 & 8.1182e-08 & 5.0888 & 5.7395e-08 & 5.5641 & 1.4167e-07 & 4.0331 \\ 160 & 2.7262e-09 & 4.9969 & 2.5429e-09 & 5.0910 & 2.0500e-10 & 8.6294 & 1.2112e-08 & 2.2445 & 1.1176e-07 & 3.4211e-01 \\ 320 & 1.1102e-10 & 4.6180 & 2.1082e-11 & 6.9143 & 1.3553e-09 & -2.7250 & 7.3103e-09 & 7.2848e-01 & 5.7148e-08 & 9.6769e-01 \\ \hline \(N\) & \multicolumn{2}{c}{**WENO-Z**} & \multicolumn{2}{c}{**WENO-E-0.02**} & \multicolumn{2}{c}{**WENO-E-0.04**} & \multicolumn{2}{c}{**WENO-E-0.06**} & \multicolumn{2}{c}{**WENO-E-0.1**} \\ \cline{2-10} & \(L^{1}\)-error & **Rate** & \(L^{1}\)-error & **Rate** & \(L^{1}\)-error & **Rate** & \(L^{1}\)**-error & **Rate** \\ \hline 10 & 1.7456e-03 & - & 1.7519e-03 & - & 1.7520e-03 & - & 1.7520e-03 & - & 1.7516e-03 & - \\ 20 & 5.6856e-05 & 4.9403 & 5.7100e-05 & 4.9393 & 5.7089e-05 & 4.9396 & 5.7031e-05 & 4.9411 & 5.6536e-05 & 4.9534 \\ 40 & 1.7815e-06 & 4.9962 & 1.7820e-06 & 5.0019 & 1.7751e-06 & 5.0072 & 1.7447e-06 & 5.0307 & 1.4902e-06 & 5.2456 \\ 80 & 5.5652e-08 & 5.0005 & 5.5421e-08 & 5.0070 & 5.1911e-08 & 5.0957 & 3.6701e-08 & 5.5710 & 9.0592e-08 & 4.0400 \\ 160 & 1.7390e-09 & 5.0001 & 1.6221e-09 & 5.0945 & 1.3076e-10 & 8.6330 & 7.7265e-09 & 2.2479 & 7.1295e-08 & 3.4559e-01 \\ 320 & 7.0742e-11 & 4.6195 & 1.3412e-11 & 6.9182 & 8.6372e-10 & -2.7236 & 4.6589e-09 & 7.2983e-01 & 3.6420e-08 & 9.6905e-01 \\ \hline \end{tabular}. \end{table} Table 1. Comparison of WENO-Z and WENO-E schemes in terms of \(L^{\infty}\)- and \(L^{1}\)- errors along with their convergence rate for Example 5.1 over the domain \(\Omega=[0,2\pi]\) at time \(T=1\). Figure 2. Comparison WENO-Z and WENO-E schemes in terms of \(L^{\infty}\) and \(L^{1}\) errors (in \(\log_{10}\) scale) for Example 5.1 at \(T=1\) The exact solution is given by \(u(x,t)=-2\,\mathrm{sech}^{2}(x-4t)\). We embark on the numerical computation of the solution within the spatial domain \(x\in[-10,10]\) at a specific time \(T=0.5\) while employing periodic boundary conditions. The \(L^{\infty}\)-error, \(L^{1}\)-error and convergence order are reported in Table 3 and Figure 5. A comparison of numerical solutions obtained from WENO-Z and WENO-E schemes with \(N=640\) at \(T=0.5\) in \(x\in[-10,10]\) is given in Figure 6. 
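The convergence rates in the tables above are the observed orders \(\log_{2}(e_{N}/e_{2N})\) under grid doubling. They can also be checked in isolation for the spatial operator: the sketch below applies the flux difference (2.7) with the \(\lambda\Delta x\to 0\) big-stencil coefficients of Remark 4.3 (so without flux splitting or WENO weighting) to \(g(u)=\sin x\) on the periodic grid of Example 5.1 and prints errors that decay at close to fifth order. This is a simplified consistency check, not the full WENO-E scheme.

```python
import numpy as np

C = np.array([-1/15, 21/40, 1/8, -23/12, 7/4, -19/40, 7/120])   # Remark 4.3, (4.11)

def third_derivative(g, dx):
    """(G_{i+1/2} - G_{i-1/2}) / dx^3 with the linear big-stencil flux, cf. (2.7)."""
    Ghat = sum(c * np.roll(g, 2 - j) for j, c in enumerate(C))   # G_{i+1/2}
    return (Ghat - np.roll(Ghat, 1)) / dx ** 3                   # shift by -1 gives G_{i-1/2}

errors = []
for N in (40, 80, 160):
    dx = 2 * np.pi / N
    x = dx * np.arange(N)
    errors.append(np.max(np.abs(third_derivative(np.sin(x), dx) + np.cos(x))))
print(errors, [np.log2(a / b) for a, b in zip(errors, errors[1:])])   # rates close to 5
```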
The result of WENO-E with \(\lambda\Delta x=0.04\) is close to the exact solution \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \(N_{x}\times N_{y}\) & \multicolumn{2}{c}{WENO-Z} & \multicolumn{2}{c}{WENO-E-0.02} & \multicolumn{2}{c}{WENO-E-0.04} & \multicolumn{2}{c}{WENO-E-0.06} & \multicolumn{2}{c}{WENO-E-0.1} \\ \cline{2-10} & \(L^{\infty}\)**-error** & **Rate** & \(L^{\infty}\)**-error** & **Rate** & \(L^{\infty}\)**-error** & **Rate** & \(L^{\infty}\)**-error** & **Rate** \\ \hline \(10\times 10\) & 5.4139e-03 & - & 5.3032e-03 & - & 5.3035e-03 & - & 5.3035e-03 & - & 5.3020e-03 & - \\ \(20\times 20\) & 1.7482e-04 & 4.9527 & 1.7505e-04 & 4.9210 & 1.7502e-04 & 4.9214 & 1.7484e-04 & 4.9228 & 1.7333e-04 & 4.9349 \\ \(40\times 40\) & 5.5487e-06 & 4.9776 & 5.5462e-06 & 4.9801 & 5.5245e-06 & 4.9855 & 5.4300e-06 & 5.0090 & 4.6380e-06 & 5.2239 \\ \(80\times 80\) & 1.7411e-07 & 4.9940 & 1.7336e-07 & 4.9997 & 1.6238e-07 & 5.0884 & 1.1480e-07 & 5.5637 & 2.8337e-07 & 4.0327 \\ \(160\times 160\) & 5.4551e-09 & 4.9963 & 5.0885e-09 & 5.0904 & 4.0691e-10 & 8.6405 & 2.4219e-08 & 2.2450 & 2.2350e-07 & 0.3425 \\ \hline \(N_{x}\times N_{y}\) & \multicolumn{2}{c}{WENO-Z} & \multicolumn{2}{c}{WENO-E-0.02} & \multicolumn{2}{c}{WENO-E-0.04} & \multicolumn{2}{c}{WENO-E-0.06} & \multicolumn{2}{c}{WENO-E-0.1} \\ \cline{2-10} & \(L^{1}\)**-error** & **Rate** & \(L^{1}\)**-error** & **Rate** & \(L^{1}\)**-error** & **Rate** & \(L^{1}\)**-error** & **Rate** \\ \hline \(10\times 10\) & 3.4560e-03 & - & 3.4272e-03 & - & 3.4273e-03 & - & 3.4273e-03 & - & 3.4261e-03 & - \\ \(20\times 20\) & 1.1104e-04 & 4.9599 & 1.1149e-04 & 4.9420 & 1.1147e-04 & 4.9423 & 1.1136e-04 & 4.9438 & 1.1039e-04 & 4.9559 \\ \(40\times 40\) & 3.5318e-06 & 4.9746 & 3.5329e-06 & 4.9800 & 3.5191e-06 & 4.9853 & 3.4588e-06 & 5.0088 & 2.9543e-06 & 5.2237 \\ \(80\times 80\) & 1.1080e-07 & 4.9944 & 1.1033e-07 & 5.0009 & 1.0335e-07 & 5.0896 & 7.3069e-08 & 5.5649 & 1.8036e-07 & 4.0339 \\ \(160\times 160\) & 3.4732e-09 & 4.9955 & 3.2400e-09 & 5.0898 & 2.5903e-10 & 8.6402 & 1.5421e-08 & 2.2443 & 1.4231e-07 & 0.3418 \\ \hline \hline \end{tabular} \end{table} Table 2. Comparison of WENO-Z and WENO-E schemes in terms of \(L^{\infty}\)- and \(L^{1}\)- errors along with their convergence rate for Example 5.2 over the domain \(\Omega=(0,2\pi)\times(0,2\pi)\) at time \(T=1\). Figure 3. Comparison WENO-Z and WENO-E schemes in terms of \(L^{1}\) and \(L^{\infty}\) errors (in \(\log_{10}\) scale) for Example 5.2 at \(T=1\). and better compared to the polynomial case. As the number of grid points increases, the error converges to machine epsilon. **Example 5.4**.: Consider the following classical KdV equation with zero dispersion limit of conservation law \[u_{t}+\left(\frac{u^{2}}{2}\right)_{x}+\epsilon u_{xxx}=0,\quad x\in[0,1],\quad t \geq 0, \tag{5.7}\] Figure 4. Comparison of numerical solutions obtained from using WENO-Z and WENO-E schemes with \(N_{x}\times N_{y}=80\times 80\) at \(T=1\) in \((0,2\pi)\times(0,2\pi)\), for Example, 5.2. 
\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \(N\) & \multicolumn{2}{c}{**WENO-Z**} & \multicolumn{2}{c}{**WENO-E-0.02**} & \multicolumn{2}{c}{**WENO-E- 0.04**} & \multicolumn{2}{c}{**WENO-E-0.06**} & \multicolumn{2}{c}{**WENO-E-0.1**} \\ \cline{2-10} & \(L^{\infty}\)-error & **Rate** & \(L^{\infty}\)-error & **Rate** & \(L^{\infty}\)-error & **Rate** & \(L^{\infty}\)-error & **Rate** & \(L^{\infty}\)**-error & **Rate** \\ \hline 80 & 8.3944e-02 & - & 1.2351e-02 & - & 1.2351e-02 & - & 1.2352e-02 & - & 1.2354e-02 & - \\ 160 & 3.6399e-04 & 7.8494 & 4.1591e-04 & 4.8922 & 4.1591e-04 & 4.8922 & 4.1583e-04 & 4.8926 & 4.1486e-04 & 4.8962 \\ 320 & 1.2547e-05 & 4.8586 & 1.3086e-05 & 4.9901 & 1.3072e-05 & 4.9918 & 1.3004e-05 & 4.9990 & 1.2429e-05 & 5.0608 \\ 640 & 8.5693e-07 & 3.8720 & 8.5676e-07 & 3.9330 & 8.5721e-07 & 3.9306 & 8.5918e-07 & 3.9199 & 8.7566e-07 & 3.8272 \\ 1280 & 8.9345e-07 & -0.0602 & 8.9347e-07 & -0.0605 & 8.9369e-07 & -0.0601 & 8.9467e-07 & -0.0584 & 9.0287e-07 & -0.0441 \\ \hline \(N\) & \multicolumn{2}{c}{**WENO-Z**} & \multicolumn{2}{c}{**WENO-E-0.02**} & \multicolumn{2}{c}{**WENO-E-0.04**} & \multicolumn{2}{c}{**WENO-E-0.06**} & \multicolumn{2}{c}{**WENO-E-0.1**} \\ \cline{2-10} & \(L^{1}\)-error & **Rate** & \(L^{1}\)**-error** & **Rate** & \(L^{1}\)**-error** & **Rate** & \(L^{1}\)**-error** & **Rate** \\ \hline 80 & 2.3854e-02 & - & 2.1970e-03 & - & 2.1972e-03 & - & 2.1974e-03 & - & 2.1979e-03 & - \\ 160 & 5.9718e-05 & 8.6419 & 6.8676e-05 & 4.9996 & 6.8676e-05 & 4.9997 & 6.8663e-05 & 5.0001 & 6.8506e-05 & 5.0038 \\ 320 & 1.9878e-06 & 4.9089 & 2.0648e-06 & 5.0557 & 2.0624e-06 & 5.0574 & 2.0510e-06 & 5.0651 & 1.9547e-06 & 5.1312 \\ 640 & 8.4259e-08 & 4.5602 & 8.4750e-08 & 4.6067 & 8.3420e-08 & 4.6278 & 7.7660e-08 & 4.7230 & 4.4961e-08 & 5.4421 \\ 1280 & 2.5115e-08 & 1.7463 & 2.5080e-08 & 1.7567 & 2.4566e-08 & 1.7637 & 2.5510e-08 & 1.6061 & 4.8688e-08 & -0.1149 \\ \hline \end{tabular} \end{table} Table 3. Comparison of WENO-Z and WENO-E schemes in terms of \(L^{\infty}\)- and \(L^{1}\)- errors along with their convergence rate for Example 5.3 over the domain \(\Omega=[-10,10]\) at time \(T=0.5\). with continuous initial condition \[u(x,0)=2+0.5\sin(2\pi x),\quad x\in[0,1], \tag{5.8}\] Figure 5. Comparison WENO-Z and WENO-E schemes in terms of \(L^{1}\) and \(L^{\infty}\) errors (in \(\log_{10}\) scale) for Example 5.3 at \(T=0.5\). Figure 6. Comparison of numerical solutions obtained from using WENO-Z and WENO-E schemes with \(N=640\) at \(T=0.5\) in \(x\in[-10,10]\), for Example 5.3. and discontinuous initial condition \[u(x,0)=\begin{cases}1,&\text{if}\quad 0.25<x<4,\\ 0,&\text{else}.\end{cases} \tag{5.9}\] The dispersive non-linear KdV equation with initial condition (5.8) with periodic boundary condition has a dispersive shock wave behavior and produces continuous wavelets in the vicinity of the discontinuity for small \(\epsilon\). 
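For completeness, the time marching used in these experiments is the SSP Runge–Kutta update (5.2) with the step-size restriction (5.3). A minimal sketch, assuming a user-supplied function `rhs(u)` that evaluates the semi-discrete right-hand side in (5.1):

```python
import numpy as np

def ssp_rk3_step(u, dt, rhs):
    """One step of the third-order SSP Runge-Kutta method (5.2)."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * u1 + 0.25 * dt * rhs(u1)
    return u / 3.0 + 2.0 / 3.0 * u2 + 2.0 / 3.0 * dt * rhs(u2)

def time_step(dx, fprime_max, gprime_max, cfl=0.3):
    """CFL-limited step size from (5.3), given the wave speeds max|f'(u)| and max|g'(u)|."""
    return cfl * min(dx ** (5.0 / 3.0) / fprime_max, dx ** 3 / gprime_max)

# Usage with a placeholder right-hand side (the actual rhs assembles the WENO
# fluxes for the convection and dispersion terms):
rhs = lambda u: np.zeros_like(u)
u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
u = ssp_rk3_step(u, time_step(2.0 * np.pi / 64, 1.0, 1.0), rhs)
```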
This example demonstrates the effectiveness of our scheme in accurately resolving high-frequency \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \(N\) & \multicolumn{2}{c}{**WENO-Z**} & \multicolumn{2}{c}{**WENO-E-0.02**} & \multicolumn{2}{c}{**WENO-E-0.04**} & \multicolumn{2}{c}{**WENO-E-0.06**} & \multicolumn{2}{c}{**WENO-E-0.1**} \\ \cline{2-11} & \(\mathbf{L^{\infty}}\)-error & **Rate** & \(\mathbf{L^{\infty}}\)-error & **Rate** & \(\mathbf{L^{\infty}}\)-error & **Rate** & \(\mathbf{L^{\infty}}\)-error & **Rate** & \(\mathbf{L^{\infty}}\)-error & **Rate** \\ \hline 40 & 6.3724e-04 & - & 6.3724e-04 & - & 6.3724e-04 & - & 6.3718e-04 & - & 6.3673e-04 & - \\ 80 & 1.7932e-05 & 5.1513 & 1.7927e-05 & 5.1516 & 1.7920e-05 & 5.1522 & 1.7890e-05 & 5.1544 & 1.7642e-05 & 5.1736 \\ 160 & 7.0911e-07 & 4.6604 & 7.0899e-07 & 4.6602 & 7.0660e-07 & 4.6646 & 6.9624e-07 & 4.6835 & 6.6316e-07 & 4.7336 \\ 320 & 4.6784e-08 & 3.9219 & 4.6796e-08 & 3.9213 & 4.6983e-08 & 3.9107 & 4.7792e-08 & 3.8647 & 6.4315e-08 & 3.3661 \\ 640 & 5.0636e-09 & 3.2078 & 5.0663e-09 & 3.2074 & 5.1073e-09 & 3.2015 & 5.3168e-09 & 3.1682 & 3.4122e-08 & 0.9144 \\ 1280 & 4.1144e-10 & 3.6214 & 4.0924e-10 & 3.6299 & 5.0185e-10 & 3.3472 & 2.3221e-09 & 1.1951 & 1.7924e-08 & 0.9288 \\ \hline \hline \(N\) & \multicolumn{2}{c}{**WENO-Z**} & \multicolumn{2}{c}{**WENO-E-0.02**} & \multicolumn{2}{c}{**WENO-E-0.04**} & \multicolumn{2}{c}{**WENO-E-0.06**} & \multicolumn{2}{c}{**WENO-E-0.1**} \\ \cline{2-11} & \(\mathbf{L^{1}}\)-error & **Rate** & \(\mathbf{L^{1}}\)**-error & **Rate** & \(\mathbf{L^{1}}\)**-error & **Rate** & \(\mathbf{L^{1}}\)**-error & **Rate** \\ \hline 40 & 3.1608e-04 & - & 3.1654e-04 & - & 3.1654e-04 & - & 3.1644e-04 & - & 3.1628e-04 & - \\ 80 & 1.1146e-05 & 4.8256 & 1.1149e-05 & 4.8273 & 1.1146e-05 & 4.8278 & 1.1091e-05 & 4.8345 & 1.1005e-05 & 4.8449 \\ 160 & 4.5757e-07 & 4.6064 & 4.5746e-07 & 4.6072 & 4.5581e-07 & 4.6120 & 4.2927e-07 & 4.6914 & 3.8846e-07 & 4.8243 \\ 320 & 2.0152e-08 & 4.5050 & 2.0104e-08 & 4.5081 & 1.9391e-08 & 4.5550 & 1.3932e-08 & 4.9454 & 2.5432e-08 & 3.9330 \\ 640 & 1.4181e-09 & 3.8290 & 1.4092e-09 & 3.8346 & 1.3336e-09 & 3.8620 & 2.1872e-09 & 2.9288 & 1.7251e-08 & 0.5600 \\ 1280 & 1.5958e-10 & 3.1516 & 1.5226e-10 & 3.2102 & 2.1720e-10 & 2.6182 & 1.0973e-09 & 0.9951 & 8.7921e-09 & 0.9724 \\ \hline \hline \end{tabular} \end{table} Table 4. Comparison of WENO-Z and WENO-E schemes in terms of \(L^{\infty}\)- and \(L^{1}\)- errors along with their convergence rate for Example 5.5 over the domain \(\Omega=[0,2\pi]\) at time \(T=\pi/2\). wavelets even for very small values of \(\epsilon\). The solution is computed with \(\epsilon=10^{-4},10^{-5},10^{-6}\), respectively at \(T=0.5\) and plotted in Figure 7. The findings indicate that the simulations accurately capture physical oscillations, and the solutions are devoid of noise, especially before and after the occurrence of dispersive shocks. Stable numerical oscillations can be observed in both downstream and upstream of the continuous wavelets in solutions obtained with significantly coarser grids. Upon considering the aforementioned equation with a discontinuous initial condition and inflow-outflow boundary condition, we can see a dispersion-shock wave propagating to the left at each discontinuous interface. The amplitude of the wave increases over time. Figure 8 demonstrates how a top-hat initial solution transforms into a series of traveling waves. 
Even with physically discontinuous initial data, the zero-dispersion limit solutions do not exhibit discontinuity. Instead, they develop continuous fine-scale wavelets and eventually break up into solitary waves. The dispersion-shock wave plotted with the current scheme is more accurate and flexible than those plotted in previous studies [14, 15, 9] with the same mesh. **Example 5.5**.: The canonical traveling wave solution for the \(K(2,2)\) equation \[u_{t}+(u^{2})_{x}+(u^{2})_{xxx}=0, \tag{5.10}\] is given by the compacton \[u(x,t)=\begin{cases}\frac{4c}{3}\cos^{2}\left(\frac{x-ct}{4}\right),&\text{if }\ \ |x-ct|\leq 2\pi,\\ 0,&\text{else.}\end{cases}\] The accuracy of the scheme is measured away from the interference of the non-smooth interfaces in the interval \([0,2\pi]\) at \(T=\pi/2\) with the periodic boundary condition. Figure 12 and Table 4 present a comparison of the errors in the \(L^{\infty}\)- and \(L^{1}\)-norms between the WENO-Z scheme and WENO-E scheme with different values of \(\lambda\lambda x\)\((0.02,0.04,0.06,0.1)\). WENO-E scheme with \(\lambda\Delta x=0.02\) produces better accuracy as plotted in Figure 11. To observe the behavior of the decomposition of compacton into compacton- anticompacton pairs, we take the initial data \[u(x,0)=\begin{cases}\frac{4}{3}\cos^{2}\left(\frac{x}{8}\right),&\text{if} \quad-4\pi\leq x\leq 4\pi,\\ 0,&\text{else}.\end{cases} \tag{5.11}\] In Figure 9, wave motion is simulated at \(T=0,10,25\), and \(50\) in the domain of \([-5\pi,25\pi]\) with \(N=400\) cells. The compactons are observed to split from one initial compacton and move to the right over time. A small residue is also developed at the left interface, similar to the LDG scheme [6, 9], but no Gibbs oscillations occur in the non-smooth interface (i.e., the edges of the compactons). The small residue, which appears to be a compacton-anti-compacton pair on the left side of the compacton-packet, is believed to be physical and was also detected by the LDG scheme [6, 9]. Our scheme can differentiate between numerical and physical oscillation without any additional special treatment. Figure 9. Numerical solution of K(2,2) equation with \(N=400\) at \(T=0,10,50,120\) and \(\lambda\Delta x=0.02\) WENO-E in \(x\in[-5\pi,25\pi]\) for initial condition (5.11) of Example 5.5. **Example 5.6**.: Consider the \(K(3,3)\) equation \[u_{t}+(u^{3})_{x}+(u^{3})_{xxx}=0. \tag{5.12}\] In order to observe the interaction between three compactons, we take the initial data as \[u(x,0)=\begin{cases}\sqrt{3}\cos\left(\frac{x-10}{3}\right),&\text{if}\quad|x-10 |\leq\frac{3\pi}{2},\\ \frac{3}{2}\cos\left(\frac{x-25}{3}\right),&\text{if}\quad|x-25|\leq\frac{3 \pi}{2},\\ \sqrt{\frac{3}{2}}\cos\left(\frac{x-40}{3}\right),&\text{if}\quad|x-40|\leq \frac{3\pi}{2},\\ 0,&\text{otherwise}.\end{cases}\] The exact solution is given by \[u(x,t)=\begin{cases}\pm\sqrt{\frac{3c}{2}}\cos\left(\frac{x-ct}{3}\right),& \text{if}\quad|x-ct|\leq\frac{3\pi}{2},\\ 0,&\text{otherwise}.\end{cases}\] Figure 10 shows that as the compactons propagate to the right, three compactons with different speeds (\(c=10,25,40\)) pass through each other during non-linear interaction while maintaining their coherent shapes after the collision. Although the compactons emerge intact from the collision, a minor residue is reflected back Figure 10. Numerical solution of K(3,3) equation with \(N=600\) at \(T=10,25,50\) and \(\lambda\Delta x=0.02\) WENO-E in \(x\in[0,30\pi]\) for Example 5.6. from the collision on the left. 
Similar findings can also be found in [6, 9]. These compactons are found to be not fully elastic, as mentioned in [5] and identified in the original compacton study [7]. The wavelets of the residue become even more apparent with mesh refinement. Based on earlier compacton studies and numerical observations, we conclude that this phenomenon is not numerically induced. Figure 11. Comparison of numerical solutions obtained using WENO-Z and WENO-E schemes with \(N=320\) at \(T=\pi/2\) in \(x\in[-4\pi,4\pi]\), for Example 5.5. Figure 12. Comparison of WENO-Z and WENO-E schemes in terms of \(L^{1}\) and \(L^{\infty}\) errors (in \(\log_{10}\) scale) for Example 5.5 at \(T=\pi/2\). Next, we solve (5.12) subject to the initial data \[u(x,0)=\begin{cases}\sqrt{\frac{3}{2}}\cos\left(\frac{x}{6}\right),&\text{if}\quad-3\pi\leq x\leq 3\pi,\\ 0,&\text{else}.\end{cases} \tag{5.13}\] The results of our numerical simulations are shown in Figure 13. As time evolves, a train of canonical compactons splits from the initial data and moves to the right. At the same time, a rapid oscillation develops at the left interface of the initial data. The oscillations on the left side of the solution persist under refinement, indicating that they are an integral part of the solution and not a numerical artifact. After examining the six aforementioned examples, it becomes evident that there is no universally optimal value for the tension parameter \(\lambda\varDelta x\). Its effectiveness is contingent upon the specific characteristics of the equation under consideration. In linear scenarios, regardless of whether the number of grid points \(N\) is small or large, we observe improved accuracy within the range of \(\lambda\varDelta x\) between \(0.02\) and \(0.04\). Conversely, in non-linear cases, superior accuracy can be achieved by setting \(\lambda\varDelta x\) to \(0.1\) for low values of \(N\) and keeping \(\lambda\varDelta x\) between \(0.02\) and \(0.04\) for high values of \(N\). More details are provided in Appendix A. In summary, it is advisable to employ the range \(\lambda\varDelta x\in[0.02,0.04]\) to ensure the optimal performance of the proposed scheme. Figure 13. Numerical solution of the K(3,3) equation with \(N=400\) at \(T=0,2,6,8\) and \(\lambda\varDelta x=0.02\) (WENO-E) in \(x\in[-6\pi,6\pi]\) for initial condition (5.13) of Example 5.6.

## 6. Conclusion

In this study, a fifth-order WENO scheme based on exponential polynomials for solving non-linear dispersion-type equations has been proposed. The approximation space utilizes an exponential basis with a tension parameter that can be optimized to better fit the characteristics of the initial data, resulting in improved results without spurious oscillations. Numerical tests were conducted for several equations with various initial conditions, including the 1-D and 2-D linear and non-linear KdV equations, the \(K(2,2)\) equation, and the \(K(3,3)\) equation. In direct comparison with the WENO-Z method, the proposed WENO-E method, when executed with an optimized tension parameter, has superior capacity for resolving non-linear dispersion-type equations at higher resolutions.

## Acknowledgements

The author Samala Rathan is supported by NBHM, DAE, India (Ref. No. 02011/46/2021 NBHM(R.P.)/R & D II/14874) and IIPE, Visakhapatnam, India (IRG Grant No. IIPE/DORD/IRG/001).

## Conflict of interest

The authors declare no potential conflicts of interest.
## Data availability

The data that support the findings of this study are available upon reasonable request.

## Appendix A

In this section, we provide detailed tabulated values of the \(L^{1}\)-errors associated with a range of \(\lambda\Delta x\) values in Tables 5, 6, 7, and 8 for Examples 5.1, 5.2, 5.3, and 5.5, respectively. The chosen spectrum of \(\lambda\Delta x\) values is carefully selected to ensure a balanced and adequate representation of the errors, which allows us to showcase the impact of varying \(\lambda\Delta x\). The optimal value of \(\lambda\Delta x\) is highlighted in bold for each example. To identify a universally applicable parameter, we have categorized the \(\lambda\Delta x\) values into three groups, as follows:

* Category A (Cat-A), \(\lambda\Delta x\in[0.02,0.04]\), comprises optimal values that consistently yield accurate results across various grid point counts \(N\).
* Category B (Cat-B), \(\lambda\Delta x\in[0.06,0.1]\), includes values that closely resemble Cat-A at lower grid point counts \(N\), but they neither improve nor maintain accuracy as \(N\) increases, ultimately deteriorating.
* Category C (Cat-C), \(\lambda\Delta x>0.1\), encompasses values that exhibit inconsistency and substantial deviation from the actual solution, numerically distant from Cat-A.

As per these observations, the tension parameter \(\lambda\Delta x\) does not possess a universally optimal value, as its effectiveness relies on the specific characteristics of the equation under consideration. In linear scenarios, whether the number of grid points \(N\) is small or large, we observe enhanced accuracy within the \(\lambda\Delta x\) range of 0.02 to 0.04. Conversely, in nonlinear cases, superior accuracy is achieved by setting \(\lambda\Delta x\) to 0.1 for low \(N\) and adhering to the \(\lambda\Delta x\) range of 0.02 to 0.04 for high \(N\). Overall, one can use Cat-A, i.e., \(\lambda\Delta x\in[0.02,0.04]\), to obtain good performance of the proposed scheme.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(\mathbf{L^{1}}\) & \multicolumn{5}{|c|}{\(\mathbf{N_{x}\times N_{y}}\)} \\ \cline{2-6} \multicolumn{1}{|c|}{**error**} & 10\(\times\)10 & 20\(\times\)20 & 40\(\times\)40 & 80\(\times\)80 & 160\(\times\)160 \\ \hline WENO-Z & 3.4560e-03 & 1.1104e-04 & 3.5318e-06 & 1.1080e-07 & 3.4732e-09 \\ \hline WENO-E-0.01 & 3.4272e-03 & 1.1149e-04 & 3.5338e-06 & 1.1077e-07 & 3.4586e-09 \\ \hline WENO-E-0.02 & 3.4272e-03 & 1.1149e-04 & 3.5329e-06 & 1.1034e-07 & 3.2400e-09 \\ \hline WENO-E-0.03 & 3.4273e-03 & 1.1149e-04 & 3.5292e-06 & 1.0845e-07 & 2.2923e-09 \\ \hline **WENO-E-0.04** & **3.4273e-03** & **1.1147e-04** & **3.5191e-06** & **1.0335e-07** & **2.5903e-10** \\ \hline WENO-E-0.05 & 3.4273e-03 & 1.1143e-04 & 3.4977e-06 & 9.2607e-08 & 5.6387e-09 \\ \hline WENO-E-0.06 & 3.4273e-03 & 1.1136e-04 & 3.4589e-06 & 7.3069e-08 & 1.5421e-08 \\ \hline WENO-E-0.07 & 3.4272e-03 & 1.1124e-04 & 3.3949e-06 & 4.0895e-08 & 3.1531e-08 \\ \hline WENO-E-0.08 & 3.4270e-03 & 1.1105e-04 & 3.2966e-06 & 8.4582e-09 & 5.6242e-08 \\ \hline WENO-E-0.09 & 3.4267e-03 & 1.1078e-04 & 3.1537e-06 & 8.0231e-08 & 9.2177e-08 \\ \hline WENO-E-0.1 & 3.4262e-03 & 1.1039e-04 & 2.9544e-06 & 1.8036e-07 & 1.4231e-07 \\ \hline WENO-E-0.2 & 3.3973e-03 & 9.3414e-05 & 5.7438e-06 & 4.5470e-06 & 2.3286e-06 \\ \hline WENO-E-0.3 & 3.2631e-03 & 1.9555e-05 & 4.3422e-05 & 2.3459e-05 & 1.1798e-05 \\ \hline WENO-E-0.4 & 2.8944e-03 & 1.7936e-04 & 1.4476e-04 & 7.4321e-05 & 3.7262e-05 \\ \hline WENO-E-0.5 & 2.1115e-03 & 5.9823e-04 & 3.5799e-04 & 1.8133e-04 & 9.0833e-05 \\ \hline WENO-E-0.6 & 6.8564e-04 & 1.3574e-03 & 7.4421e-04 & 3.7510e-04 & 1.8783e-04 \\ \hline \end{tabular} \end{table} Table 6. WENO-E schemes in terms of \(L^{1}\)- errors for Example 5.2 over the domain \(\Omega=(0,2\pi)\times(0,2\pi)\) at time \(T=1\). 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \(\mathbf{L^{1}}\) & \multicolumn{5}{|c|}{\(\mathbf{N}\)} \\ \cline{2-7} \multicolumn{1}{|c|}{**error**} & 10 & 20 & 40 & 80 & 160 & 320 & 640 \\ \hline WENO-Z & 1.7456e-03 & 5.6856e-05 & 1.7815e-06 & 5.5652e-08 & 1.7390e-09 & 7.0742e-11 & 1.3677e-10 \\ \hline WENO-E-0.01 & 1.7519e-03 & 5.7101e-05 & 1.7825e-06 & 5.5640e-08 & 1.7317e-09 & 6.7075e-11 & 1.3520e-10 \\ \hline **WENO-E-0.02** & **1.7519e-03** & **5.7100e-05** & **1.7821e-06** & **5.5421e-08** & **1.6221e-09** & **1.3413e-11** & **1.0871e-10** \\ \hline WENO-E-0.03 & 1.7519e-03 & 5.7097e-05 & 1.7802e-06 & 5.4471e-08 & 1.1474e-09 & 2.2517e-10 & 3.5957e-11 \\ \hline WENO-E-0.04 & 1.7520e-03 & 5.7089e-05 & 1.7751e-06 & 5.1911e-08 & 1.3076e-10 & 8.6372e-10 & 3.3622e-10 \\ \hline WENO-E-0.05 & 1.7520e-03 & 5.7069e-05 & 1.7643e-06 & 4.6515e-08 & 2.8258e-09 & 2.2103e-09 & 1.0080e-09 \\ \hline WENO-E-0.06 & 1.7520e-03 & 5.7032e-05 & 1.7447e-06 & 3.6701e-08 & 7.7266e-09 & 4.6589e-09 & 2.2314e-09 \\ \hline WENO-E-0.07 & 1.7520e-03 & 5.6970e-05 & 1.7124e-06 & 2.0541e-08 & 1.5797e-08 & 8.6913e-09 & 4.2465e-09 \\ \hline WENO-E-0.08 & 1.7520e-03 & 5.6874e-05 & 1.6629e-06 & 4.2486e-09 & 2.8176e-08 & 1.4877e-08 & 7.3376e-09 \\ \hline WENO-E-0.09 & 1.7519e-03 & 5.6734e-05 & 1.5908e-06 & 4.0299e-08 & 4.6179e-08 & 2.3872e-08 & 1.1833e-08 \\ \hline WENO-E-0.1 & 1.7516e-03 & 5.6537e-05 & 1.4902e-06 & 9.0593e-08 & 7.1295e-08 & 3.6421e-08 & 1.8105e-08 \\ \hline WENO-E-0.2 & 1.7378e-03 & 4.7842e-05 & 2.8973e-06 & 2.2839e-06 & 1.1666e-06 & 5.8367e-07 & 2.9160e-07 \\ \hline WENO-E-0.3 & 1.6711e-03 & 1.0023e-05 & 2.1903e-05 & 1.1783e-05 & 5.9103e-06 & 2.9538e-06 & 1.4761e-06 \\ \hline WENO-E-0.4 & 1.4862e-03 & 9.1817e-05 & 7.3016e-05 & 3.7329e-05 & 1.8667e-05 & 9.3276e-06 & 4.6615e-06 \\ \hline WENO-E-0.5 & 1.0921e-03 & 3.0621e-04 & 1.8055e-04 & 9.1071e-05 & 4.5503e-05 & 2.2736e-05 & 1.4356e-03 \\ \hline WENO-E-0.6 & 3.7362e-04 & 6.9462e-04 & 3.7529e-04 & 1.8838e-04 & 9.4091e-05 & 4.7011e-05 & 1.4620e+32 \\ \hline \end{tabular} \end{table} Table 5. WENO-E schemes in terms of \(L^{1}\)- errors for Example 5.1 over the domain \(\Omega=[0,2\pi]\) at time \(T=1\). 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(\mathbf{L^{1}}\) & \multicolumn{6}{c|}{\(\mathbf{N}\)} \\ \cline{2-7} **error** & 40 & 80 & 160 & 320 & 640 & 1280 \\ \hline WENO-Z & 3.1369e-02 & 2.3854e-02 & 5.9718e-05 & 1.9878e-06 & 8.4259e-08 & 2.5115e-08 \\ \hline WENO-E-0.01 & 3.0119e-02 & 2.1970e-03 & 6.8675e-05 & 2.0650e-06 & 8.4833e-08 & 2.5116e-08 \\ \hline WENO-E-0.02 & 3.0119e-02 & 2.1970e-03 & 6.8676e-05 & 2.0648e-06 & 8.4750e-08 & 2.5080e-08 \\ \hline WENO-E-0.03 & 3.0120e-02 & 2.1971e-03 & 6.8676e-05 & 2.0642e-06 & 8.4390e-08 & 2.4925e-08 \\ \hline **WENO-E-0.04** & **3.0120e-02** & **2.1972e-03** & **6.8676e-05** & **2.0624e-06** & **8.3420e-08** & **2.4566e-08** \\ \hline WENO-E-0.05 & 3.0120e-02 & 2.1973e-03 & 6.8672e-05 & 2.0584e-06 & 8.1376e-08 & 2.4272e-08 \\ \hline WENO-E-0.06 & 3.0121e-02 & 2.1974e-03 & 6.8663e-05 & 2.0510e-06 & 7.7660e-08 & 2.5510e-08 \\ \hline WENO-E-0.07 & 3.0122e-02 & 2.1975e-03 & 6.8645e-05 & 2.0389e-06 & 7.1545e-08 & 2.8296e-08 \\ \hline WENO-E-0.08 & 3.0122e-02 & 2.1977e-03 & 6.8615e-05 & 2.0201e-06 & 6.2423e-08 & 3.2753e-08 \\ \hline WENO-E-0.09 & 3.0123e-02 & 2.1978e-03 & 6.8571e-05 & 1.9928e-06 & 5.1088e-08 & 3.9343e-08 \\ \hline WENO-E-0.1 & 3.0124e-02 & 2.1979e-03 & 6.8506e-05 & 1.9547e-06 & 4.4961e-08 & 4.8688e-08 \\ \hline WENO-E-0.2 & 3.0136e-02 & 2.1967e-03 & 6.5444e-05 & 7.2544e-07 & 8.6967e-07 & 4.7576e-07 \\ \hline WENO-E-0.3 & 3.0145e-02 & 2.1807e-03 & 5.1841e-05 & 7.1637e-06 & 4.5905e-06 & 2.3462e-06 \\ \hline WENO-E-0.4 & 3.0135e-02 & 2.1291e-03 & 2.2468e-05 & 2.6852e-05 & 1.4602e-05 & 7.3805e-06 \\ \hline WENO-E-0.5 & 3.0085e-02 & 2.0125e-03 & 7.3678e-05 & 6.8390e-05 & 3.5670e-05 & 1.7974e-05 \\ \hline WENO-E-0.6 & 2.9965e-02 & 1.7948e-03 & 2.1361e-04 & 1.4370e-04 & 7.3836e-05 & 4.3984e-05 \\ \hline \end{tabular} \end{table} Table 7. WENO-E schemes in terms of \(L^{1}\)- errors for Example 5.3 over the domain \(\Omega=[-10,10]\) at time \(T=0.5\). 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline
\(\mathbf{L^{1}}\) & \multicolumn{6}{c|}{\(\mathbf{N}\)} \\ \cline{2-7}
**error** & 40 & 80 & 160 & 320 & 640 & 1280 \\ \hline
WENO-Z & 3.1608e-04 & 1.1146e-05 & 4.5757e-07 & 2.0152e-08 & 1.4181e-09 & 1.5958e-10 \\ \hline
WENO-E-0.01 & 3.1654e-04 & 1.1150e-05 & 4.5757e-07 & 2.0149e-08 & 1.4175e-09 & 1.5902e-10 \\ \hline
**WENO-E-0.02** & **3.1654e-04** & **1.1150e-05** & **4.5747e-07** & **2.0105e-08** & **1.4092e-09** & **1.5226e-10** \\ \hline
WENO-E-0.03 & 3.1654e-04 & 1.1149e-05 & 4.5702e-07 & 1.9910e-08 & 1.3782e-09 & 1.4107e-10 \\ \hline
WENO-E-0.04 & 3.1654e-04 & 1.1146e-05 & 4.5581e-07 & 1.9391e-08 & 1.3336e-09 & 2.1720e-10 \\ \hline
WENO-E-0.05 & 3.1653e-04 & 1.1141e-05 & 4.5326e-07 & 1.8373e-08 & 1.4339e-09 & 5.1372e-10 \\ \hline
WENO-E-0.06 & 3.1652e-04 & 1.1131e-05 & 4.4862e-07 & 1.6655e-08 & 2.1872e-09 & 1.0973e-09 \\ \hline
WENO-E-0.07 & 3.1649e-04 & 1.1115e-05 & 4.4098e-07 & 1.4819e-08 & 3.9253e-09 & 2.0677e-09 \\ \hline
WENO-E-0.08 & 3.1644e-04 & 1.1091e-05 & 4.2927e-07 & 1.3932e-08 & 6.8062e-09 & 3.5647e-09 \\ \hline
WENO-E-0.09 & 3.1638e-04 & 1.1055e-05 & 4.1223e-07 & 1.7225e-08 & 1.1160e-08 & 5.7471e-09 \\ \hline
WENO-E-0.1 & 3.1629e-04 & 1.1006e-05 & 3.8847e-07 & 2.5432e-08 & 1.7251e-08 & 8.7921e-09 \\ \hline
WENO-E-0.2 & 3.1212e-04 & 8.8400e-06 & 7.5150e-07 & 5.5608e-07 & 2.8347e-07 & 1.4158e-07 \\ \hline
WENO-E-0.3 & 2.9387e-04 & 2.2379e-06 & 5.4111e-06 & 2.8729e-06 & 1.4366e-06 & 7.1667e-07 \\ \hline
WENO-E-0.4 & 2.4463e-04 & 2.5784e-05 & 1.7949e-05 & 9.1032e-06 & 4.5375e-06 & 2.2632e-06 \\ \hline
WENO-E-0.5 & 1.4122e-04 & 7.8889e-05 & 4.4326e-05 & 2.2210e-05 & 1.1061e-05 & 5.5166e-06 \\ \hline
WENO-E-0.6 & 5.1057e-05 & 1.7505e-04 & 9.2092e-05 & 4.5940e-05 & 2.2871e-05 & 1.1407e-05 \\ \hline
\end{tabular} \end{table} Table 8. WENO-E schemes in terms of \(L^{1}\)- errors for Example 5.5 over the domain \(\Omega=[0,2\pi]\) at time \(T=\pi/2\).
## Appendix B

The matrix \(A\) is given entrywise in terms of \(\Delta x\), \(\lambda\), and \(\sin[\Delta x]\).
2304.00040
A robust deep learning-based damage identification approach for SHM considering missing data
Data-driven methods for Structural Health Monitoring (SHM), which mine hidden structural performance from the correlations among monitored time series data, have received wide attention recently. However, missing data significantly impede the application of such methods. Missing data are a frequently encountered issue in time series data in SHM and many other real-world applications, harming standardized data mining and downstream tasks such as condition assessment. Imputation approaches based on spatiotemporal relations among monitoring data have been developed to handle this issue; however, they add no additional information during imputation. This paper therefore develops a robust damage identification method that accounts for missing data, based on a long short-term memory (LSTM) model and the dropout mechanism within an autoencoder (AE) framework. Input channels are randomly dropped to simulate missing data during training, and the reconstruction error is used as both the loss function and the damage indicator. The quasi-static response (cable tension) of a cable-stayed bridge released in the 1st IPC-SHM is employed to verify the proposed method, and the results show that missing data imputation and damage identification can be implemented together in a unified way.
Fan Deng, Xiaoming Tao, Pengxiang Wei, Shiyin Wei
2023-03-31T18:00:56Z
http://arxiv.org/abs/2304.00040v1
# A robust deep learning-based damage identification approach for SHM considering missing data

###### Abstract

Data-driven methods for Structural Health Monitoring (SHM), which mine hidden structural performance from the correlations among monitored time series data, have received wide attention recently. However, missing data significantly impede the application of such methods. Missing data are a frequently encountered issue in time series data in SHM and many other real-world applications, harming standardized data mining and downstream tasks such as condition assessment. Imputation approaches based on spatiotemporal relations among monitoring data have been developed to handle this issue; however, they add no additional information during imputation. This paper therefore develops a robust damage identification method that accounts for missing data, based on a long short-term memory (LSTM) model and the dropout mechanism within an autoencoder (AE) framework. Input channels are randomly dropped to simulate missing data during training, and the reconstruction error is used as both the loss function and the damage indicator. The quasi-static response (cable tension) of a cable-stayed bridge released in the 1st IPC-SHM is employed to verify the proposed method, and the results show that missing data imputation and damage identification can be implemented together in a unified way.

structural health monitoring, missing data, damage identification, deep learning

## 1 Introduction

Big data from in-service bridges are collected by Structural Health Monitoring (SHM) systems, and the structural performance hidden in these data can be mined with data mining methods. Owing to their promise in mining inherent patterns in data, machine learning and pattern recognition have been widely employed for this purpose and form the data-driven methods in SHM. In these data-driven approaches, structural performance is typically exhibited in the correlation patterns of the data [1]. As the currently most popular tools in artificial intelligence and data mining, artificial neural networks, especially their deep version (deep neural networks), provide an effective and efficient way to approximate the nonlinear mappings required for this correlation mining. Recent years have seen rapid development in applying artificial neural networks to damage identification and condition assessment in the SHM community [2, 3]. However, the proposed methods seldom consider the occasion of missing data, which is a frequently encountered problem in SHM and other real-world applications [4, 5, 6, 7, 8, 9, 10] due to sensor faults, and which makes a well-trained model far from robust. Generally, missing data in SHM can be divided into three types: (a) discrete missing at random time points, (b) continuous missing over a continuous period of time, and (c) continuous missing of a whole channel. Missing data introduce incomplete and nonstandard data formats, thus affecting the data-driven methods for SHM. Missing data imputation has therefore been developed to estimate the missing values from the available data and impute them to form the standard inputs of the data-driven model [11]. Existing approaches can be divided into model-based methods and deep learning-based methods.
Model-based methods develop a model (usually a regression or statistical model) to capture the relationships among the datasets and reconstruct the missing data with the estimated model; examples include compressed sensing (CS)-based, singular value decomposition (SVD)-based, likelihood-based, and K-nearest-neighbor-based approaches. Chen et al. developed a distribution-to-warping-function regression model of the distributions of various channels based on a functional transformation technique [12]. Deep learning-based methods build a black-box model of the spatiotemporal correlations among multiple channels of the monitoring time series and reconstruct the missing values by minimizing the reconstruction error. Various DNN models have been discussed, such as recurrent neural networks (RNN), denoising autoencoders, convolutional neural networks (CNN), and generative adversarial networks (GAN). Tang et al. developed a group-sparsity-aware CNN model for continuous missing data imputation, where the CNN generates the base matrix used in the group-sparsity reconstruction [13]. While most research focuses on missing types (a) and (b), type (c), continuous missing of a whole channel, is rarely considered. Moreover, most imputation methods are introduced as a data pre-processing and normalization step for the downstream tasks, relying on statistical or spatiotemporal correlations among the observed data [3]. However, these methods do not add any additional information during imputation and thus do not help the downstream tasks. For the damage identification task in SHM, we need to infer whether the structure is damaged or not using only the information in the observed data.

Therefore, this paper proposes a robust deep learning-based damage identification approach for SHM that considers missing data. An autoencoder framework is employed to learn the inner relationships among the monitoring data of different channels and to extract hidden representations, and LSTM is employed to construct the encoder and decoder modules. The data reconstruction error is employed as both the loss function and the damage indicator. In the training process, input channels are randomly dropped to simulate missing data, and the LSTM-structured autoencoder model tries to reconstruct the data of all channels. The cable tension monitoring data released in the 1st IPC-SHM (International Project Competition for Structural Health Monitoring) [14] are used for validation.

This paper is organized as follows. Section 2 first introduces the basic modules, i.e., the dropout mechanism, the LSTM cell, and the autoencoder framework, and then presents the model used in this paper that combines these modules. Section 3 introduces the open-source dataset, the preprocessing procedure, and the implementation details of the proposed method. Section 4 discusses the results of the proposed method and a conventional DNN baseline.

## 2 Methodology

### DNN and Dropout mechanism

A conventional deep neural network (DNN) [15] consists of three parts: the input layer, the output layer, and the hidden layers, as illustrated in Fig. 1(a). While the input layer and the output layer each have a single layer of units, there can be several hidden layers, and the number of hidden layers is the 'depth' of the DNN model. Each layer contains a number of units, and the number of units in each layer is the 'width' of the DNN model.
Units of adjacent layers are densely connected, and information flows forward from the lower layers to the upper layers, from the input layer to the output layer; therefore, this type of connection is also called a feedforward neural network. This process can be written as:

\[h_{i}^{l}=\sigma\Big{(}b_{i}^{l}+\sum_{j}w_{ij}^{l}h_{j}^{l-1}\Big{)},\qquad\text{i.e.,}\qquad\mathbf{h}^{l}=\sigma\big{(}\mathbf{b}^{l}+\mathbf{W}^{l}\mathbf{h}^{l-1}\big{)} \tag{1}\]

where \(\mathbf{h}^{l}=\left[h_{1}^{l},\cdots,h_{m}^{l}\right]\) is the hidden state of the \(l\)th hidden layer, \(h_{i}^{l}\) is the hidden state of its \(i\)th unit, and \(\mathbf{h}^{0}=\mathbf{x}\) is the input; \(w_{ij}^{l}\) is the weight connecting the \(j\)th unit in the \((l-1)\)th layer to the \(i\)th unit in the \(l\)th layer, \(b_{i}^{l}\) is the bias, and \(\sigma\left(\cdot\right)\) is the nonlinear activation function. For a DNN with \(L\) hidden layers, the information flow from the input \(\mathbf{x}\) to the prediction \(\hat{\mathbf{y}}\) in the output layer is:

\[\hat{\mathbf{y}}=f\left(\mathbf{x};\mathbf{\theta}\right)=f^{L+1}\left(\mathbf{x}\right)=o\left(f^{L}\left(\cdots f^{2}\left(f^{1}\left(\mathbf{x}\right)\right)\right)\right) \tag{2}\]

where \(f^{l}\left(\cdot\right)\) represents the nonlinear function of the \(l\)th hidden layer, \(o\left(\cdot\right)\) is the output function, and \(\mathbf{\theta}=\left\{\mathbf{W}^{1},\mathbf{b}^{1},\cdots,\mathbf{W}^{L+1},\mathbf{b}^{L+1}\right\}\) is the set of parameters to be learned. Given the target \(\mathbf{y}\), a loss function that evaluates the prediction performance of the DNN model can be defined, e.g., the mean square error (MSE) in Eq. (3):

\[L\left(\hat{y}_{n},y_{n}\right)=L\left(f\left(x_{n};\mathbf{\theta}\right),y_{n}\right)=\frac{1}{2}\sum_{n=1}^{N}\left\|f\left(x_{n};\mathbf{\theta}\right)-y_{n}\right\|^{2} \tag{3}\]

\(L\) is a function of the parameters \(\mathbf{\theta}\), and \(N\) is the total number of samples in the dataset. A DNN model learns to predict the target \(\mathbf{y}\) by adjusting its parameters \(\mathbf{\theta}\) to minimize the loss \(L\left(\hat{\mathbf{y}},\mathbf{y};\mathbf{\theta}\right)\). Gradient descent is usually adopted for the parameter updates: the gradient \(\nabla_{\theta}L\) with respect to the parameters \(\mathbf{\theta}\) is computed and backpropagated to all layers based on Eqs. (2)-(3) and the chain rule. This training process is known as the backpropagation (BP) algorithm. Compared to shallow neural networks with hand-crafted features [16], deep learning is designed to learn more effective representations that extract the nonlinear relationships hidden in data through end-to-end training. However, the parameter space can be extremely large for a deep neural network with dense connections, making training hard and prone to overfitting. The dropout mechanism was thus proposed and is now a standard technique for training deep neural networks [17]. As illustrated in Fig. 1(b), dropout randomly ignores some units during training, setting their outputs to 0 with probability \(p\) to reduce the effective connections in the network [18], which can be expressed as:

\[h^{\prime}=\begin{cases}0&\text{with probability }p\\ \dfrac{h}{1-p}&\text{otherwise}\end{cases} \tag{4}\]

where \(h\) and \(h^{\prime}\) represent the original and the dropped-out hidden state, respectively.
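As a concrete illustration of Eq. (4), the following minimal NumPy sketch applies inverted dropout to a hidden-state (or input-channel) vector; the function name and the use of NumPy are our own choices for illustration and are not part of the authors' implementation.

```python
import numpy as np

def dropout(h, p, training=True, rng=np.random.default_rng()):
    """Inverted dropout as in Eq. (4): zero each unit with probability p and
    rescale the survivors by 1/(1-p), so the expectation is unchanged."""
    if not training or p == 0.0:
        return h
    keep_mask = rng.random(h.shape) >= p          # True with probability 1 - p
    return np.where(keep_mask, h / (1.0 - p), 0.0)

# Example: a vector with 14 units, one per monitored cable channel
h = np.random.randn(14)
h_drop = dropout(h, p=0.5)
```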
It is obvious that the expectation \(\mathrm{E}\left[h^{\prime}\right]=\mathrm{E}\left[h\right]\); therefore, dropout is equivalent to adding unbiased noise to the hidden units. Dropout is usually employed as a regularization to avoid overfitting and can be viewed as a type of ensemble learning [18]. Recent research shows that dropout in the early and late stages of the training process helps to avoid both overfitting and underfitting [19]. In missing data imputation tasks, the dataset may have several randomly missing channels, which sets some units in the input layer to zero; such randomly missing channels will harm a well-trained network. Here we introduce the dropout mechanism in the input layer during training: by randomly ignoring input units to simulate missing data, we enforce the model to learn patterns that are invariant under missing-data conditions.

### LSTM

In deep learning tasks, the input and the output are usually specified according to the application; most DL models differ in the architecture of the hidden layers. Temporal relations in time series data are usually hard to model with a conventional DNN; recurrent connections that map the outputs of an earlier step to a later step are introduced into the hidden layers to model the temporal correlation. A DNN with recurrent connections is a Recurrent Neural Network (RNN), as illustrated in Fig. 2(a), where the red arrows represent the recurrent connections. A deep RNN model with multiple layers is usually drawn in the folded form of Fig. 2(b), where the rectangular blocks represent the hidden layers and the cyclic arrows are the recurrent connections. Long short-term memory (LSTM) has proven to be an efficient framework for modeling sequence data and is therefore employed as the correlation model in this study [20, 21, 22]. In an LSTM model, the hidden layers are LSTM cells with gate units, as illustrated in Fig. 2(c). The updating of an LSTM unit and its parameters is given in Eqs. (5)-(8).

Figure 1: Conventional DNN and dropout (arrow denotes the information flow direction)

\[\begin{split}\mathbf{f}^{(t)}&=\sigma\big{(}\mathbf{W}_{f}\mathbf{h}^{(t-1)}+\mathbf{U}_{f}\mathbf{x}^{(t)}+\mathbf{b}_{f}\big{)}\\ \mathbf{i}^{(t)}&=\sigma\big{(}\mathbf{W}_{i}\mathbf{h}^{(t-1)}+\mathbf{U}_{i}\mathbf{x}^{(t)}+\mathbf{b}_{i}\big{)}\\ \mathbf{o}^{(t)}&=\sigma\big{(}\mathbf{W}_{o}\mathbf{h}^{(t-1)}+\mathbf{U}_{o}\mathbf{x}^{(t)}+\mathbf{b}_{o}\big{)}\end{split} \tag{5}\]

States updating:

\[\tilde{\mathbf{C}}^{(t)}=\tanh\big{(}\mathbf{W}_{c}\mathbf{h}^{(t-1)}+\mathbf{U}_{c}\mathbf{x}^{(t)}+\mathbf{b}_{c}\big{)} \tag{6}\]

Outputs:

\[\mathbf{C}^{(t)}=\mathbf{f}^{(t)}\odot\mathbf{C}^{(t-1)}+\mathbf{i}^{(t)}\odot\tilde{\mathbf{C}}^{(t)} \tag{7}\]

\[\mathbf{h}^{(t)}=\mathbf{o}^{(t)}\odot\tanh\big{(}\mathbf{C}^{(t)}\big{)} \tag{8}\]

where \(\mathbf{f}^{(t)}\), \(\mathbf{i}^{(t)}\), and \(\mathbf{o}^{(t)}\) are the forget, input, and output gates, \(\tilde{\mathbf{C}}^{(t)}\) and \(\mathbf{C}^{(t)}\) are the candidate and cell states, \(\mathbf{h}^{(t)}\) is the hidden state, and \(\odot\) denotes element-wise multiplication.
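For readers who prefer code, a minimal NumPy sketch of one LSTM cell step following Eqs. (5)-(8) is given below. The parameter layout and function names are illustrative assumptions rather than the authors' implementation (the paper's model is built with TensorFlow).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_params(n_in, n_hidden, rng=np.random.default_rng(0)):
    """Random weights W_*, U_* and zero biases b_* for the four gate/cell blocks."""
    gates = ["f", "i", "o", "c"]
    return {
        "W": {g: 0.1 * rng.standard_normal((n_hidden, n_hidden)) for g in gates},
        "U": {g: 0.1 * rng.standard_normal((n_hidden, n_in)) for g in gates},
        "b": {g: np.zeros(n_hidden) for g in gates},
    }

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM step following Eqs. (5)-(8)."""
    W, U, b = params["W"], params["U"], params["b"]
    f_t = sigmoid(W["f"] @ h_prev + U["f"] @ x_t + b["f"])      # forget gate, Eq. (5)
    i_t = sigmoid(W["i"] @ h_prev + U["i"] @ x_t + b["i"])      # input gate,  Eq. (5)
    o_t = sigmoid(W["o"] @ h_prev + U["o"] @ x_t + b["o"])      # output gate, Eq. (5)
    c_tilde = np.tanh(W["c"] @ h_prev + U["c"] @ x_t + b["c"])  # candidate,   Eq. (6)
    c_t = f_t * c_prev + i_t * c_tilde                          # cell state,  Eq. (7)
    h_t = o_t * np.tanh(c_t)                                    # hidden state, Eq. (8)
    return h_t, c_t

# Example: 14 input channels, 32 hidden units (the sizes used later in this paper)
params = init_params(n_in=14, n_hidden=32)
h, c = np.zeros(32), np.zeros(32)
h, c = lstm_step(np.random.randn(14), h, c, params)
```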
### Autoencoder

An autoencoder is an unsupervised encoder-decoder framework that learns to reproduce its input at its output; it is trained by minimizing the difference between the input and the output, also known as the reconstruction error. The popularity of the autoencoder can be attributed to its ability to learn meaningful representations of complex data without requiring explicit supervision. In this way, the compressed representation captures the most important underlying structure of the input data and removes redundant information, which has made the autoencoder a valuable model in many areas. In SHM, the learned compressed representation and the reconstruction errors are used as damage indicators [3]: a well-trained autoencoder on SHM big data can be viewed as a data-driven agent model of the bridge structural performance, and the reconstruction error reflects changes in the underlying structure of the input data and hence changes in the condition of the bridge relative to its initial state.

### The robust damage identification model based on LSTM

The proposed model integrates the modules introduced above and forms an LSTM-structured autoencoder. Two stages are included: an LSTM-structured encoder module that learns the underlying relationships and the hidden compressed representation, and an LSTM-structured decoder module that learns to reconstruct the inputs from the hidden representation. Linear layers are added after the LSTM cells to reshape the outputs of the encoder and the decoder. The inputs are the monitored time series of all investigated channels, some of which are masked (dropped out) to simulate missing data. As discussed above, this is equivalent to adding unbiased noise to the training dataset, so missing-data cases are augmented during training, which makes it easier to train a robust model that accounts for missing data. In this way, the spatiotemporal relationships among all investigated channels are learned from the augmented dataset, and missing data are simply treated as dropout. The proposed model remains unchanged when missing data actually occur, which makes the method robust to missing-data cases.

## 3 Case Study

### Dataset

Cables are among the most critical and vulnerable components of a cable-stayed bridge, suffering cyclic loads and harsh environments [28, 29], and the cable tension force is their most direct condition indicator. The open-source cable tension dataset of an in-service cable-stayed bridge released in the 1st IPC-SHM ([http://www.schm.org.cn/#/IPC-SHM_2020/dataDownload](http://www.schm.org.cn/#/IPC-SHM_2020/dataDownload)) is used for the case study. As shown in Fig. 3, the bridge is a double-cable-plane cable-stayed bridge with 168 cables (84 pairs). All cables are equipped with load cells for monitoring the dynamic cable tension. The monitoring data of 14 cables over 10 days are released, with a sampling frequency of 2 Hz. Cables are numbered from left to right as SJS08 to SJS14 on the upstream side of the bridge and SJX08 to SJX14 on the downstream side.
The released data cover one week in 2006, from 2006-05-13 to 2006-05-19, and three separate days: 2007-12-14, 2009-05-05, and 2011-11-01. One of the 14 released cables is known to be damaged, while missing data occur for 3 cables in the year 2011. A typical monitored cable tension time series at the one-day scale is illustrated in Fig. 4(a), and a 35 s detail is illustrated in Fig. 4(b). A low-frequency trend term induced by temperature and high-frequency peaks induced by vehicles can be observed. Further considering the dead load and noise, the monitored cable tension \(T_{total}\) can be written as \(T_{total}=T_{d}+T_{e}+T_{v}+T_{n}\), where \(T_{d}\), \(T_{e}\), \(T_{v}\), and \(T_{n}\) represent the effects of the dead load, the temperature, the vehicles, and the noise, respectively. Under the moving concentrated force assumption, the vehicle-induced cable tension can be expressed as:

\[\begin{bmatrix}T_{v,1}\\ T_{v,2}\\ \vdots\\ T_{v,M}\end{bmatrix}=\begin{bmatrix}d_{11}&d_{12}&\cdots&d_{1N}\\ d_{21}&d_{22}&\cdots&d_{2N}\\ \vdots&\vdots&\ddots&\vdots\\ d_{M1}&d_{M2}&\cdots&d_{MN}\end{bmatrix}\begin{bmatrix}F_{1}\\ F_{2}\\ \vdots\\ F_{N}\end{bmatrix} \tag{9}\]

where \(T_{v,i}\) is the vehicle-induced cable tension term of the \(i\)th cable, \(D=\left[d_{mn}\right]_{M\times N}\) is the discretized flexibility matrix, and \(F_{n}\) is the \(n\)th moving force acting at the \(n\)th discretized position. Considering the cable tension of the \(i\)th cable under a single vehicle,

\[T_{v,i}\left(t\right)=g_{i}\cdot F_{n}=\eta_{i}\left(x_{n}\left(t\right),y_{n}\left(t\right)\right)\cdot F_{n} \tag{10}\]

where \(\left(x_{n}\left(t\right),y_{n}\left(t\right)\right)\) represents the discretized location of the vehicle on the girder and \(g_{i}=\eta_{i}\left(x_{n}\left(t\right),y_{n}\left(t\right)\right)\) is the influence surface, which is determined by the relative stiffness of the stay cables and can be decoupled into influence lines in the longitudinal and transverse directions, \(\eta_{i}\left(x_{n}\left(t\right),y_{n}\left(t\right)\right)=\eta_{l,i}\left(x_{n}\left(t\right)\right)\cdot\eta_{t,i}\left(y_{n}\left(t\right)\right)\) [1]; this provides a single-vehicle cable tension ratio as a damage indicator. Considering multiple vehicles on the bridge, which is more common in reality, the vehicle-induced terms of the \(i\)th and \(j\)th cables are:

\[\begin{split}T_{v,i}\left(t\right)&=\sum_{n}F_{n}\cdot\eta_{l,i}\left(x_{n}\left(t\right)\right)\cdot\eta_{t,i}\left(y_{n}\left(t\right)\right)\\ T_{v,j}\left(t\right)&=\sum_{n}F_{n}\cdot\eta_{l,j}\left(x_{n}\left(t\right)\right)\cdot\eta_{t,j}\left(y_{n}\left(t\right)\right)\end{split} \tag{11}\]

### Preprocessing

Figure 3: The investigated cable-stayed bridge (red cables denote the 14 cables in the released dataset)

The vehicle-induced term of the cable tension is valuable since it provides load-test-style response information. Thus, it is necessary to decouple the multi-source effects and obtain the vehicle-induced term. However, due to the non-stationarity of this term, as can be seen in Fig. 5(a,b), it is usually not easy to obtain an ideal estimate, and smaller segmentations are suggested in this case [1, 30].
Considering the sparsity of vehicles on the bridge, a large percentage of the observed data points lie near the temperature-induced trend term; a violin plot is therefore employed as the detrending technique in the preprocessing procedure. A violin plot is a type of data visualization that combines the features of a box plot and a kernel density plot. It displays the distribution of a continuous variable across different levels of a categorical variable, as illustrated in Fig. 4(b). By visualizing the distribution of the data, it is easy to see that most observed data points are near the trend; thus the median value of a specified segmentation, obtained from the violin plot, is used as the trend term. Fig. 4(b) shows the trend term obtained by this method for a 30-second segmentation, and Fig. 4(d) shows the trend term at the whole-day scale obtained by segmentation and interpolation, under the assumption that the temperature-induced trend is smooth.

Figure 4: Multi-source composition of a typical cable tension time series and the trend term based on violin plots

### Implementation details

For the dataset of vehicle-induced cable tension \(D=\{T_{v,j}\left(t\right)\}\), the vehicle loads are shared across cables; therefore, the relations among the channels depend only on their influence surfaces and relative stiffness. Consequently, the model learned from the data of a specified period forms an agent model of that period; if the model no longer fits in another period, the structural condition can be inferred to have changed. The reconstruction error predicted by a baseline model pretrained on a specific period (i.e., the early days of bridge operation) can thus be used as the damage indicator. The proposed model learns the spatiotemporal relationships among the monitored data of all channels with input and output nodes:

\[T_{v,j}\left(t\right)=AE_{\rho}\left(T_{v,j}\left(t+\tau\right)\right) \tag{12}\]

The pseudo-code of the training procedure is given below. The LSTM input is a sequence of length \(T\): \(\left\{\mathbf{x}^{\left(t_{n}+1\right)},\mathbf{x}^{\left(t_{n}+2\right)},\cdots,\mathbf{x}^{\left(t_{n}+T\right)}\right\}\), where the input \(\mathbf{x}^{\left(t_{n}\right)}=\left\{T_{v,j}^{b}\left(t_{n}\right)\right\}\in R^{\text{batch\_size}\times M}\) collects a batch of the vehicle-induced cable tensions of the \(M\) cables at step \(t_{n}\); the output \(\hat{\mathbf{y}}^{\left(t\right)}\) is the corresponding prediction, and in the unsupervised learning setting the target outputs equal the inputs, \(\mathbf{y}^{\left(t\right)}=\mathbf{x}^{\left(t\right)}\). The Adam optimization algorithm is employed to train the model.

\begin{tabular}{l} \hline \hline Pseudo-code: LSTM-structured autoencoder training \\ \hline 1. Initialize parameters \(\mathbf{\theta}\); \\ 2. Specify batch\_size, the input length \(T\), and the learning rate \(\alpha\); \\ 3. For \(k=1,\cdots,max\_iteration\): \\ 4. \quad Randomly generate batch\_size integers \(t_{n}\) from \(\left[1,N-T\right]\); \\ 5. \quad Generate the normalized training set \(\mathbf{x}=\left\{T_{v,j}^{b}\left(t_{n}:t_{n}+T\right)\right\}\in R^{\text{batch\_size}\times T\times M}\), set \(target\_y=\mathbf{x}\), \\ \quad and randomly set \(M_{1}\in\left[1,M\right]\) channel dimensions of \(\mathbf{x}\) to 0 to obtain \(train\_x=\mathbf{\bar{x}}\); \\ 6. \quad Update the model parameters using the Adam optimizer. \\ \hline \hline \end{tabular}

In this task, the numbers of input and output units are both 14, corresponding to the 14 channels of the monitoring dataset.
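To make the training procedure above concrete, the sketch below builds the LSTM-structured autoencoder and masks random input channels for one batch. It follows the settings reported in the implementation details (14 channels, 3 LSTM layers with 32 units, a 5-dimensional code, \(T=2400\), Adam with \(\alpha=0.005\)), but the use of tf.keras, the function names, and the exact layer arrangement are illustrative assumptions, not the authors' released code.

```python
import numpy as np
import tensorflow as tf

M, T, CODE = 14, 2400, 5   # channels, sequence length, hidden-code size

def build_lstm_autoencoder():
    inp = tf.keras.Input(shape=(T, M))
    h = inp
    for _ in range(3):                                   # 3-layer LSTM encoder, 32 units each
        h = tf.keras.layers.LSTM(32, return_sequences=True)(h)
    code = tf.keras.layers.Dense(CODE)(h)                # linear layer 1: R^32 -> R^5
    d = code
    for _ in range(3):                                   # 3-layer LSTM decoder, 32 units each
        d = tf.keras.layers.LSTM(32, return_sequences=True)(d)
    out = tf.keras.layers.Dense(M)(d)                    # linear layer 2: R^32 -> R^14
    return tf.keras.Model(inp, out)

def mask_random_channels(x, n_missing, rng=np.random.default_rng()):
    """Simulate missing data: zero out n_missing randomly chosen channels of a batch."""
    x = x.copy()
    dropped = rng.choice(M, size=n_missing, replace=False)
    x[..., dropped] = 0.0
    return x

model = build_lstm_autoencoder()
model.compile(optimizer=tf.keras.optimizers.Adam(5e-3), loss="mse")
# One training step: the target is the full batch, the input is the channel-masked batch.
# x_batch is a (batch_size, T, M) array of normalized vehicle-induced cable tensions.
# model.train_on_batch(mask_random_channels(x_batch, n_missing=7), x_batch)
```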
The LSTM-structured encoder and decoder cells both have 3 layers with 32 units in each layer. Linear layers \(layer1:\mathbb{R}^{32}\rightarrow\mathbb{R}^{5}\) and \(layer2:\mathbb{R}^{32}\rightarrow\mathbb{R}^{14}\) are added to the encoder and the decoder, respectively, to match the shapes of the hidden layer and the output layer. The other hyperparameters are set as \(T=2400\), \(max\_iteration\)=100,000, \(batch\_size=30\), and \(\alpha\)=0.005, and the MSE loss is employed. The whole model is built on TensorFlow and the Adam optimizer is chosen. As a comparison model, an autoencoder consisting of a 3-layer DNN encoder with [64, 32, 5] hidden units and a 3-layer DNN decoder with [5, 32, 14] hidden units is also developed.

## 4 Results and Discussions

Between 0 and 12 cables are simulated as dropped (corresponding to 0-85.7% missing rates) in the training procedure. Figure 5(a) shows the loss curves of the DNN-structured and LSTM-structured autoencoder models trained with a 50% data loss rate (i.e., 7 randomly chosen cables dropped out). Figure 5(b) shows the training errors of the DNN and LSTM networks under different data loss rates. It can be observed from Figure 5(b) that, in the case of data loss, the training errors of the DNN and LSTM networks increase exponentially with the data loss rate (i.e., the number of missing cables \(k\)). Compared with the DNN-structured AE model, the LSTM-structured AE model can form a memory of historical data within the network, thereby extracting spatiotemporal correlation features of the cable forces in the cable group. Therefore, under missing-data conditions, the performance of the LSTM-structured AE model, which exploits the spatiotemporal correlation of the cable tensions, far exceeds that of the DNN-structured AE model, which exploits the spatial correlation only.

Fig. 6 shows the prediction results of the pre-trained LSTM network for the cable tensions on November 1, 2011. While the cable tensions of SJX08, SJS13, and SJX13 are genuinely missing, the cable tensions of SJS08, SJS09, SJS11, SJX11, SJX12, SJS13, and SJX13 are additionally dropped out, so eight cable forces are missing from the input data in total. The predictions of the pretrained model are illustrated in Fig. 6, indicating that the LSTM-structured AE model can reconstruct the missing data and diagnose cable health based on the pretrained agent model of the cable group's initial performance. The real cable tension of cable SJS11 in Fig. 6(c) is lower than the benchmark prediction, indicating a decrease in the actual carrying capacity of the cable and thus cable damage, which is consistent with the cable state evaluation results released in [14].

Figure 5: Comparison of DNN-structured and LSTM-structured AE models

Fig. 7 shows the diagnosis results for the cable group based on the 3-\(\sigma\) criterion. The results indicate that only cable SJS11 is damaged, based on the standardized prediction errors calculated using the pre-trained LSTM-structured AE model, while the other cables are in healthy condition, which is consistent with the cable state evaluation results in [14].
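A minimal sketch of the 3-\(\sigma\) criterion used for Fig. 7 is given below; the per-cable standardization of the reconstruction error is our reading of the procedure, and the variable names are illustrative.

```python
import numpy as np

def three_sigma_flags(errors_now, errors_baseline):
    """Flag a cable as damaged if its current mean reconstruction error exceeds
    the baseline mean by more than 3 baseline standard deviations (per cable)."""
    mu = errors_baseline.mean(axis=0)      # per-cable baseline mean error
    sigma = errors_baseline.std(axis=0)    # per-cable baseline std of error
    z = (errors_now.mean(axis=0) - mu) / sigma
    return z > 3.0                         # boolean damage flag per cable

# errors_*: arrays of shape (n_samples, 14) of per-channel reconstruction errors
```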
## 5 Conclusion

This paper discussed the missing data issue in SHM and the use of the dropout mechanism to account for missing data in data-driven SHM approaches. The results show that missing data can be treated as dropped input units when developing a robust model for damage identification and other tasks, instead of following the standard flow of missing data imputation as pre-processing followed by downstream tasks. The following conclusions can also be drawn. The unsupervised representations learned by the LSTM-structured AE model form a baseline agent model of the correlations within a group of cables, and the reconstruction error can be used as the damage indicator. The baseline model of the cable group establishes a many-to-many spatiotemporal mapping among the cables in the group, which improves the efficiency of structural health monitoring data processing and health diagnosis. The unsupervised representation of the LSTM-structured AE model and the dropout mechanism employed in this paper loosen the data-quality requirements (missing-data occasions) while achieving an accurate and robust baseline agent model. With this model, missing data imputation and damage identification can be conducted together in a unified way.

Figure 6: Prediction results of DNN and LSTM in the 50% data loss rate case (2011-11-01)

Figure 7: Damage identification based on the reconstruction error

## Acknowledgements

This work is financially supported by the National Natural Science Foundation of China (Grant Nos. 52208311, 51921006 and 52192661). The authors would like to thank the organizers of the International Project Competition for SHM (IPC-SHM 2020), ANCRiSST, Harbin Institute of Technology (China), and the University of Illinois at Urbana-Champaign (USA) for generously providing the invaluable data from actual structures.
2307.16372
LP-MusicCaps: LLM-Based Pseudo Music Captioning
Automatic music captioning, which generates natural language descriptions for given music tracks, holds significant potential for enhancing the understanding and organization of large volumes of musical data. Despite its importance, researchers face challenges due to the costly and time-consuming collection process of existing music-language datasets, which are limited in size. To address this data scarcity issue, we propose the use of large language models (LLMs) to artificially generate the description sentences from large-scale tag datasets. This results in approximately 2.2M captions paired with 0.5M audio clips. We term it Large Language Model based Pseudo music caption dataset, shortly, LP-MusicCaps. We conduct a systemic evaluation of the large-scale music captioning dataset with various quantitative evaluation metrics used in the field of natural language processing as well as human evaluation. In addition, we trained a transformer-based music captioning model with the dataset and evaluated it under zero-shot and transfer-learning settings. The results demonstrate that our proposed approach outperforms the supervised baseline model.
SeungHeon Doh, Keunwoo Choi, Jongpil Lee, Juhan Nam
2023-07-31T02:32:02Z
http://arxiv.org/abs/2307.16372v1
# LP-MusicCaps: LLM-based Pseudo Music Captioning ###### Abstract Automatic music captioning, which generates natural language descriptions for given music tracks, holds significant potential for enhancing the understanding and organization of large volumes of musical data. Despite its importance, researchers face challenges due to the costly and time-consuming collection process of existing music-language datasets, which are limited in size. To address this data scarcity issue, we propose the use of large language models (LLMs) to artificially generate the description sentences from large-scale tag datasets. This results in approximately 2.2M captions paired with 0.5M audio clips. We term it **L**arge Language Model based **P**seudo music caption dataset, shortly, **LP-MusicCaps**. We conduct a systemic evaluation of the large-scale music captioning dataset with various quantitative evaluation metrics used in the field of natural language processing as well as human evaluation. In addition, we trained a transformer-based music captioning model with the dataset and evaluated it under zero-shot and transfer-learning settings. The results demonstrate that our proposed approach outperforms the supervised baseline model. 1 Footnote 1: Our dataset and codes are available at [https://github.com/seungheondoh/lp-music-caps](https://github.com/seungheondoh/lp-music-caps) SeungHeon Doh, Keunwoo Choi, Jongpil Lee, Juhan Nam. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). **Attribution**: SeungHeon Doh, Keunwoo Choi, Jongpil Lee, Juhan Nam, LP-MusicCaps: LLM-Based Pseudo Music Captioning1, in _Proc. of the 24th Int. Society for Music Information Retrieval Conf._, Milan, Italy, 2023. Footnote 1: footnotemark: ## 1 Introduction Music captioning is a music information retrieval (MIR) task of generating natural language descriptions of given music tracks. The text descriptions are usually sentences, distinguishing the task from other music semantic understanding tasks such as music tagging. Recently, there have been some progress in music captioning including track-level captioning [1, 2] and playlist-level captioning [3, 4, 5, 6]. These approaches usually utilize a deep encoder-decoder framework which is originally developed for neural machine translation [7]. Choi _et al._[3] used a pre-trained music tagging model as a music encoder and an RNN layer initialized with pre-trained word embeddings for text generation. Manco _et al._[1] introduced a temporal attention mechanism for alignment between audio and text by pairing a pre-trained harmonic CNN encoder [8] with an LSTM layer. Gabbolini _et al._[5] generated playlist titles and descriptions using pre-trained GPT-2 [9]. Currently, the primary challenge of track-level music captioning is the scarcity of large-scale public datasets. Manco _et al._[1] used private production music datasets. Huang _et al._[10] also used a private dataset with 44M music-text pairs on YouTube, but this approach is hardly reproducible or affordable for other researchers. To address this data issue, a community-driven data collection initiative has been proposed [11]. As of now, the only publicly available dataset for track-level music captioning is MusicCaps [12], which includes high-quality music descriptions from ten musicians. However, it is limited to 5521 music-caption pairs as it was originally created as an evaluation set for a text-prompt music generator. 
With the scale of the aforementioned datasets, it remains difficult to train a music captioning model successfully. A workaround for this situation is to use music tagging datasets and generate sentences with tag concatenation [13, 2] or a prompt template [14]. As they rely on tagging datasets, however, the tag-to-sentence approaches inherit the limitations of tagging datasets, for example, their high false-negative rates [15].

Figure 1: The generation process of pseudo captions by feeding a large language model with instructions and manually-annotated labels.

Tagging datasets also have some typical issues that text data have, for example, synonyms, punctuation, and singular/plural inconsistencies. Without proper treatment, these can limit the performance of the corresponding music captioning models. A potential solution is to use strong language models, i.e., large language models (LLMs). LLMs refer to the recent large-scale models with over a billion parameters that exhibit strong few-shot and zero-shot performance [9, 16]. Large language models are usually trained with text data from various domains such as Wikipedia, GitHub, chat logs, medical articles, law articles, books, and crawled web pages [17]. When successfully trained, they demonstrate an understanding of words in various domains [9]. There have been similar and successful use cases of LLMs for general audio understanding [18] and music generation [19]. Motivated by the recent success of LLMs, we propose creating a music captioning dataset by applying LLMs carefully to tagging datasets. Our goal is to obtain captions that are i) semantically consistent with the provided tags, ii) grammatically correct, and iii) equipped with a clean and enriched vocabulary. This dataset-level approach is pragmatic rather than sophisticated; it alleviates the difficulty of music captioning tasks not by theory or model, but by data. The aforementioned ambiguous aspects of the music captioning task are addressed by powerful LLMs at a reasonable cost [20], considering the training cost music researchers would spend otherwise. Once the creation is complete, it is straightforward to train music captioning models by supervised learning. There are some existing works on pseudo-labeling using language models. Huang _et al._ [19] introduced the MuLaMCap dataset, which consists of 400k music-caption pairs generated using a large language model and a music-language joint embedding model. They utilized a large language model (LaMDA [21]) to generate 4M sentences using 150k song metadata as input in the format of {title} by {artist}. Then the text and music-audio joint embedding model, MuLan, calculates the similarity between music and generated captions, annotating pairs with high similarity [10]. However, it is not possible to reproduce or evaluate this work, as the adopted language model and the final music-audio embedding model are not publicly available. Moreover, using metadata has some issues - popularity bias, limited coverage, and low reliability - as we discuss later in Section 2.1. Wu _et al._ [22] introduce keyword-to-caption augmentation (K2C Aug) to generate captions based on the ground truth tags of audio clips in AudioSet. They used a pre-trained T5 model without any instruction. Finally, Mei _et al._ [18] introduce WavCaps, a 400k audio captioning dataset created using ChatGPT [23]. However, previous approaches only reported task performance and did not directly evaluate the quality of the generated captions.
We propose a solution in this paper with three-fold key contribution. First, we propose an LLM-based approach to generate a music captioning dataset, **LP-MusicCaps**. Second, we propose a systemic evaluation scheme for music captions generated by LLMs. Third, we demonstrate that models trained on LP-MusicCaps perform well in both zero-shot and transfer learning scenarios, justifying the use of LLM-based pseudo-music captions. ## 2 Pseudo Caption Generation Using Large Language Models In this section, we introduce how music-specific pseudo captions are created using a large language model in the proposed method. ### Large Language Model for Data Generation We first take multi-label tags from existing music tagging datasets. The list of tags are appended with a carefully written task instruction as an input (prompt) to a large language model. The model then generates and returns sentences that (may) describe the music in a way the task instruction conditions. Table 1 shows examples of generated captions according to multi-label tags and task instructions. For the language model, we choose GPT-3.5 Turbo [23] for its strong performance in various tasks. During its training, it was first trained with a large corpus and immense computing power, then fine-tuned by reinforcement learning with human feedback (RLHF) [24] for better interaction with given instruction. As a result, GPT-3.5 Turbo demonstrates state-of-the-art zero-shot abilities in understanding, reasoning, and generating human-like responses to natural language inputs. Since LLMs contain a wide range of information, music captions may be generated based on some famous musical entities such as the artist name or album name. However, LLMs may generate inaccurate text in a confident tone which is hard to detect without ground truth. This issue, known as hallucination, can be a fun aspect when using LLMs for creative purposes [25]. However, hallucination should be avoided in an application like ours as the resulting captions should be factual. Therefore, we do not use any metadata unlike a previous work [19]. We also added a question to measure hallucination in the proposed evaluation scheme. ### Task Instruction Design Our proposed caption generation follows the formulation: \(\tilde{y}_{\text{cap}}=f_{\text{LLM}}(y_{\text{tag}},i)\), where \(y_{\text{tag}}\) and \(\tilde{y}_{\text{cap}}\) refer to the multi-label tag and the generated caption, respectively, and \(i\) is the task instruction provided. Given that the output can vary based on the task instruction, even with the same model and input, task instructions become a crucial aspect of data generation. Therefore, we define four different tasks and generate captions accordingly. **Writing**: _Write a song description sentence including the following attributes._ {input tags} **Summary**: _Write a single sentence that summarizes a song with the following attributes. Don't write the artist name or album name._ {input tags} **Paraphrase**: _Write a song description sentence including the following attributes. Creative paraphrasing is acceptable._ {input tags} **Attribute Prediction**: _Write the answer as a Python dictionary with new_attribute and description as keys. For new_attribute, write new attributes that show high co-occurrence with the following attributes. For description, write a song description sentence including the following attributes and new attributes._ {input tags} In every instruction, we add 'include / with the following attributes' to prevent hallucination. 
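To make the generation recipe concrete, the sketch below assembles one of the task instructions with a tag list and sends it to GPT-3.5 Turbo. It assumes the 2023-era `openai` Python client (pre-1.0 API) and a placeholder API key, and it illustrates the formulation \(\tilde{y}_{\text{cap}}=f_{\text{LLM}}(y_{\text{tag}},i)\) rather than reproducing the authors' released pipeline.

```python
import openai  # assumes the 2023-era openai-python client (openai < 1.0)

openai.api_key = "YOUR_API_KEY"  # placeholder

INSTRUCTIONS = {
    "writing": "Write a song description sentence including the following attributes.",
    "summary": ("Write a single sentence that summarizes a song with the following "
                "attributes. Don't write the artist name or album name."),
    "paraphrase": ("Write a song description sentence including the following attributes. "
                   "Creative paraphrasing is acceptable."),
    "attribute_prediction": (
        "Write the answer as a Python dictionary with new_attribute and description as keys. "
        "For new_attribute, write new attributes that show high co-occurrence with the "
        "following attributes. For description, write a song description sentence including "
        "the following attributes and new attributes."),
}

def generate_pseudo_caption(tags, task="writing", model="gpt-3.5-turbo"):
    """y_cap = f_LLM(y_tag, i): append the tag list to the chosen task instruction i."""
    prompt = f"{INSTRUCTIONS[task]} {', '.join(tags)}"
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

# Example with tags from Table 1:
# generate_pseudo_caption(["video game theme", "no singer", "instrumental"], task="summary")
```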
The "Writing" task instruction is a simple prompt that uses tags to generate a sentence. The "Summary" task instruction aims to compress information into a short length. The "Paraphrase" task instruction expands the vocabulary. Finally, the "Attribute Prediction" task instruction predicts new tags based on tag co-occurrence in large corpora (i.e. the training data of GPT-3.5 Turbo), which is expected to address the issue of high false-negative rates in existing tagging datasets while mitigating the risk of hallucination. In this instruction, 'new attributes' exists to bridge the description and the input, and we only use the 'description' as caption. ## 3 Evaluation of Pseudo Captions It is crucial to ensure the quality of generated captions, especially since they are supposed to be used as ground truth. In this section, we introduce a holistic evaluation scheme that includes objective and subjective assessment - and its result on the captions from the proposed method. ### Objective Evaluation We conduct evaluation on the generated captions using MusicCaps dataset [12]. It has audio (\(x\)), tag list (\(y_{\text{tag}}\)), and ground truth caption (\(y_{\text{cap}}\)). The pseudo captions (\(\tilde{y}_{\text{cap}}\)) are generated with four pre-defined instructions as explained in Section 2.2 for all items in the evaluation split. During the evaluation, the generated captions are compared to the ground truth captions with respect to \(n\)-gram, neural metrics. We also report diversity metrics. Following the previous work [5], we measure four \(n\)-gram metrics [26, 27, 28]: BLEU1 to 4 (B1, B2, B3, B4), METEOR (M), and ROUGE-L (R-L). They are all based on \(n\)-gram precision and recall between the ground truth and generated captions. These metrics capture different aspects of the caption quality. BLEU and METEOR focus on \(n\)-gram overlap between the generated and ground truth captions, while ROUGE-L measures the longest common subsequence between the two. In addition, we use BERT-Score (BERT-S) based on pre-trained BERT embeddings to represent and match the tokens in the ground truth with respect to the generated caption [29]. By computing the similarity between the BERT embeddings of each token, BERT-Score can better capture the semantic similarity between the generated and ground truth captions than \(n\)-gram metrics; as it is more robust to synonyms, paraphrasing, and word order variations. Finally, we evaluate the diversity of the generated captions by measuring how many different words are used. \(novel_{v}\) indicates the percentage of new vocabulary in generated captions that are not among the training vocabulary. _Vocab_ is the number of unique words used in all the generated captions. It is worth noting that diversity metrics are generally considered as subsidiaries and do not capture the overall quality of the generated captions. ### Subjective Evaluation Following the previous work [12], we set up an A-vs-B human rating task, in which a participant is presented with a 10-second single music clip and two text descriptions. We randomly selected 240 music samples from the MusicCaps evaluation dataset. Since the research goal is to generate \begin{table} \begin{tabular}{p{227.6pt}} \hline Input tags \\ video game theme, no singer, instrumental, analog sounding, small keyboard, beatboxing, playful, cheerful, groovy \\ \hline \hline Result of “Writing” Instruction \\ This instrumental track has a joyful and playful vibe, perfect for a video game theme. 
With no singer, the analog-sounding music features a small keyboard and beatboxing, creating a groovy and cheerful atmosphere. \\ \hline Result of “Summary” Instruction \\ A cheerful and groovy instrumental track with playful beatboxing and an analog sound, featuring a small keyboard and video game-inspired theme. \\ \hline Result of “Paraphrase” Instruction \\ This upbeat instrumental tune captures the essence of a fun-filled video game theme, incorporating analog-sounding beats, playful beatboxing rhythms, and a groovy small keyboard melody, all without the need for a singer. \\ \hline Result of “Attribute Prediction” Instruction \\ \{“new_attribute”: [”8-bit sound”, “chiptune style”, “retro vibe”], \\ “description”: “This instrumental tune is straight out of a video game with its analog sounding melodies and small keyboard tinkles. \\ Beatboxing adds a playful element to the groovy, cheerful vibe. Reminiscent of classic 8-bit sound and chiptune style, this retro vibe is sure to put a smile on your face.” \\ \hline \hline Ground Truth \\ This is a video game theme cover. The theme belongs to the Super Mario franchise. The main theme is being played on an analog sounding small keyboard. There is an added rhythmic background of beatboxing in this version. The atmosphere is playful. This piece could be used in the background of arcade gaming social media content. \\ \hline \hline \end{tabular} \end{table} Table 1: An example of generated captions from MusicCaps dataset. music captions that can be used as pseudo-ground truth, one description is always fixed to the ground truth and the other is chosen from 5 types of generated captions including the K2C Augmentation [22] and the four proposed instruction methods. This yields up to 1200 (= 240 x 5) questions. We hired 24 participants who are music researchers or professionals in the music industry. Each of them rated 20 randomly selected questions. As a result, we collected a total of 480 ratings. The rater was asked to evaluate caption quality on two different aspects: (Q1) _More True Positive_: which caption describes the music with more accurate attributes? (Q2) _Less False Positive_: which caption describes the music less wrong? For example, if a method produces long and diverse sentences with many music attributes, it may be advantageous for Q1 but disadvantageous for Q2. Conversely, if a method conservatively produces short sentences with few music attributes, it may be advantageous for Q2 but disadvantageous for Q1. We determine the ranking of conditions by counting the number of wins, ties, and loses in the pairwise tests. ### Results We compare our LLM-based caption generation with two template-based methods (tag concatenation, prompt template2) and K2C augmentation [22]. In Table 2, we present the captioning result for MusicCaps [12] evaluation set. When comparing our proposed method with existing methods, we observe significant differences in \(n\)-gram metrics. This is because the tag concatenation fails to complete the sentence structure. In the case of K2C Augmentation, due to the absence of instruction, the input tag is excluded from the generated caption, or a sentence unrelated to the song description sentence is created. In contrast, the template-based model shows improved performance as the musical context exists in the template. We next consider diversity metric with BERT-Score. Our proposed method shows higher values in BERT-Score while generating diverse vocabularies. 
The higher BERT-Score indicates that the newly created vocabulary does not harm the music semantics. Footnote 2: Template example: the music is characterized by {input tags} Comparing the proposed task instructions with one another, we can observe that each instruction plays a different role. "Writing" shows high \(n\)-gram performance as it faithfully uses the input tags to generate captions. "Summary" has the smallest average number of tokens due to its compression of information, but it shows competitive performance in ROUGE-L, which is specialized for summarization, as well as the highest BERT-Score. "Paraphrase" generates many synonyms, resulting in a large vocabulary size and the use of novel vocabulary. "Attribute Prediction" predicts new tags based on the co-occurrence of tags. This instruction shows lower performance in BLEU but competitive results in METEOR, which utilizes a thesaurus, such as WordNet, to consider the accuracy scores of words with similar meanings, indicating that the newly predicted tags are semantically similar to the ground truth. Figure 2 shows the subjective A-vs-B test results. Each \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & & & \multicolumn{6}{c}{Supervised Metrics} & \multicolumn{2}{c}{Diversity Metrics} & \multicolumn{2}{c}{Length} \\ \cline{3-13} Methods & LM & Params & B1\(\uparrow\) & B2\(\uparrow\) & B3\(\uparrow\) & B4\(\uparrow\) & M\(\uparrow\) & R-L\(\uparrow\) & BERT-S\(\uparrow\) & Vocab\(\uparrow\) & Novel\({}_{v}\)\(\uparrow\) & Avg.Token \\ \hline Baseline & & & & & & & & & & & & \\ Tag Concat [2, 13] & - & - & 20.25 & 13.57 & 8.64 & 5.42 & 23.24 & 19.52 & 86.24 & 3506 & 46.92 & 20.6\(\pm\)11.2 \\ Template [14] & - & - & 25.41 & 16.15 & 10.00 & 6.15 & 25.57 & 21.36 & 87.92 & 3507 & 46.93 & 25.6\(\pm\)11.2 \\ K2C Aug. [22] & T5 & 220M & 6.07 & 3.01 & 1.58 & 0.85 & 14.23 & 17.92 & 86.33 & 3760 & **67.66** & 14.7\(\pm\)5.1 \\ \hline Proposed Instruction & & & & & & & & & & & & \\ Writing & GPT3.5 & 175B+ & **36.84** & **19.85** & **11.37** & **6.74** & 31.44 & 25.36 & 89.26 & 5521 & 56.17 & 44.4\(\pm\)17.3 \\ Summary & GPT3.5 & 175B+ & 26.12 & 14.58 & 8.80 & 5.52 & 27.58 & **25.83** & **89.88** & 4198 & 49.52 & 28.6\(\pm\)10.7 \\ Paraphrase & GPT3.5 & 175B+ & 36.51 & 18.73 & 10.33 & 5.87 & 30.36 & 23.40 & 88.71 & 6165 & 59.95 & 47.9\(\pm\)18.7 \\ Attribute Prediction & GPT3.5 & 175B+ & 35.26 & 18.16 & 9.69 & 5.41 & **34.09** & 23.19 & 88.56 & **6995** & 63.16 & 66.2\(\pm\)21.6 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of existing pseudo caption generation methods and the proposed method. LM stands for the language model. Avg.Token stands for the average number of tokens per caption. Figure 2: A-vs-B test results. Each method is compared to ground truth in terms of having more true positives and fewer false positives. The proposed methods (b, c, d, e) show comparable **win+tie** performance to ground truth. method is compared to the ground truth in terms of having more true positives (Q1) and fewer false positives (Q2). For the first question, compared to the baseline K2C augmentation, the proposed methods using the instructions show an overwhelmingly higher _win+tie_ score. This indicates the importance of music-specific instructions when utilizing LLMs. In particular, "Paraphrase" and "Attribute Prediction" achieve high _win_ scores by incorporating new information that is different from the existing vocabulary.
In the second question, all caption generation methods except "Attribute Prediction" show higher _win+tie_ scores than _lose_ scores. This supports the trustworthiness of LLM-based caption generation, as it shows a false-positive rate similar to or lower than that of the ground truth. With its longest average length, "Attribute Prediction" turns out to be 'too creative' and shows a slightly higher false-positive rate than the ground truth. ## 4 DATASET: LP-MusicCaps Based on the proposed pseudo caption generation method, we introduce LP-MusicCaps, an LLM-based Pseudo music caption dataset. We construct the music-to-caption pairs using three existing multi-label tag datasets and four task instructions. The data sources are MusicCaps [12], MagnaTagATune [31], and the Million Song Dataset [32] ECALS subset [13]. We respectively refer to them as MC, MTT, and MSD. MC contains 5,521 music examples,3 which are labeled with 13,219 unique aspects written by music experts. MTT [31] consists of 26k music clips from 5,223 unique songs, with tags covering genre, instrument, vocal, mood, perceptual tempo, origin, and sonority features. We used the full 188-tag vocabulary and did not generate captions for tracks without associated tags (which decreases the set to 22k). MSD consists of 0.52 million 30-second clips with a 1,054-tag vocabulary [13]. The tag vocabulary covers various categories including genre, style, instrument, vocal, mood, theme, and culture. The three datasets use an average of 10.7 / 3.3 / 10.2 labels per music clip, respectively, for generating pseudo captions. Footnote 3: We only use 5495 out of the total due to the loss of 26 data samples. Table 3 provides a comparison of statistics between the LP-MusicCaps family and other audio-caption pair datasets. When comparing the two domains, AudioCaps [30] and MusicCaps have high-quality human-annotated captions, but they have fewer captions and a shorter audio duration. When comparing large-scale datasets, the music domain lacks available datasets compared to the general audio domain (such as LAION-Audio [22] and WavCaps [18]). Although MuLaMCap has an overwhelming amount of annotated captions, it is not publicly available. In contrast, LP-MusicCaps is publicly accessible and provided at various scales. LP-MusicCaps-MC has a similar caption length to manually written captions while having four times more captions per audio. LP-MusicCaps-MTT is a medium-sized dataset with audio download links, and LP-MusicCaps-MSD has the largest total audio duration among the music-domain caption datasets. ## 5 Automatic Music Captioning We trained a music captioning model and evaluated it under zero-shot and transfer-learning settings. This section reports the experimental results. ### Encoder-Decoder Model We used a cross-modal encoder-decoder transformer architecture that has achieved outstanding results on various natural language processing tasks [33], lyrics interpretation [34], and speech recognition [35], as shown in Figure 3. Similar to Whisper [35], the encoder takes a log-mel spectrogram as input and processes it with six convolution layers with a filter width of 3 and the GELU [36] activation function. With the exception of the first layer, each convolution layer has a stride of two. The output of the convolution layers is combined with the sinusoidal position encoding and then processed by the encoder transformer blocks. Following the BART\({}_{\text{base}}\) architecture, our encoder and decoder both have a width of 768 and 6 transformer blocks.
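As a rough illustration of this convolutional front-end (not the authors' implementation), the PyTorch sketch below follows the description above — kernel width 3, GELU, stride 2 except for the first layer, and sinusoidal position encoding — while the channel sizes, the padding and the final width of 768 are assumptions made for the example.

```python
# Sketch of the convolutional audio front-end described above; channel sizes,
# padding and the final width of 768 are illustrative assumptions.
import math
import torch
import torch.nn as nn


def sinusoidal_positions(length: int, dim: int) -> torch.Tensor:
    pos = torch.arange(length).unsqueeze(1)
    div = torch.exp(torch.arange(0, dim, 2) * (-math.log(10000.0) / dim))
    pe = torch.zeros(length, dim)
    pe[:, 0::2], pe[:, 1::2] = torch.sin(pos * div), torch.cos(pos * div)
    return pe


class AudioFrontEnd(nn.Module):
    def __init__(self, n_mels: int = 128, d_model: int = 768, n_layers: int = 6):
        super().__init__()
        layers, in_ch = [], n_mels
        for i in range(n_layers):
            stride = 1 if i == 0 else 2          # stride 2 except the first layer
            layers += [nn.Conv1d(in_ch, d_model, kernel_size=3,
                                 stride=stride, padding=1), nn.GELU()]
            in_ch = d_model
        self.conv = nn.Sequential(*layers)

    def forward(self, log_mel: torch.Tensor) -> torch.Tensor:
        # log_mel: (batch, n_mels, time) -> (batch, time', d_model)
        x = self.conv(log_mel).transpose(1, 2)
        return x + sinusoidal_positions(x.size(1), x.size(2))


features = AudioFrontEnd()(torch.randn(2, 128, 1000))  # 10 s of 10 ms frames
print(features.shape)                                  # (2, 32, 768)
```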
The decoder Figure 3: A cross-modal encoder-decoder architecture. \begin{table} \begin{tabular}{l r r r r} \hline Dataset & \# items & Duration (h) & C/A & Avg. Token \\ \hline General Audio Domain & & & & \\ AudioCaps [30] & 51k & 144.9 & 1 & 9.0\(\pm\)N/A \\ LAION-Audio [22] & 630k & 4325.4 & 1-2 & N/A \\ WavCaps [18] & 403k & 7568.9 & 1 & 7.8\(\pm\)N/A \\ \hline Music Domain & & & & \\ MusicCaps [12] & 6k & 15.3 & 1 & 48.9\(\pm\)17.3 \\ MuLaMCap* [19] & 393k & 1091.0 & 12 & N/A \\ **LP-MusicCaps-MC** & 6k & 15.3 & 4 & 44.9\(\pm\)21.3 \\ **LP-MusicCaps-MTT** & 22k & 180.3 & 4 & 24.8\(\pm\)13.6 \\ **LP-MusicCaps-MSD** & 514k & 4283.1 & 4 & 37.3\(\pm\)26.8 \\ \hline \end{tabular} \end{table} Table 3: Comparison of audio-caption pair datasets. C/A stands for the number of captions per audio. *Although we include MuLaMCap in the table for comparison, it is not publicly accessible. processes tokenized text captions using transformer blocks with a multi-head attention module that includes a mask to hide future tokens for causality. The music and caption representations are fed into the cross-modal attention layer, and the head of the language model in the decoder predicts the next token autoregressively using the cross-entropy loss, formulated as \(\mathcal{L}=-\sum_{t=1}^{T}\log p_{\theta}(y_{t}\mid y_{1:t-1},x)\), where \(x\) is the paired audio clip and \(y_{t}\) is the ground truth token at time \(t\) in a caption of length \(T\). ### Experimental Setup To evaluate the impact of the proposed dataset on the music captioning task, we compare a supervised model trained on the MusicCaps [12] training split and a pre-trained model trained on the LP-MusicCaps-MSD dataset. For the pre-trained model, we perform both a zero-shot captioning task that does not use any MusicCaps [12] data and a fine-tuning task that updates the model using the MusicCaps [12] training split. For comparison with other pseudo caption generation methods, we report results on baseline models trained with the same architecture and amount of audio, but different pseudo captions. In addition to all the metrics we used in Section 3.1, we compute \(\textit{Novel}_{c}\), the percentage of generated captions that were not present in the training set [37]. It measures whether the captioning model is simply copying the training data or not. For all the experiments, the input of the encoder is a 10-second audio signal at a 16 kHz sampling rate. It is converted to a log-scaled mel spectrogram with 128 mel bins, a 1024-point FFT with a Hann window, and a hop size of 10 ms. All models are optimized using AdamW with a learning rate of 1e-4. We use a cosine learning rate decay to zero after a warmup over the first 1000 updates. For pre-training, we use a batch size of 256, and the models are trained for 32,768 updates. We adopt balanced sampling [38], which uniformly samples an anchor tag first and then selects an annotated item. For supervised and transfer learning, we use a batch size of 64 and 100 epochs. We use beam search with 5 beams for the inference of all models. ### Results When comparing within zero-shot captioning models, the model trained on the proposed LP-MusicCaps dataset shows strong performance in general. The model using tag concatenation shows the lowest performance as it fails to generate musical sentences. The model using a prompt template demonstrates a slightly higher BERT-Score, while still exhibiting poor performance in terms of \(n\)-gram metrics due to its limited vocabulary.
The model using K2C augmentation outperforms the other two methods but still falls short due to its lack of a musical context. In general, zero-shot models do not perform as well as the supervised baseline on most of the metrics, with a few exceptions. Among the transfer captioning models, the model with LP-MusicCaps pre-training achieves strong performance overall, winning on BERT-Score and most of the \(n\)-gram metrics. It is noteworthy that our proposed model shows a meaningful increase in BERT-Score compared to the supervised model. This improvement is likely a result of successful semantic understanding rather than word-to-word matching. Moreover, the improvement in \(\text{Novel}_{c}\) demonstrates that the LP-MusicCaps model can generate new captions instead of repeating phrases from the training dataset. This advantage is observed in both the zero-shot and the transfer-learning settings. ## 6 Conclusion We proposed a tag-to-pseudo caption generation approach with large language models to address the data scarcity issue in automatic music captioning. We conducted a systematic evaluation of the LLM-based augmentation, resulting in the creation of LP-MusicCaps, a large-scale pseudo music caption dataset. We also trained a music captioning model with LP-MusicCaps and showed improved generalization. Our proposed approach has the potential to significantly reduce the cost and time required for music-language dataset collection and to facilitate further research in the field of connecting music and language, including representation learning, captioning, and generation. However, further collaboration with the community and human evaluation is essential to enhance the quality and accuracy of the generated captions. Additionally, we believe that exploring the use of LLMs for other topics in music information retrieval and music recommendation could lead to novel and exciting applications.
\begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{Supervised Metrics} & \multicolumn{3}{c}{Diversity Metrics} & Length \\ \cline{2-11} Model & B1\(\uparrow\) & B2\(\uparrow\) & B3\(\uparrow\) & B4\(\uparrow\) & M\(\uparrow\) & R-L\(\uparrow\) & BERT-S\(\uparrow\) & Vocab\(\uparrow\) & Novel\({}_{v}\)\(\uparrow\) & Novel\({}_{c}\)\(\uparrow\) & Avg.Token \\ \hline Baseline & & & & & & & & & & & \\ Supervised Model & 28.51 & 13.76 & 7.59 & 4.79 & 20.62 & 19.22 & 87.05 & 2240 & 0.54 & 69.00 & 46.7\(\pm\)16.5 \\ \hline Zero-shot Captioning & & & & & & & & & & & \\ Tag Concat [13] & 4.33 & 0.84 & 0.26 & 0.00 & 3.10 & 2.01 & 79.30 & 802 & 46.38 & 100.00 & 23.8\(\pm\)12.1 \\ Template [14] & 7.22 & 1.58 & 0.46 & 0.00 & 5.28 & 6.81 & 81.69 & 787 & 45.24 & 100.00 & 25.8\(\pm\)12.4 \\ K2C-Aug [22] & 7.67 & 2.10 & 0.49 & 0.10 & 7.94 & 11.37 & 82.99 & **2718** & **81.97** & 100.00 & 19.9\(\pm\)7.6 \\ LP-MusicCaps **(Ours)** & **19.77** & **6.70** & **2.17** & **0.79** & **12.88** & **13.03** & **84.51** & 1686 & 47.21 & 100.00 & 45.3\(\pm\)28.0 \\ \hline Transfer Learning & & & & & & & & & & & \\ Tag Concat [13] & 28.65 & 14.68 & 8.68 & 5.82 & 21.88 & 21.31 & 87.67 & 1637 & 3.30 & 96.07 & 41.8\(\pm\)14.3 \\ Template [14] & 28.41 & 14.49 & 8.59 & 5.78 & 21.88 & 21.25 & 87.72 & 1545 & **3.62** & **96.77** & 41.1\(\pm\)13.2 \\ K2C-Aug [22] & **29.50** & **14.99** & 8.70 & 5.73 & 21.97 & 20.92 & 87.50 & **2259** & 1.42 & 84.95 & 44.4\(\pm\)15.0 \\ LP-MusicCaps **(Ours)** & 29.09 & 14.87 & **8.93** & **6.05** & **22.39** & **21.49** & **87.78** & 1695 & 1.47 & 96.06 & 42.5\(\pm\)14.3 \\ \hline \hline \end{tabular} \end{table} Table 4: Music captioning results on the MusicCaps eval-set. Avg.Token stands for the average number of tokens per caption.
2307.16414
Bifurcation analysis of a conceptual model for the Atlantic Meridional Overturning Circulation
The Atlantic Meridional Overturning Circulation (AMOC) distributes heat and salt into the Northern Hemisphere via a warm surface current toward the subpolar North Atlantic, where water sinks and returns southwards as a deep cold current. There is substantial evidence that the AMOC has slowed down over the last century. We introduce a conceptual box model for the evolution of salinity and temperature on the surface of the North Atlantic Ocean, subject to the influx of meltwater from the Greenland ice sheets. Our model, which extends a model due to Welander, describes the interaction between a surface box and a deep-water box of constant temperature and salinity, which may be convective or non-convective, depending on the density difference. Its two main parameters $\mu$ and $\eta$ describe the influx of freshwater and the threshold density between the two boxes, respectively. We use bifurcation theory to analyse two cases of the model: instantaneous switching between convective or non-convective interaction, where the system is piecewise-smooth (PWS), and the full smooth model with more gradual switching. For the PWS model we derive analytical expressions for all bifurcations. The resulting bifurcation diagram in the $(\mu,\eta)$-plane identifies all regions of possible dynamics, which we show as phase portraits - both at typical parameter points, as well as at the different transitions between them. We also present the bifurcation diagram for the case of smooth switching and show how it arises from that of the PWS case. In this way, we determine exactly where one finds bistability and self-sustained oscillations of the AMOC in both versions of the model. In particular, our results show that oscillations between temperature and salinity on the surface North Atlantic Ocean disappear completely when the transition between the convective and non-convective regimes is too slow.
John Bailie, Bernd Krauskopf
2023-07-31T05:48:02Z
http://arxiv.org/abs/2307.16414v1
# Bifurcation analysis of a conceptual model for the Atlantic Meridional Overturning Circulation ###### Abstract The Atlantic Meridional Overturning Circulation (AMOC) distributes heat and salt into the Northern Hemisphere via a warm surface current toward the subpolar North Atlantic, where water sinks and returns southwards as a deep cold current. There is substantial evidence that the AMOC has slowed down over the last century. We introduce a conceptual box model for the evolution of salinity and temperature on the surface of the North Atlantic Ocean, subject to the influx of meltwater from the Greenland ice sheets. Our model, which extends a model due to Welander, describes the interaction between a surface box and a deep-water box of constant temperature and salinity, which may be convective or non-convective, depending on the density difference. Its two main parameters \(\mu\) and \(\eta\) describe the influx of freshwater and the threshold density between the two boxes, respectively. We use tools from bifurcation theory to analyse two cases of the model: the limiting case of instantaneous switching between convective or non-convective interaction, where the system is piecewise-smooth (PWS), and the full smooth model with more gradual switching. For the PWS model we perform a complete bifurcation analysis by deriving analytical expressions for all bifurcations. The resulting bifurcation diagram in the \((\mu,\eta)\)-plane identifies all regions of possible dynamics, which we show as phase portraits -- both at typical parameter points, as well as at the different transitions between them. We also present the bifurcation diagram for the case of smooth switching and show how it arises from that of the PWS case. In this way, we determine exactly where one finds bistability and self-sustained oscillations of the AMOC in both versions of the model. In particular, our results show that oscillations between temperature and salinity on the surface North Atlantic Ocean disappear completely when the transition between the convective and non-convective regimes is too slow. ## 1 Introduction The Atlantic Meridional Overturning Circulation (AMOC) is a large conveyor belt of water that spans the entire Atlantic Ocean. Light surface currents transport relatively warm and saline waters northward to high latitudes. Here, the water becomes denser, leading to downward convection and mixing with the deep ocean, and subsequent formation of deepwater masses. A deep current then transports this water back to lower latitudes, where it upwells to the surface, thus closing the circulation loop [1]. The strength of the AMOC is governed by the interplay between two proposed upwelling mechanisms [2, 3]. The first perspective is that turbulent mixing across surfaces of equal density results in the upwelling of deepwater to the surface ocean in low latitudes [4, 5]. The second perspective suggests that strong circumpolar winds induce upwelling in the South Atlantic Ocean [6]. Regardless of the mechanism, the process of deepwater formation is crucial in determining the shape and strength of the associated return current -- making it a critical factor for the stability of the AMOC. This paper focuses on the deepwater formation sites in the North Atlantic. 
Specifically, as illustrated in Figure 1(a), the convection of highly saline water from the surface to the deep ocean in the Labrador and Nordic seas forms the North Atlantic Deep Water (NADW); it has an associated return current referred to as the NADW overturning cell. Several climate processes, such as salt rejection and atmospheric cooling, facilitate this convection by preconditioning the subpolar North Atlantic to have relatively high salinity [7]. Furthermore, an advective process transports saline water to the North Atlantic and, thus, stimulates the convection and formation of the NADW [8]. An inherent negative feedback loop forms: weaker convection results in a smaller NADW and, consequently, a weaker overturning cell. This weaker cell then advects less salt to the North Atlantic, further weakening the convection. Evidence from proxy and sea surface temperature measurements indicate that the AMOC has weakened over the twentieth century [10]. There was also a particularly abrupt change in overturning strength during the 1970s [11] attributed to a large-scale influx of fresh water into the North Atlantic; this is known as the Great Salinity Anomaly and is linked to Arctic sea-ice export [12]. A weakened AMOC has significant consequences on the Earth's climate system since it leads to reduced northern heat transport, which lowers the oceanic and atmospheric temperature in the Northern hemisphere [13] via a weakening or even shutdown of the northern deepwater formation. Some significant implications drawn from simulations are a widespread cooling in Europe [10], the possible collapse of the North Atlantic plankton stocks [14], and a rise in the sea level [15]. As a result of external environmental factors, the AMOC is likely to weaken further, and a complete shutdown of the deep water formation in the Labrador Sea is a possibility [10]. In particular, meltwater from the melting Greenland ice sheets contributes to a large influx of freshwater into the subpolar North Atlantic [16]. As freshwater Figure 1: Panel (a) shows a simplified sketch of the North Atlantic component of the AMOC, inspired by [9]. The surface flow is displayed in red, and the NADW overturning cell in blue. Deepwater formation sites are denoted L and N in the Labrador sea and the Nordic seas, respectively. Panel (b) shows the two-box model setup for the interaction between surface water and cold deep water at the sites L and N. is strictly non-saline, it dilutes the ocean surface water by lowering its salinity, thus, inhibiting the deep-water formation and, hence, the NADW overturning cell strength. Of particular interest in this context is the modelling of the underlying deep water formation itself -- with the aim of understanding the possible long-term behaviour of the AMOC in response to freshwater influx. Climate models form a hierarchy of complexity, and the choice of model depends on the nature of the question that is being asked. We study here a conceptual model of low complexity for investigating the AMOC in regard to deep water formation -- specifically, from the class of box models that consider only a few variables in a relatively small number of interacting boxes, each representing a body of water of concern. While they are not designed to be used for prediction, box models are simple enough to be amenable to mathematical analysis, including with tools from dynamical systems theory [17]. 
The stability of the AMOC was first investigated by Stommel with a two-box model [18]; it considers the circulation between a subtropical box and a subpolar box, where a capillary flow represents the advection of water between the two boxes. Stommel's model features three qualitatively different regimes. In the first regime, the AMOC is driven by salinity differences between the boxes, and surface currents move water toward the equator. Temperature differences are the main driver in the second regime, and the surface currents move water toward the poles. The final regime features bistability, where the AMOC may tip to either of the described equilibrium states. Stommel laid the foundation for several advective models, which add more boxes and physical processes; see, for example, [19, 20, 21]. A two-box model presented by Welander [22] attempts to describe self-sustained oscillations of temperature and salinity on the ocean surface in the presence of external forcing. The boxes interact by exchanging heat and salt via a mixing process. When the water in the surface box is sufficiently dense, the mixing is convective (strong). When the water in the boxes has comparable density, on the other hand, the mixing is non-convective (weak) and may happen via several climate processes, such as double-diffusion [23]. In this setup, an atmospheric basin with fixed properties interacts with the surface box, which is modelled by Newton's transfer law. The model by Welander is described for two cases: when the transition between convective and non-convective mixing is modelled as a continuous change and, alternatively, when it is instantaneous and discontinuous. In both cases, self-sustained oscillations are observed, which are characterised by a convective and a non-convective phase. Welander's model was re-examined by Leifeld [24] with the aim of formalising the previous analysis by using a modern approach of piecewise-smooth (PWS) dynamical systems. They undertook a preliminary stability analysis and made a first comparison between the smooth and non-smooth models; however, this work falls short of describing the full bifurcation picture and, to the best of our knowledge, there is as yet no complete analysis of the Welander model, nor any closely related models. ### The adjusted Welander model We take this as the starting point of our study of an _adjusted Welander model_ that also considers the impact of a freshwater influx into the North Atlantic ocean. Following on from work in [25], where the external forcing enters in the form of Newton's transfer law, we consider here a direct freshwater flux that dilutes the salinity in a surface ocean box at the North Atlantic, which is coupled to a box of deep water of constant lower temperature and salinity. As is illustrated by the schematic in Figure 1(b), the model takes the form of a planar system of ordinary differential equations for temperature \(T\) and salinity \(S\) in the surface ocean box, which is given by \[\begin{split}\frac{dT}{dt}&=-\gamma(T-T_{a})-k_{ \varepsilon}(\rho)(T-T_{0}),\\ \frac{dS}{dt}&=\frac{F_{0}}{H}S_{0}-k_{\varepsilon}( \rho)(S-S_{0}).\end{split} \tag{1}\] The atmosphere externally drives the surface ocean box to a thermal equilibrium \(T_{a}\) at rate \(\gamma\), which is Newton's transfer law. The salinity, on the other hand, is directly forced by the freshwater flux \(F_{0}\) at the rate \(\frac{F_{0}}{H}S_{0}\), where \(H\) is the depth of the surface ocean box. 
Moreover, \(T_{0}\) and \(S_{0}\) are the (fixed) temperature and salinity of the deep-ocean box that drive \(T\) and \(S\), respectively, as given by the convective exchange function \(k_{\varepsilon}\). This function determines the coefficient for Newton's transfer law and takes as its argument the density \(\rho\) of the surface ocean box given (in linear approximation) by \[\frac{\rho}{\rho_{0}}=1+\alpha_{S}(S-S_{0})-\alpha_{T}(T-T_{0}). \tag{2}\] Here, the constant \(\rho_{0}\) is the density of the bottom box, and the coefficients \(\alpha_{S}\) and \(\alpha_{T}\) are, respectively, the saline expansion and thermal compression constants [17]. The convective exchange function is a key ingredient in (1) and describes the transition between the two regimes when the vertical mixing between the two boxes is non-convective at rate \(k_{1}>0\) and when it is convective at rate \(k_{2}>k_{1}\). This transition is modeled in its general form as \[k_{\varepsilon}(\rho)=k_{1}+\mathcal{H}_{\varepsilon}(\rho-\rho_{0}-g^{*})(k_ {2}-k_{1}), \tag{3}\] where \(\mathcal{H}_{\varepsilon}\) is a suitable switching function from zero to one, whose switching time depends on the switching-time parameter \(\varepsilon\). Hence, when the density difference \(\rho-\rho_{0}\) is (sufficiently) greater than the density threshold \(g^{*}\), mixing between the boxes is mainly convective; on the other hand, it is mainly non-convective when \(\rho-\rho_{0}\) is (sufficiently) smaller than \(g^{*}\). Different switching functions have been used in the literature [24, 20], including those based on the arctan function. In this paper, we define \(\mathcal{H}_{\varepsilon}\) as \[\mathcal{H}_{\varepsilon}(u)=\frac{1}{2}\left(1+\tanh\left(\frac{u}{ \varepsilon}\right)\right). \tag{4}\] Figure 2: The switching functions \(\mathcal{H}_{\varepsilon}(\rho-\rho_{0}-g^{*})\) with \(\varepsilon=0.1\) in panel (a) and \(\varepsilon=0\) in panel (b). Blue shading indicates mixing is mainly non-convective, and red shading that mixing is mainly convective. In the smooth transition region in panel (a), the colour changes from blue, via white, to red. Note that \({\cal H}_{\varepsilon}\) has a switching time of order \(\varepsilon>0\) and maximal rate of switching \(\frac{1}{\varepsilon}\) given by the derivative of \({\cal H}_{\varepsilon}\) at zero. Moreover, the limiting case for \(\varepsilon\to 0\) is an instantaneous switch, represented by the Heaviside function \({\cal H}_{0}\). Figure 2 shows the resulting convective exchange functions \(k_{\varepsilon}(\rho)\) from (3) for \(\varepsilon=0.1\) and \(\varepsilon=0\). The adjusted Welander model in the form (1) has ten parameters, making a direct analysis impractical. The first step in our analysis is to non-dimensionalise the system by introducing rescaled temperature, salinity and time \[x=\frac{T-T_{0}}{T_{a}-T_{0}},\quad y=\frac{\alpha_{S}(S-S_{0})}{\alpha_{T}(T_ {a}-T_{0})},\quad\tau=\gamma t, \tag{5}\] and parameters \[\kappa_{i}=\frac{k_{i}}{\gamma},\quad\mu=\frac{F_{0}S_{0}\alpha_{S}}{\gamma \alpha_{T}(T_{a}-T_{0})H}\quad\eta=\frac{g^{*}(\kappa_{2}-\kappa_{1})}{\gamma \alpha_{T}(T_{a}-T_{0})\rho_{0}}. \tag{6}\] This transforms (1) into \[\begin{split}&\dot{x}=1-(1+\kappa_{1}+{\cal H}_{\varepsilon}(y-x- \eta)(\kappa_{2}-\kappa_{1}))x,\\ &\dot{y}=\mu-(\kappa_{1}+{\cal H}_{\varepsilon}(y-x-\eta)( \kappa_{2}-\kappa_{1}))y,\end{split} \tag{7}\] where the dot represents the derivative with respect to the rescaled time. 
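System (7) is straightforward to integrate numerically. The minimal Python sketch below is only an illustration: the parameter point \((\mu,\eta)\), the switching time \(\varepsilon\) and the initial condition are chosen for demonstration, while \(\kappa_{1}=0.1\) and \(\kappa_{2}=1.0\) are the mixing rates also used in the analysis below.

```python
# A small sketch integrating the non-dimensionalised model (7); the parameter
# point (mu, eta), eps and the initial condition are chosen for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

kappa1, kappa2 = 0.1, 1.0        # vertical mixing rates (as fixed in Section 2)
mu, eta, eps = 0.25, -0.2, 0.1

def H(u, eps):
    """Smooth switching function (4); eps -> 0 approaches the Heaviside limit."""
    return 0.5 * (1.0 + np.tanh(u / eps))

def welander(t, z):
    x, y = z
    k = kappa1 + H(y - x - eta, eps) * (kappa2 - kappa1)   # convective exchange
    return [1.0 - (1.0 + k) * x, mu - k * y]

sol = solve_ivp(welander, (0.0, 200.0), [0.8, 0.3],
                rtol=1e-8, atol=1e-10, dense_output=True)
print(sol.y[:, -1])   # final (x, y); sol.sol(t) can be sampled to plot time series
```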
### Outline of the work The adjusted Welander model in the form (7) is our central object of study. We first perform in Section 2 a (non-smooth) bifurcation analysis for the limiting case of system (7) with the Heaviside switching function \({\cal H}_{0}\). Specifically, we determine and catalogue all of the possible dynamics by presenting analytical expressions for all codimension-one and codimension-two bifurcations; the corresponding proofs and derivations can be found in Appendix A. The rescaled freshwater flux \(\mu\) and density threshold \(\eta\) are the bifurcation parameters, and we show the complete bifurcation diagram in the \((\mu,\eta)\)-plane for a reasonable choice of the vertical mixing coefficients \(0<\kappa_{1}<\kappa_{2}\). Moreover, we present representative phase portraits in the \((x,y)\)-plane for all eight open regions, for the different types of codimension-one transitions between them, as well as at the five codimension-two points that organise the bifurcation diagram. In particular, we identify the parameter regime where the system exhibits bistability between states where deep-water convection is either substantial or shut down, which is characteristic behaviour of several models of different complexity [8]. Moreover, we determine the parameter regime with self-sustained relaxation-type oscillations that have been observed in [25] and also in Welander's original work [20]. In addition to this earlier work, we determine this region analytically and clarify the nature of the different possible transitions to/from this oscillatory regime. Our results also show that the bifurcation diagram in the \((\mu,\eta)\)-plane is topologically the same for any fixed \(0<\kappa_{1}<\kappa_{2}\). Section 3 is then concerned with the smooth case of system (7) with \({\cal H}_{\varepsilon}\) for \(\varepsilon>0\). Here we first present the bifurcation diagram in the \((\mu,\eta)\)-plane for \(\varepsilon=0.1\) (for the same choice of \(0<\kappa_{1}<\kappa_{2}\)). This requires computing the relevant bifurcation curves by making use of established bifurcation theory [26] in conjunction with the continuation software package AUTO-07p [27]. We focus here on the main parameter regimes, especially those that feature bistability and self-sustained oscillations, for which we show representative phase portraits. We then present a partial bifurcation analysis in \((\mu,\eta,\varepsilon)\)-space that clarifies the convergence of the bifurcation diagram in the \((\mu,\eta)\)-plane as \(\varepsilon\) approaches \(0\). Moreover, we show that there is a codimension-three bifurcation at a quite low value of the switching-time parameter \(\varepsilon\), at which the region with self-sustained oscillations completely disappears from the \((\mu,\eta)\)-plane. In other words, the switching between the regimes with strong convective mixing and with weak non-convective mixing needs to be sufficiently fast for relaxation-type oscillations to occur in the adjusted Welander model (7). In particular, this shows the relevance of the non-smooth limiting system with \(\mathcal{H}_{0}\) for explaining this oscillatory behaviour. In the final Section 4 we summarise our findings, briefly discuss their significance for the dynamics of the AMOC, and point out some directions for future work.
## 2 Bifurcation analysis of the PWS model for \(\varepsilon=0\) In the limiting case of an instantaneous transition with the transition function \(\mathcal{H}_{0}\) from (4), system (7) reduces to the piecewise-smooth linear Filippov system \[\binom{\dot{x}}{\dot{y}}=\bigg{\{}\begin{array}{cc}f_{1}(x,y), &y<x+\eta\\ f_{2}(x,y),&y>x+\eta\end{array} \tag{8}\] with \[f_{i}(x,y)=\binom{1-(1+\kappa_{i})x}{\mu-\kappa_{i}y}. \tag{9}\] The switching manifold \[\Sigma=\{(x,y)\in\mathbb{R}^{2},\ \ y=x+\eta\} \tag{10}\] is a straight line that partitions the phase space of (8) into the open regions \[R_{1} =\{(x,y)\in\mathbb{R}^{2},\ \ y<x+\eta\}, \tag{11}\] \[R_{2} =\{(x,y)\in\mathbb{R}^{2},\ \ y>x+\eta\}, \tag{12}\] where \(f_{1}\) and \(f_{2}\) apply, respectively. We now perform a bifurcation analysis of the piecewise-smooth AMOC model (8). To this end, we use tools from the bifurcation theory for this class of non-smooth systems from the relevant literature [28, 29, 30], which we largely follow also in terms of notation and where more details can be found. More specifically, we determine analytic expressions for all (non-smooth) bifurcations, which is possible because of the simple expression for the switching manifold, and the fact that \(f_{1}\) and \(f_{2}\) are linear. We present these results in the form of propositions, whose proofs can be found in Appendix A. The associated curves of codimension-one bifurcations divide the \((\mu,\eta)\)-plane into eight open regions, denoted \(\mathrm{I}-\mathrm{VIII}\). We also present the corresponding phase portraits in the \((\mu,\eta)\)-plane, as well as those at the different types of bifurcations. The vertical mixing coefficients are fixed here to \(\kappa_{1}=0.1\) and \(\kappa_{2}=1.0\). This choice is suitable for our purposes and in the realistic range [25], yet slightly different from the values found in the literature [24, 22]. Moreover, as can be seen from the expressions in Section 2.3, the bifurcation diagram in the \((\mu,\eta)\)-plane is qualitatively the same for any \(0<\kappa_{1}<\kappa_{2}\). ### Sliding properties and pseudo-equilibria We start by introducing some relevant notions from the theory of PWS systems. An equilibrium \(p_{i}\) of the vector field \(f_{i}\) that lies in region \(R_{i}\) is an equilibrium of the overall system and called _admissible_. An important part of the bifurcation theory of planar Filippov systems is the interaction of equilibria and other invariant objects of \(f_{1}\) and \(f_{2}\) with the switching manifold \(\Sigma\)[28]. First of all, orbits may cross the switching manifold at the _crossing segment_\(\Sigma_{c}\subset\Sigma\), along which the vector fields \(f_{1}\) and \(f_{2}\) are both transverse and have the same sign. The set of points where \(f_{1}\) and \(f_{2}\) are transversal but have opposite signs is the _sliding segment_\(\Sigma_{s}\subset\Sigma\), which we also refer to as \(\Sigma_{s}^{a}\) when it is attracting and as \(\Sigma_{s}^{r}\) when it is repelling. These different segments of the switching manifold are bounded by _tangency points_\(F_{1}\) and \(F_{2}\), where either \(f_{1}\) or \(f_{2}\) is tangent to \(\Sigma\), respectively. Generically, such a tangency of \(f_{i}\) is quadratic and isolated, and it is called visible if nearby parabolic orbits lie in \(R_{i}\), and invisible otherwise. For system (8) we have the following. 
**Proposition 1** (Tangency points and sliding segments).: _System (8) has a single sliding segment \(\Sigma_{s}\) that is delimited by two tangency points \(F_{1}\) and \(F_{2}\) at_ \[F_{i}=\begin{pmatrix}1-\mu+\eta\kappa_{i}\\ 1-\mu+(1+\kappa_{i})\eta\end{pmatrix}\in\Sigma, \tag{13}\] _which are quadratic when_ \[\mu+(\mu-\eta-1)\kappa_{i}-\eta\kappa_{i}^{2}\neq 0. \tag{14}\] _The (quadratic) tangency point \(F_{1}\) is visible for_ \[\mu+(\mu-\eta-1)\kappa_{1}-\eta\kappa_{1}^{2}<0, \tag{15}\] _and the (quadratic) tangency point \(F_{2}\) is visible for_ \[\mu+(\mu-\eta-1)\kappa_{2}-\eta\kappa_{2}^{2}>0. \tag{16}\] _Otherwise, the (quadratic) tangency at \(F_{i}\) is invisible. For \(\eta\neq 0\), system (8) has a sliding segment \(\Sigma_{s}\). When \(\eta>0\) the sliding segment is attracting, denoted \(\Sigma_{s}^{a}\) and given by_ \[\Sigma_{s}^{a}=\{s\in\Sigma,F_{1}<s<F_{2}\}, \tag{17}\] _and when \(\eta<0\) it is repelling, denoted \(\Sigma_{s}^{r}\) and given by_ \[\Sigma_{s}^{r}=\{s\in\Sigma,F_{2}<s<F_{1}\}. \tag{18}\] _Here, in a slight abuse of notation, we mean the ordering on the line \(\Sigma_{s}\), as given by the \(x\)-component._ A crucial ingredient of the theory is the extension of the flow to the sliding segment \(\Sigma_{s}\) by defining the sliding vector field \(f_{s}\). This is achieved with Filippov's convex method by forming a weighted sum of the adjoining vector fields \(f_{1}\) and \(f_{2}\) such that \(f_{s}\) is in the direction of (the tangent to) \(\Sigma_{s}\)[28, 29, 30]. With this definition, a PWS orbit is the union of orbit segments induced by the vector fields \(f_{1}\) on \(R_{1}\), \(f_{2}\) on \(R_{2}\), and \(f_{s}\) on \(\Sigma_{s}\). Moreover, every point of the phase plane lies on a unique PWS orbit of the planar Filippov system; see [30] for details. Orbits that remain in \(R_{1}\cup R_{2}\cup\Sigma_{c}\) are called _regular_, and orbits with segments on \(\Sigma_{s}\) are called _sliding orbits_. Here we use a common convention that sliding orbits continue into \(R_{1}\) or \(R_{2}\) when the end of the sliding segment \(\Sigma_{s}\) is reached (in forward or backward time, respectively, by following the trajectory from the respective tangency point) [30, 31]. However, the end of \(\Sigma_{s}\) may not be reached because the sliding vector field may have equilibria, called _pseudo-equilibria_, which are referred to as _admissible_ when they lie on \(\Sigma_{s}\). The properties of all equilibria of system (8) can be stated as follows. **Proposition 2** (Equilibria, sliding vector field and pseudo-equilibria).: _System (8) has the following equilibria and pseudo-equilibria for \(0<\kappa_{1}<\kappa_{2}\)._ 1. _The vector field_ \(f_{i}\) _has the stable nodal equilibrium_ \[p_{i}=\left(\frac{1}{1+\kappa_{i}},\frac{\mu}{\kappa_{i}}\right).\] (19) _The equilibrium_ \(p_{1}\) _is admissible when_ \[\eta>\frac{\mu}{\kappa_{1}}-\frac{1}{1+\kappa_{1}},\] (20) _and_ \(p_{2}\) _is admissible when_ \[\eta<\frac{\mu}{\kappa_{2}}-\frac{1}{1+\kappa_{2}}.\] (21) _The admissible equilibrium_ \(p_{i}\) _has a strong stable manifold_ \(W^{ss}(p_{i})\) _defined by the piecewise-smooth orbit along the linear strong stable direction_ \(W^{ss}_{loc}=\mathrm{span}\binom{1}{0}\)_._ 2. _The_ sliding vector field _defined on the sliding segment_ \(\Sigma_{s}\) _is given by_ \[f_{s}(x)=\frac{1}{\eta}\left(\eta+(1-\mu-\eta)x-x^{2}\right)\binom{1}{1},\] (22) _where_ \(\Sigma\) _is parametrised by_ \(x\)_; its zeros agree with the pseudo-equilibria in (23) below._ 3.
_There are two pseudo-equilibria, that is, equilibria of_ \(f_{s}\)_, given by_ \[q^{\pm}=\frac{1}{2}\left(1-\mu-\eta\pm\sqrt{(\eta+\mu+1)^{2}-4\mu}\right) \binom{1}{1}+\binom{0}{\eta}.\] (23) _When_ \(\eta>0\)_, the pseudo-equilibrium_ \(q^{-}\) _is asymptotically unstable and_ \(q^{+}\) _is asymptotically stable on_ \(\Sigma_{s}\)_. On the other hand, when_ \(\eta<0\)_, the pseudo-equilibrium_ \(q^{+}\) _is asymptotically unstable and_ \(q^{-}\) _is asymptotically stable on_ \(\Sigma_{s}\)_. The admissibility of these pseudo-equilibria is presented and described in Section_ 2.2_._ Global invariant manifolds of admissible equilibria are defined in complete analogy to those of smooth systems, but with regard to the piecewise-smooth flow \(\varphi^{t}\) constructed in [30]. Each admissible equilibrium \(p_{i}\in R_{i}\) of system (8) is attracting with real eigenvalues and, hence, has a strong stable manifold \(W^{ss}(p_{i})\) consisting of the two orbits that approach \(p_{i}\) tangent to the strong eigenspace. Since system (8) is piecewise linear, \(W^{ss}(p_{i})\) is actually a straight line locally near \(p_{i}\); however, this is not the case globally since the strong stable manifold typically crosses the switching manifold \(\Sigma\). We also consider here global invariant manifolds of admissible pseudo-equilibria, which we define as follows. If \(q\in\Sigma_{s}\) is a saddle pseudo-equilibrium then its stable manifold \(W^{s}(p)\) or unstable manifold \(W^{u}(p)\) is the union of the two _arriving or departing orbits_ in \(R_{1}\) and \(R_{2}\), consisting of points that reach \(q\) under the piecewise-smooth flow \(\varphi^{t}\) in finite forward or backward time, respectively. The saddle pseudo-equilibrium \(q\) then also has associated _generalised (un)stable manifolds_\(W^{u}_{g}(q)\) or \(W^{s}_{g}(q)\). These generalised manifolds consist of segments on \(\Sigma_{s}\) of points that converge to \(q\) under the sliding flow (in backward and forward time, respectively), together with their globalisation under \(\varphi^{t}\), which generally consists of departing and arriving orbits to tangency points that bound \(\Sigma_{s}\). When an admissible pseudo-equilibrium \(q\) is a nodal attractor, its arriving orbits form the strong stable manifold \(W^{ss}(p)\); similarly, a nodal repellor \(q\in\Sigma_{s}\) has the strong unstable manifold \(W^{uu}(p)\) consisting of its pair of departing orbits. ### Bifurcation diagram and structurally stable phase portraits The bifurcation diagram of system (8) consists of curves of (piecewise-smooth) bifurcations that divide the \((\mu,\eta)\)-plane into eight open regions I to VIII, which are equivalence classes of topological equivalence where the phase portraits are structurally stable. This classification is based on the following common notion [30, 31]: two planar Filippov systems \(f\) and \(\tilde{f}\) with switching manifolds \(\Sigma\) and \(\tilde{\Sigma}\), respectively, are _topologically equivalent_ if there exists an orientation preserving homeomorphism \(h:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}\) that maps \(\Sigma\) to \(\tilde{\Sigma}\) and orbits of \(f\) to orbits of \(\tilde{f}\). Note that this definition is a direct and natural extension of that for smooth dynamical systems. In particular, a bifurcation of a planar Filippov system concerns a topological change, and its codimension is given (colloquially speaking) by the number of parameters one needs to find it generically at an isolated point. 
More information and formal definitions can be found as part of the broad classification in [31] of discontinuity-induced bifurcations in planar Filippov systems. Figure 3 shows the bifurcation diagram of system (8) in the \((\mu,\eta)\)-plane, for the fixed values \(\kappa_{1}=0.1\) and \(\kappa_{2}=1.0\) of the vertical mixing rates, with the regions I to VIII. Their boundaries are formed by bifurcation curves \(\mathrm{BE}_{1}\) and \(\mathrm{BE}_{2}\) of boundary equilibrium bifurcation, FF of fold-fold Figure 3: Two-parameter bifurcation diagram of system (8) in the \((\mu,\eta)\)-plane with \(\kappa_{1}=0.1\) and \(\kappa_{2}=1.0\). The curves of boundary equilibrium bifurcation \(\mathrm{BE}_{1}\) and \(\mathrm{BE}_{2}\), fold-fold bifurcation FF, and pseudo-saddle-node bifurcation PS from Proposition 3 bound regions I to VIII with structurally stable phase portraits shown in Figures 4–6. Grey shading indicates the existence of a (crossing) periodic orbit, and blue shading bistability between equilibria. These curves intersect at the codimension-two points \(\mathrm{FB}_{1}\), \(\mathrm{FB}_{2}\), \(\mathrm{BB}\), \(\mathrm{GB}_{1}\), and \(\mathrm{GB}_{2}\) from Proposition 4, which generate segments of different bifurcation types shown in Figures 7–11. Specifically, the curves \(\mathrm{BE}_{1}\) and \(\mathrm{BE}_{2}\) consist of segments \(\mathrm{BE}_{1}^{P}\), \(\mathrm{BE}_{2}^{P}\), \(\widehat{\mathrm{BE}}_{1}^{P}\), \(\widehat{\mathrm{BE}}_{2}^{P}\) and \(\widetilde{\mathrm{BE}}_{1}^{P}\) of persistence boundary equilibrium bifurcation, and \(\mathrm{BE}_{1}^{F}\), \(\mathrm{BE}_{2}^{F}\) and \(\widehat{\mathrm{BE}}_{2}^{F}\) of non-smooth fold boundary equilibrium bifurcation. The curve FF consists of segments \(\mathrm{FF}_{1}\) and \(\mathrm{FF}_{2}\) of fold-fold bifurcation and FU of fused-focus bifurcation. bifurcation and PS of pseudo-saddle-node bifurcation that are formally presented and determined in Proposition 3. More precisely, these curves cross or meet at codimension-two bifurcation points \(\mathrm{FB}_{1}\), \(\mathrm{FB}_{2}\), \(\mathrm{BB}\), \(\mathrm{GB}_{1}\), and \(\mathrm{GB}_{2}\). As is spelled out in Proposition 4, these points divide the curves of codimension-one bifurcations into the segments of different bifurcation types that are shown and labeled in Figure 3. We first present and discuss the structurally stable phase portraits of system (8) in regions I to VIII; the different bifurcations between them are analysed and illustrated in subsequent sections. The eight cases of phase portraits are shown in Figures 4-6 in a suitable part of the \((x,y)\)-plane. In every phase portrait, the switching manifold \(\Sigma\) appears as a straight grey line that partitions phase space into the open regions \(R_{1}\) and \(R_{2}\). Admissible equilibria of system (8) are shown in black, and non-admissible equilibria in grey. Non-admissible equilibria outside the frame of interest (far away from the switching manifold) are not shown. There exists a sliding segment in each region: attracting sliding segments \(\Sigma_{s}^{a}\) are coloured blue, and repelling sliding segments \(\Sigma_{s}^{r}\) are coloured orange. In either case, the sliding segment is bounded by the quadratic tangency points \(F_{1}\) and \(F_{2}\), which are coloured cyan when visible and grey when invisible. Admissible pseudo-equilibria \(q^{-}\) and \(q^{+}\) are coloured by their stability: stable pseudo-equilibria are green, and unstable pseudo-equilibria are red.
Admissible equilibria and pseudo-equilibria may have (strong) invariant manifolds that are coloured blue when stable and red when unstable. Some representative trajectories are shown in black, and they were obtained numerically with an integrator based on event-detection, as described in [32]. Figure 4 presents phase portraits of system (8) in regions I to III, which all feature and attracting sliding segment \(\Sigma_{s}^{a}\). In the phase portrait in region I, shown in panel (a), \(\Sigma_{s}^{a}\) is bounded by a visible quadratic tangency point \(F_{1}\) on the left and an invisible quadratic tangency point \(F_{2}\) on the right. Neither of the pseudo-equilibria \(q^{-}\) and \(q^{+}\) are on \(\Sigma_{s}^{a}\) and, hence, they are non-admissible (and not shown). The equilibrium \(p_{2}\) lies in region \(R_{1}\) and is non-admissible, while \(p_{1}\in R_{1}\) is admissible and a global attractor. Orbits in \(R_{1}\) and \(R_{2}\) are either regular and converge to \(p_{1}\) or hit the attracting sliding segment \(\Sigma_{s}^{a}\), along which sliding orbits approach \(F_{1}\) and then depart into \(R_{1}\) to converge to \(p_{1}\). Note, that the strong stable manifold \(W^{ss}(p_{1})\) of \(p_{1}\) is composed of a horizontal component in \(R_{1}\) and the corresponding arriving orbit in \(R_{2}\). Crossing the segment \(\mathrm{BE}_{1}^{P}\) of boundary equilibrium bifurcation results in \(p_{1}\) becoming non-admissible by moving into \(R_{2}\) through \(F_{1}\); at the same time, a pseudo-equilibrium \(q^{+}\in\Sigma_{s}^{a}\) emerges from \(F_{1}\), where the tangency is now invisible. The resulting phase portrait in region II is shown in Figure 4(b1) with a magnification near the sliding segment in panel (b2). The pseudo-equilibrium \(q^{+}\) is a global attractor: all orbits in \(R_{1}\) and \(R_{2}\) hit the sliding segment \(\Sigma_{s}^{a}\), along which the sliding orbits converge to \(q^{+}\). Moreover, \(q^{+}\) has the strong stable manifold \(W^{ss}(q^{+})\), consisting of the two arriving orbits to \(q^{+}\) from within \(R_{1}\) and \(R_{2}\), respectively; see panel (b2). When the segment \(\mathrm{BE}_{2}^{P}\) is crossed there is again a boundary equilibrium bifurcation, but now of \(p_{2}\) at \(F_{2}\): as Figure 4(c) shows, in region III the pseudo-equilibrium \(q^{+}\) moved off \(\Sigma_{s}^{a}\) through \(F_{2}\), and \(p_{2}\) with strong stable manifold \(W^{ss}(p_{2})\) is now admissible and the global attractor. Phase portraits in regions IV to VI are presented Figure 5; as was the case for regions I to III, this also concerns the the transition from \(p_{1}\) to \(p_{2}\) being the global attractor, with the difference that there is now a repelling sliding segment \(\Sigma_{s}^{r}\). In the phase portrait in region IV, shown in panel (a), \(\Sigma_{s}^{r}\) is bounded by an invisible quadratic tangency point \(F_{2}\) on the left and by a visible quadratic tangency point \(F_{1}\) on the right; the pseudo-equilibria \(q^{-}\) and \(q^{+}\) are on \(\Sigma_{c}\) and non-admissible (and not shown). The only admissible equilibrium is \(p_{1}\in R_{1}\), and it is a global attractor. Sliding orbits on \(\Sigma_{s}^{r}\) approach \(F_{2}\), where they depart into \(R_{1}\) and converge to \(p_{1}\); however, in contrast to region III, no forward orbits hit \(\Sigma_{s}^{r}\) as this sliding segment is repelling. 
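The event-detection approach mentioned above can be sketched as follows for regular (non-sliding) orbits of system (8). The use of scipy's `solve_ivp` event handling here is an assumption made purely for illustration — reference [32] describes the integrator that was actually used — and sliding orbits would in addition require the sliding vector field (22).

```python
# Sketch of an event-detection integrator for regular orbits of system (8);
# solve_ivp is an illustrative choice, not the integrator of reference [32].
import numpy as np
from scipy.integrate import solve_ivp

kappa1, kappa2, mu, eta = 0.1, 1.0, 0.25, -0.2       # illustrative parameter point

def f(t, z, kappa):                    # f_1 for kappa1 (in R1), f_2 for kappa2 (in R2)
    return [1.0 - (1.0 + kappa) * z[0], mu - kappa * z[1]]

def sigma(t, z, kappa):                # event function: h(x, y) = y - x - eta
    return z[1] - z[0] - eta
sigma.terminal = True

def integrate_regular(z0, t_end=50.0, nudge=1e-7):
    t, z, pieces = 0.0, np.asarray(z0, float), []
    while t < t_end - nudge:
        in_R1 = z[1] - z[0] - eta < 0.0
        kappa = kappa1 if in_R1 else kappa2
        sol = solve_ivp(f, (t, t_end), z, args=(kappa,), events=sigma,
                        max_step=0.05, rtol=1e-9, atol=1e-12)
        pieces.append(sol.y)
        t, z = sol.t[-1], sol.y[:, -1].copy()
        if sol.status != 1:            # reached t_end without hitting Sigma again
            break
        # Normal components of f_1 and f_2 at the hit point: crossing or sliding?
        n_dot = [f(t, z, k)[1] - f(t, z, k)[0] for k in (kappa1, kappa2)]
        if n_dot[0] * n_dot[1] < 0.0:  # opposite signs: sliding segment reached
            break
        # Crossing: nudge the state into the other region along the new vector field.
        z += nudge * np.array(f(t, z, kappa2 if in_R1 else kappa1))
    return np.hstack(pieces)

orbit = integrate_regular([0.9, 0.5])  # a regular orbit at this parameter point
```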
When crossing segment \(\widehat{\mathrm{BE}}_{1}^{P}\), we find again a (persistence) boundary equilibrium bifurcation where \(p_{1}\) moves through \(F_{1}\) and becomes non-admissible. As Figure 5(b1) and the magnification in panel (b2) show, in region V this results again in the pseudo-equilibrium \(q^{+}\) being admissible. However, \(q^{+}\in\Sigma_{s}^{r}\) is now a repelling node with strong unstable manifold \(W^{uu}(q^{+})\), consisting of the departing orbits from \(q^{+}\) in \(R_{1}\) and \(R_{2}\), respectively. Importantly, in region V there is a stable (crossing) periodic orbit \(\Gamma\), which is composed of orbit segments of \(f_{1}\) in \(R_{1}\) and \(f_{2}\) in \(R_{2}\) that join on the crossing segment \(\Sigma_{c}\). All points except \(q^{+}\) converge to this periodic orbit; in particular, \(W^{uu}(q^{+})\) accumulates on \(\Gamma\), while initial conditions on \(\Sigma_{s}^{r}\backslash\{q^{+}\}\) move to an end point \(F_{2}\) or \(F_{1}\) of \(\Sigma_{s}^{r}\), where they depart into \(R_{1}\) or \(R_{2}\), respectively, to converge to \(\Gamma\); see panel (b2). Crossing segment \(\widehat{\text{BE}}_{2}^{P}\) concerns a second (persistence) boundary equilibrium bifurcation, but now of \(p_{2}\) at the tangent point \(F_{2}\). As a result, the now admissible equilibrium \(p_{2}\) is indeed the global attractor in region VI, as is shown in Figure 5(c). Phase portraits for regions VII and VIII are presented in Figure 6; they both still feature the Figure 4: Representative phase portraits in regions I to III along the horizontal slice \(\eta=0.35\) of the \((\mu,\eta)\)-plane, each with an attracting sliding segment \(\Sigma_{s}^{a}\) bounded by quadratic tangency points \(F_{1}\) and \(F_{2}\). Panel (a) for \(\mu=0.0225\) shows the admissible equilibrium \(p_{1}\) with its strong stable manifold \(W^{ss}(p_{1})\), as well as the non-admissible equilibrium \(p_{2}\in R_{1}\). Panel (b1) for \(\mu=0.385\) and magnification (b2) near the sliding segment show \(p_{2}\in R_{1}\) and the attracting pseudo-node \(q^{+}\) with strong stable manifold \(W^{ss}(q^{+})\). Panel (c) for \(\mu=0.975\) shows the admissible equilibrium \(p_{2}\in R_{2}\) with \(W^{ss}(p_{2})\). repelling sliding segment \(\Sigma_{s}^{r}\), bounded by the quadratic tangency points \(F_{2}\) on the left and \(F_{1}\) on the right. In region VII, as in panel (a1) with a magnification in panel (a2), we find the repelling pseudo-equilibrium \(q^{+}\in\Sigma_{s}^{r}\) as in region V. However, due to the transition through the bounding segment \(\widehat{\mathrm{BE}}_{2}^{F}\) of (non-smooth fold) boundary equilibrium bifurcation, the equilibrium \(p_{2}\) is now in open region \(R_{2}\) and admissible, and the second pseudo-equilibrium \(q^{-}\) now also lies on \(\Sigma_{s}^{r}\). The point \(p_{2}\) attracts all points, apart from those on the generalised stable manifold \(W_{g}^{s}(q^{-})\) of \(q^{-}\), which is composed of sliding orbits on \(\Sigma_{s}^{r}\) approaching \(q^{-}\) and the arriving orbit to \(F_{2}\). The unstable manifold \(W^{u}(q^{-})\) of \(q^{-}\) and strong unstable manifold \(W^{uu}(q^{+})\) both converge to the attractor \(p_{2}\). Note that \(W^{ss}(p_{2})\) is composed of a horizontal component in \(R_{1}\), the corresponding sliding orbit in \(\Sigma_{s}^{r}\) and the arriving orbit to \(F_{1}\). 
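The admissibility statements used in these phase-portrait descriptions follow directly from Proposition 2 and are easy to check numerically. The small sketch below (an illustration only) does so at the parameter point used for Figure 6(b) below, where both \(p_{1}\) and \(p_{2}\) turn out to be admissible.

```python
# Sketch: checking the admissibility conditions of Proposition 2 numerically.
# The parameter point is the one used for Figure 6(b); kappa_1, kappa_2 as above.
import numpy as np

kappa1, kappa2 = 0.1, 1.0
mu, eta = -0.115, -0.95

def admissible(mu, eta):
    out = {}
    for name, kappa, in_region in (("p1", kappa1, lambda x, y: y < x + eta),
                                   ("p2", kappa2, lambda x, y: y > x + eta)):
        x, y = 1.0 / (1.0 + kappa), mu / kappa       # equilibrium (19)
        out[name] = in_region(x, y)                  # conditions (20) and (21)
    disc = (eta + mu + 1.0) ** 2 - 4.0 * mu          # discriminant in (23)
    lo, hi = sorted((1.0 - mu + eta * kappa1, 1.0 - mu + eta * kappa2))
    for name, sign in (("q-", -1.0), ("q+", 1.0)):
        x = 0.5 * (1.0 - mu - eta + sign * np.sqrt(disc)) if disc >= 0 else np.nan
        out[name] = bool(lo < x < hi)                # lies on the sliding segment?
    return out

print(admissible(mu, eta))
# {'p1': True, 'p2': True, 'q-': True, 'q+': False}: both equilibria coexist here
```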
When crossing segment \(\widetilde{\mathrm{BE}}_{1}^{P}\) into region VIII, there is a (persistence) boundary equilibrium bifurcation, at which \(p_{1}\) becomes admissible and the pseudo-equilibrium \(q^{+}\) becomes non-admissible Figure 5: Representative phase portraits in regions IV to VI along a horizontal slice \(\eta=-0.2\) of the \((\mu,\eta)\)-plane, each with a repelling sliding segment \(\Sigma_{s}^{r}\) bounded by quadratic tangency points \(F_{1}\) and \(F_{2}\). Panel (a) for \(\mu=0.0225\) is similar to Figure 4(a), but now the sliding segment is repelling. Panel (b1) for \(\mu=0.25\) and the magnification near the sliding segment (b2) features a repelling pseudo-node \(q^{+}\) with a strong unstable manifold \(W^{uu}(q^{+})\) and a (crossing) periodic orbit \(\Gamma\) that encircles \(\Sigma_{s}^{r}\). Panel (c) for \(\mu=0.525\) is similar to Figure 4(c), but now the sliding segment is repelling. by moving through \(F_{1}\) onto the crossing segment \(\Sigma_{c}\). As the phase portrait in Figure 6(b) shows, both \(p_{1}\in R_{1}\) and equilibrium \(p_{2}\in R_{2}\) are now attractors in region VIII; hence, this is the region of bistability. The generalised stable manifold \(W^{s}_{g}(q^{-})\) of the saddle pseudo-equilibrium \(q^{-}\in\Sigma^{r}_{s}\) is now composed of \(\Sigma^{r}_{s}\) and the arriving orbits to both \(F_{1}\) and \(F_{2}\), and it forms the boundary between the basins of attraction of the attractors \(p_{1}\) and \(p_{2}\). Indeed, the lower branch of \(W^{u}(q^{-})\) converges to \(p_{1}\), and its upper branch to \(p_{2}\). Figure 6: Representative phase portraits in regions VII and VIII, both with a repelling sliding segment \(\Sigma^{r}_{s}\) bounded by quadratic tangency points \(F_{1}\) and \(F_{2}\). Panel (a1) for \((\mu,\eta)=(0.0987,-0.463)\) and the magnification near the sliding segment (a2) show \(p_{2}\) with \(W^{ss}(p_{2})\), pseudo-saddle-equilibrium \(q^{-}\) with a generalised stable manifold \(W^{s}_{g}(q^{-})\) and unstable manifold \(W^{u}(q^{-})\), and the repelling pseudo-node \(q^{+}\) with strong unstable manifold \(W^{uu}(q^{+})\). Panel (b) for \((\mu,\eta)=(-0.115,-0.95)\) shows the simultaneously admissible equilibria \(p_{1}\) with \(W^{ss}(p_{1})\) and \(p_{2}\) with \(W^{ss}(p_{2})\), and the pseudo-saddle-equilibrium \(q^{-}\) with \(W^{s}_{g}(q^{-})\) and \(W^{u}(q^{-})\). ### Codimension-one and codimension-two bifurcations We now present analytical expressions for all (non-smooth) bifurcations of system (8) of codimension one and two in Propositions 3 and 4, respectively. The respective proofs can be found in Appendix A. **Proposition 3** (Codimension-one bifurcations).: _System (8) has the following codimension-one bifurcations, further information on which can be found in [31]._ 1. Boundary equilibrium bifurcations _of_ \(p_{1}\) _and_ \(p_{2}\) _occur, respectively, along the straight lines_ \[\mathrm{BE}_{1}: (\mu,\eta)=\left(\mu,\ \frac{\mu}{\kappa_{1}}-\frac{1}{1+ \kappa_{1}}\right),\] \[\mathrm{BE}_{2}: (\mu,\eta)=\left(\mu,\ \frac{\mu}{\kappa_{2}}-\frac{1}{1+ \kappa_{2}}\right).\] _At_ \(\mathrm{BE}_{i}\) _the equilibrium_ \(p_{i}\) _collides with the tangency point_ \(F_{i}\)_, changing its visibility. The tangency point_ \(F_{1}\) _is visible in regions_ \(\mathrm{I,IV}\) _and_ \(\mathrm{VIII}\)_. Similarly, the tangency point_ \(F_{2}\) _is visible in regions_ \(\mathrm{III,VI,VII}\) _and_ \(\mathrm{VIII}\)_. 
The pseudo-equilibrium_ \(q^{+}\) _is admissible in regions_ \(\mathrm{II,V}\) _and_ \(\mathrm{VII}\)_. Similarly, the pseudo-equilibrium_ \(q^{-}\) _is admissible in regions_ \(\mathrm{VII}\) _and_ \(\mathrm{VIII}\)_._ 2. Fold-fold bifurcations _occur along the horizontal line_ \[\mathrm{FF}:\ \ (\mu,\eta)=(\mu,\ 0).\] _At_ \(\mathrm{FF}\) _the tangency points_ \(F_{1}\) _and_ \(F_{2}\) _coincide at a singular tangency point_ \(F^{*}\) _and switch places on the sliding segment boundary, resulting in the sliding segments changing between being attracting and repelling_ [31]; _see also Proposition 1 for a description of the tangency points._ 3. A pseudo-saddle-node bifurcation _occurs along the curve segment_ \[\mathrm{PS}:\ \ (\mu,\eta)=(\mu,\ -(\mu+1)+2\sqrt{\mu}),\ \ \ \frac{\kappa_{1}^{2}}{(\kappa_{1}+1)^{2}}<\mu<\frac{\kappa_{2}^{2}}{(\kappa_{2}+1)^{2}}.\] _Along_ \(\mathrm{PS}\) _the pseudo-equilibria_ \(q^{-}\) _and_ \(q^{+}\) _form a saddle-node at_ \[q^{*}=(1-\sqrt{\mu})\binom{1}{1}+\binom{0}{\eta}\] _on the repelling sliding segment_ \(\Sigma_{s}^{r}\)_._ The curves \(\mathrm{BE}_{1}\), \(\mathrm{BE}_{2}\), FF and PS from Proposition 3 intersect or meet at codimension-two bifurcation points. These points divide \(\mathrm{BE}_{1}\), \(\mathrm{BE}_{2}\), FF into the segments shown in Figure 3, along which the respective codimension-one bifurcation manifests itself in a topologically different way, as follows. **Proposition 4** (Codimension-two bifurcations).: _System (8) has the following codimension-two bifurcations for \(0<\kappa_{1}<\kappa_{2}\)._ 1. Fold-boundary equilibrium bifurcations \[\mathrm{FB}_{1}: (\mu,\eta)=\left(\frac{\kappa_{1}}{1+\kappa_{1}},\ 0\right),\] (24) \[\mathrm{FB}_{2}: (\mu,\eta)=\left(\frac{\kappa_{2}}{1+\kappa_{2}},\ 0\right),\] (25) occur at the intersection point of the curve \(\mathrm{FF}\) with the curves \(\mathrm{BE}_{1}\) and \(\mathrm{BE}_{2}\), respectively. At the point \(\mathrm{FB}_{i}\), the equilibrium \(p_{i}\) collides with the singular tangency point \(F^{*}\). The point \(\mathrm{FB}_{i}\) divides the curve \(\mathrm{FF}\) locally into segments \(\mathrm{FF}_{1}\) and \(\mathrm{FF}_{2}\), which is the case of fold-fold bifurcation of type VI\({}_{1}\) as presented in [31], and the segment \(\mathrm{FU}\) of fused-focus bifurcation [31, 33] along which the (crossing) periodic orbit \(\Gamma\) (dis)appears. Both of these fold-fold bifurcations result in the sliding segment changing between being repelling and attracting, and the quadratic tangency points \(F_{i}\) switching places as the sliding segment boundaries. The point \(\mathrm{FB}_{i}\) also divides the curve \(\mathrm{BE}_{i}\) locally into segment \(\mathrm{BE}_{i}^{P}\), where there is a standard persistence boundary equilibrium bifurcation with a nodal equilibrium as presented in [31], and a segment \(\widehat{\mathrm{BE}}_{i}^{P}\) along which a stable (crossing) periodic orbit \(\Gamma\) (dis)appears in a homoclinic-like persistence boundary equilibrium bifurcation. 2. A double-boundary equilibrium bifurcation \[\mathrm{BB}:\ \ (\mu,\eta)=\left(\frac{\kappa_{1}\kappa_{2}}{(\kappa_{1}+1)(\kappa_{2}+1)},\ \ \frac{-1}{(\kappa_{1}+1)(\kappa_{2}+1)}\right),\] (26) occurs at the intersection of the curves \(\mathrm{BE}_{1}\) and \(\mathrm{BE}_{2}\). At point \(\mathrm{BB}\) the equilibria \(p_{1}\) and \(p_{2}\) simultaneously collide with the two different quadratic tangency points \(F_{1}\) and \(F_{2}\), respectively.
The point \(\mathrm{BB}\) divides the curve \(\mathrm{BE}_{1}\) locally into segment \(\widehat{\mathrm{BE}}_{1}^{P}\), along which a stable (crossing) periodic orbit \(\Gamma\) (dis)appears, and a segment \(\widetilde{\mathrm{BE}}_{1}^{P}\), where there is a standard persistence boundary equilibrium bifurcation with a nodal equilibrium as presented in [31]. Similarly, the curve \(\mathrm{BE}_{2}\) is divided locally by \(\mathrm{BB}\) into the segment \(\widehat{\mathrm{BE}}_{2}^{F}\), along which the (crossing) periodic orbit \(\Gamma\) (dis)appears, and a segment \(\mathrm{BE}_{2}^{F}\), where there is the standard non-smooth fold boundary equilibrium bifurcation with a nodal equilibrium [31]. 3. Generalised boundary equilibrium bifurcations [30] \[\mathrm{GB}_{1}: (\mu,\eta)=\left(\frac{\kappa_{1}^{2}}{(\kappa_{1}+1)^{2}},\ \ -\frac{1}{(\kappa_{1}+1)^{2}}\right),\] (27) \[\mathrm{GB}_{2}: (\mu,\eta)=\left(\frac{\kappa_{2}^{2}}{(\kappa_{2}+1)^{2}},\ \ -\frac{1}{(\kappa_{2}+1)^{2}}\right),\] (28) occur at end points of the curve \(\mathrm{PS}\), respectively, on the curves \(\mathrm{BE}_{1}\) and \(\mathrm{BE}_{2}\). At the point \(\mathrm{GB}_{i}\), equilibrium \(p_{i}\) collides with the quadratic tangency point \(F_{i}\). At the same time, a pseudo-saddle-node bifurcation takes place at \(F_{i}\), resulting in a generalised boundary equilibrium bifurcation with respect to \(f_{i}\). The point \(\mathrm{GB}_{1}\) separates the curve \(\mathrm{BE}_{1}\) locally into the segment \(\widehat{\mathrm{BE}}_{1}^{P}\) and the segment \(\mathrm{BE}_{1}^{F}\) of boundary equilibrium bifurcations. Similarly, the point \(\mathrm{GB}_{2}\) separates the curve \(\mathrm{BE}_{2}\) locally into the segment \(\widehat{\mathrm{BE}}_{2}^{P}\) and the segment \(\widehat{\mathrm{BE}}_{2}^{F}\) of boundary equilibrium bifurcations. ### Phase portraits at codimension-one bifurcations We now present in Figures 7-11 phase portraits in the \((x,y)\)-plane for each segment of codimension-one bifurcation introduced in Proposition 4 and shown and labeled accordingly in Figure 3. Here, we take a global view of each such transition to allow for comparison with the respective neighbouring structurally stable phase portraits in Figures 4-6. #### 2.4.1 Fold-fold and pseudo-Hopf bifurcations Figure 7 shows the phase portraits along segments \(\mathrm{FF}_{1}\), \(\mathrm{FU}\), and \(\mathrm{FF}_{2}\), each with a singular tangency point \(F^{*}\). The phase portrait along segment \(\mathrm{FF}_{1}\), which separates regions I and IV, is shown in panel (a). The admissible equilibrium \(p_{1}\in R_{1}\) is a global attractor with strong stable manifold \(W^{ss}(p_{1})\). The singular tangency at the point \(F^{*}\) is invisible to \(f_{2}\) and visible to \(f_{1}\), and orbits of Figure 7: Representative phase portraits of segments \(\mathrm{FF}_{1},\mathrm{FU}\) and \(\mathrm{FF}_{2}\) along \(\eta=0\), each with a singular tangency point \(F^{*}\). Panel (a) for \(\mu=0.0225\) on \(\mathrm{FF}_{1}\) shows the global attractor \(p_{1}\) with \(W^{ss}(p_{1})\) and the equilibrium \(p_{2}\). Panel (b1) for \(\mu=0.25\) on FU and the magnification (b2) shows the weakly attracting fold-fold point \(F^{*}\); a representative orbit is highlighted in purple. Panel (c) for \(\mu=0.7\) on \(\mathrm{FF}_{2}\) shows the global attractor \(p_{2}\) with \(W^{ss}(p_{2})\). system (8) are collinear at \(F^{*}\).
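The analytic expressions of Propositions 3 and 4 can be evaluated directly to reproduce the skeleton of the bifurcation set in Figure 3. The following minimal Python sketch simply transcribes those formulas (with \(\kappa_{1}=0.1\) and \(\kappa_{2}=1.0\) as fixed in the text); the \(\mu\)-ranges are only illustrative choices.

```python
import numpy as np

kappa1, kappa2 = 0.1, 1.0
mu = np.linspace(-0.3, 1.0, 400)

# codimension-one curves (Proposition 3)
eta_BE1 = mu / kappa1 - 1.0 / (1.0 + kappa1)            # boundary equilibrium of p_1
eta_BE2 = mu / kappa2 - 1.0 / (1.0 + kappa2)            # boundary equilibrium of p_2
eta_FF  = np.zeros_like(mu)                             # fold-fold line eta = 0
mu_PS   = np.linspace(kappa1**2 / (kappa1 + 1)**2,
                      kappa2**2 / (kappa2 + 1)**2, 200)
eta_PS  = -(mu_PS + 1.0) + 2.0 * np.sqrt(mu_PS)         # pseudo-saddle-node segment

# codimension-two points (Proposition 4)
FB1 = (kappa1 / (1 + kappa1), 0.0)
FB2 = (kappa2 / (1 + kappa2), 0.0)
BB  = (kappa1 * kappa2 / ((kappa1 + 1) * (kappa2 + 1)),
       -1.0 / ((kappa1 + 1) * (kappa2 + 1)))
GB1 = (kappa1**2 / (kappa1 + 1)**2, -1.0 / (kappa1 + 1)**2)
GB2 = (kappa2**2 / (kappa2 + 1)**2, -1.0 / (kappa2 + 1)**2)
```

Plotting these curves and points over the \((\mu,\eta)\)-ranges of Figure 3 recovers the boundaries of regions I to VIII discussed above.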
The phase portrait along segment FU, separating regions II and V, is shown in panel (b1) with a magnification near \(F^{*}\) in panel (b2). The singular tangency at \(F^{*}\) is now invisible to both vector fields \(f_{1}\) and \(f_{2}\), and orbits are anti-collinear at \(F^{*}\). Therefore, orbits spiral inward toward \(F^{*}\) (at a very slow rate), and this point is a global attractor. This situation is reminiscent of a (supercritical) Hopf bifurcation for smooth dynamical systems, which is why this bifurcation is also known as a pseudo-Hopf bifurcation [33]. The phase portrait along segment \(\mathrm{FF}_{2}\), which separates regions III and VI, is presented in panel (c). The singular tangency at \(F^{*}\) is now visible to \(f_{2}\) and invisible to \(f_{1}\), and \(p_{2}\in R_{2}\) is admissible and the global attractor. #### 2.4.2 Boundary equilibrium and pseudo-saddle-node bifurcations The phase portraits along the segments of the curves \(\mathrm{BE}_{1}\) and \(\mathrm{BE}_{2}\) from Proposition 3 are characterised by an equilibrium of system (8) being on the switching manifold, but they have different global manifestations. The phase portrait along segment \(\mathrm{BE}_{1}^{P}\), which separates regions I via II, is shown in Figure 8(a). It has the attracting sliding segment \(\Sigma_{s}^{a}\) bounded by the boundary-node \(p_{1}\) on the left, and by the invisible quadratic tangency point \(F_{2}\) on the right; both pseudo-equilibria \(q^{-}\) and \(q^{+}\) are non-admissible (and not shown). The equilibrium \(p_{2}\) is not admissible and the boundary-node \(p_{1}\) is a global attractor; note that its strong stable manifold \(W^{ss}(p_{1})\) consists only of the horizontal arriving orbit to \(p_{1}\) in \(R_{1}\). The phase portrait in panel (b) along segment \(\mathrm{BE}_{2}^{P}\), separating regions II and III, is the corresponding situation but for the boundary-node \(p_{2}\): this point is now the global attractor with strong stable manifold \(W^{ss}(p_{2})\) in \(R_{2}\), and it bounds \(\Sigma_{s}^{a}\) together with the invisible quadratic tangency point \(F_{1}\). Figure 9(a1) shows the phase portrait along segment \(\widehat{\mathrm{BE}}_{1}^{P}\), separating regions IV and V, with a magnification in panel (a2) near the sliding segment, which is now repelling. Here \(\Sigma_{s}^{r}\) is bounded by the invisible quadratic tangency point \(F_{2}\) on the left and by the boundary-node \(p_{1}\) on the right, with both pseudo-equilibria non-admissible (and not shown). The point \(p_{1}\) is globally attracting with strong stable manifold \(W^{ss}(p_{1})\) in \(R_{1}\). However, the vector field \(f_{2}\) is transverse to \(\Sigma\) at \(p_{1}\), and this Figure 8: Representative phase portraits along the segments \(\mathrm{BE}_{1}^{P}\) and \(\mathrm{BE}_{2}^{P}\) on the horizontal slice \(\eta=0.35\) of the \((\mu,\eta)\)-plane, each with an attracting sliding segment \(\Sigma_{s}^{a}\). Panel (a) for \(\mu=0.1259\) on \(\mathrm{BE}_{1}^{P}\) shows boundary equilibrium \(p_{1}\) with \(W^{ss}(p_{1})\). Panel (b) for \(\mu=-0.25\) on \(\mathrm{BE}_{2}^{P}\) shows the boundary equilibrium \(p_{2}\) with \(W^{ss}(p_{2})\). departing orbit forms a non-sliding homoclinic connection \(\Gamma_{1}^{*}\) back to the boundary-node \(p_{1}\). Observe in panel (a2) that \(\Gamma_{1}^{*}\) bounds a region of a family of homoclinic orbits that involve sliding (in backward time) along the repelling sliding segment \(\Sigma_{s}^{r}\). 
The orbit labeled \(\gamma_{1}^{*}\), consisting of \(\Sigma_{s}^{r}\) and the departing orbit from \(F_{2}\), is the maximal sliding homoclinic orbit: it divides this region inside \(\Gamma_{1}^{*}\) into homoclinic orbit that remain in \(R_{1}\) from those that have segments in both \(R_{1}\) and \(R_{2}\). The phase portrait along segment \(\widehat{\text{BE}}_{2}^{P}\), which separates regions V and VI, is shown similarly in Figure 9(b1) and (b2). The overall picture is effectively that same, but now \(p_{2}\) is the globally attracting boundary equilibrium on \(\Sigma_{s}^{r}\), with analogous non-sliding and maximal sliding homoclinic orbits \(\Gamma_{2}^{*}\) and \(\gamma_{2}^{*}\), respectively. The characterising feature of this bifurcation is the existence of a non-sliding and crossing homoclinic connection \(\Gamma_{i}^{*}\), from which the stable (crossing) periodic orbit \(\Gamma\) in region V bifurcates; compare with Figure 5(b). This type of (persistence) boundary equilibrium bifurcation is hardly discussed in the literature; to our knowledge, it has only been observed in the related Welander's box model in [24], where it is referred to as a homoclinic-like boundary equilibrium bifurcation. The phase portrait along segment \(\widehat{\mathrm{BE}}_{2}^{F}\), which separates regions V and VII, is shown in Figure 10(a1) with a magnification near the repelling sliding segment in panel (a2). Here, \(\Sigma_{s}^{r}\) is bounded by the attracting boundary-node \(p_{2}\) on the left and by an invisible tangency \(F_{1}\) on the right; moreover, it contains the admissible and repelling pseudo-equilibrium \(q^{+}\in\Sigma_{s}^{r}\) (while the pseudo-equilibrium \(q^{-}\) is non-admissible and not shown). As was the case along segment \(\widehat{\mathrm{BE}}_{1}^{F}\), the phase portrait in Figure 10(a) features a (crossing) homoclinic orbit \(\Gamma_{2}^{*}\) of \(p_{2}\). However, due to the existence of \(q^{+}\) on \(\Sigma_{s}^{r}\), this special orbit does now not bound a region with further (sliding) homoclinic orbits. Regardless, \(\Gamma_{2}^{*}\) is still the limit of the stable (crossing) periodic orbit \(\Gamma\) in region V. Note that all points inside the region bounded by \(\Gamma_{2}^{*}\) converge in backward time to the unstable pseudo-equilibrium \(q^{+}\), whose strong unstable manifold \(W^{uu}(q^{+})\) converges to \(p_{2}\); see panel (a2). Segment \(\widetilde{\mathrm{BE}}_{1}^{P}\) separates regions VII and VIII, and the phase portrait along it is shown in Figure 10(b). Here, \(p_{1}\) is the attracting boundary-node, and the quadratic tangency \(F_{2}\) is visible. The equilibrium \(p_{2}\in R_{2}\) is admissible and also attracting. Moreover, the pseudo-equilibrium \(q^{-}\) lies on the repelling sliding section \(\Sigma_{s}^{r}\), and it is a saddle. Its generalised stable manifold \(W_{g}^{s}(q^{-})\) consists of \(\Sigma_{s}^{r}\) and the arriving orbit to \(F_{2}\). The boundary between the basins of attraction of \(p_{1}\) and \(p_{2}\) is formed by the union of \(W_{g}^{s}(q^{-})\) and the strong stable manifold \(W^{ss}(p_{1})\) in \(R_{1}\). Points below these curves and including \(W^{ss}(p_{1})\) converge to \(p_{1}\), while points above these curves converge to \(p_{2}\). The pseudo-saddle-node bifurcation along the curve PS separates regions VI and VII, and its phase portrait is shown in Figure 10(c). 
As the name suggests, there is a saddle-node \(q^{*}\) of pseudo-equilibria on the repelling sliding segment \(\Sigma_{s}^{r}\), which is the limiting point where the admissible pseudo-equilibria \(q^{-},q^{+}\in\Sigma_{s}^{r}\) in region VII (dis)appear. Note that \(q^{*}\) is semi-stable on \(\Sigma_{s}^{r}\) and has the strong unstable manifold \(W^{uu}(q^{*})\). The points on \(\Sigma_{s}^{r}\) in between the visible quadratic tangency point \(F_{2}\) and \(q^{*}\) end up at \(q^{*}\) under the sliding flow; all other points in the \((x,y)\)-plane converge to the admissible and stable equilibrium \(p_{2}\in R_{2}\) with strong stable manifold \(W^{ss}(p_{2})\). Finally, the boundary equilibrium bifurcations along segments \(\mathrm{BE}_{2}^{F}\) and \(\mathrm{BE}_{1}^{F}\) are encountered in the transition from region IV via region VIII to region VI. In the phase portrait for \(\mathrm{BE}_{2}^{F}\) in Figure 11(a), the attracting boundary-node \(p_{2}\) bounds the repelling sliding segment \(\Sigma_{s}^{r}\) on the left, while a visible quadratic tangency point \(F_{1}\) bounds it on the right. There are no pseudo-equilibria on \(\Sigma_{s}^{r}\), and the equilibrium \(p_{1}\in R_{1}\) is admissible and also attracting. Trajectories above and including the union of \(W^{ss}(p_{2})\) in \(R_{2}\), \(\Sigma_{s}^{r}\) and the arriving orbit to \(F_{1}\) in \(R_{1}\) converge to \(p_{2}\), and orbits below this union converge to \(p_{1}\). The phase portrait along segments \(\mathrm{BE}_{1}^{F}\) in Figure 11(b) is effectively the same with the roles of \(p_{1}\) and \(p_{2}\) exchanged. Here, the orbits below and including the union of the Figure 11: Representative phase portraits along the segments \(\mathrm{BE}_{2}^{F}\) and \(\mathrm{BE}_{1}^{F}\), each with a repelling sliding segment \(\Sigma_{s}^{r}\). Panel (a) for \((\mu,\eta)=(-0.05,-0.55)\) on \(\mathrm{BE}_{2}^{F}\) shows the boundary equilibrium \(p_{2}\) with \(W^{ss}(p_{2})\) and the admissible equilibrium \(p_{1}\in R_{1}\) with \(W^{ss}(p_{1})\). Panel (b) for \((\mu,\eta)=(0.0209,-0.7)\) on \(\mathrm{BE}_{1}^{F}\) similarly shows the boundary equilibrium \(p_{1}\) with \(W^{ss}(p_{1})\) and the admissible equilibrium \(p_{2}\in R_{2}\) with \(W^{ss}(p_{2})\). arriving orbit to \(F_{2}\) in \(R_{2}\), \(\Sigma_{s}^{r}\) and \(W^{ss}(p_{1})\) in \(R_{1}\) converge to the attracting boundary node \(p_{1}\in\Sigma_{s}^{r}\), while orbits above this union converge to the attracting equilibrium \(p_{2}\in R_{2}\). ## 3 Bifurcation analysis of the smooth model We now investigate the smooth model (7) for small \(\varepsilon>0\). Here we again fix the vertical mixing coefficients to \(\kappa_{1}=0.1\) and \(\kappa_{2}=1.0\), to enable a direct comparison of the bifurcation diagram of system (7) with that of the limiting case of system (8). We first consider the bifurcation diagram in the \((\mu,\eta)\)-plane of system (7) for the fixed value of \(\varepsilon=0.1\). It is shown in Figure 12 and was obtained by computing the shown bifurcation curves and codimension-two points with the continuation package AUTO-07p [27], guided by established bifurcation theory [26]. One clearly observes four main open regions, denoted \(A,B,C\) and \(D\), on which we focus here; associated phase portraits are shown in Figures 13-15. A main element of the bifurcation diagram in Figure 12 is a curve S of saddle-node bifurcation with two branches that meet at the cusp point \(\mathrm{CP}\). 
Along each branch of S there are points \(\mathrm{BT}_{1}\) and \(\mathrm{BT}_{2}\) of Bogdanov-Takens bifurcation (one close to \(\mathrm{CP}\)). From these points a curve H of Hopf bifurcation emerges, which is the second main element of the bifurcation diagram. Together, the Figure 12: Two-parameter bifurcation diagram in the \((\mu,\eta)\)-plane of system (7) for \(\varepsilon=0.1\) and with \(\kappa_{1}=0.1\), \(\kappa_{2}=1.0\). Shown are curves of Hopf bifurcation H (red, solid when supercritical, dashed when subcritical), saddle-node bifurcation S (black when on periodic orbit and grey otherwise) and homoclinic bifurcation \(\mathrm{h}_{1}\) (green), which are the main curves that divide the \((\mu,\eta)\)-plane into the large regions \(A,B,C\) and \(D\). Also shown are codimension-two points \(\mathrm{CP}\), \(\mathrm{BT}_{1}\), \(\mathrm{BT}_{2}\), \(\mathrm{GH}_{1}\), \(\mathrm{GH}_{2}\), \(\mathrm{N}_{1}\) and \(\mathrm{N}_{2}\); grey shading indicates the existence of a stable periodic orbit, and blue shading bistability between equilibria. curves S and H effectively form the boundaries of the four main regions \(A\), \(B\), \(C\) and \(D\). Additional ingredients are: the change of criticality of H at generalised Hopf points GH\({}_{1}\) and GH\({}_{2}\); a curve h\({}_{1}\) of homoclinic bifurcation; and a segment of S, bounded by points N\({}_{1}\) and N\({}_{2}\) of non-central homoclinic bifurcation [34], where the saddle-node bifurcation occurs on a periodic orbit (also known as SNIC or SNIPER). We remark that the complete bifurcation diagram in the \((\mu,\eta)\)-plane involves subtle additional bifurcation phenomena near the points GH\({}_{1}\), GH\({}_{2}\), N\({}_{1}\) and N\({}_{2}\) that are indistinguishable on the scale of Figure 12; these include very narrow regions bounded by additional curves of homclinic bifurcation and of saddle-node bifurcation of periodic orbits, and their discussion is beyond the scope of this paper. ### Phase portraits in the main regions of the \((\mu,\eta)\)-plane Region \(A\) of Figure 12 is bounded by the respective (supercritical) part of the curves S and H. Comparison with Figure 3 shows that \(A\) is the largest region and 'covers' the five regions I, II, III, Figure 13: Phase portraits at the points \(A_{1}\) at \(\mu=0.01\) and \(A_{2}\) at \(\mu=0.1\) with \(\eta=-0.4\) from region \(A\) in Figure 12. Panels (a1) and (b1) shows the phase portrait on the graph of \(\mathcal{H}_{0.1}(x,y)\), and panels (a2) and (b2) in the \((x,y)\)-plane. Featured is the equilibrium \(p\), its strong stable manifold \(W^{ss}(p)\) (blue curve) when it exists, and some representative trajectories (purple curves). IV and VI of the PWS system (8). Throughout region \(A\), there is a single attracting equilibrium, denoted \(p\), which may correspond to distinct mixing states: weak (non-convective) mixing near \(\kappa_{1}\), an intermediate state in between convective and non-convective mixing, or strong (convective) mixing near \(\kappa_{2}\). This is illustrated in Figures 13 and 14(a) with phase portraits at the parameter points labeled \(A_{1}\), \(A_{2}\) and \(A_{3}\) in Figure 12. We show all phase portrait of system (7) in two ways to indicate when the dynamics corresponds to \(\kappa_{1}\) or \(\kappa_{2}\): on the graph of \(\mathcal{H}_{\varepsilon}(y-x-\eta)\) over the \((x,y)\)-plane and on the \((x,y)\)-plane itself, where we use coloring as in Figure 2. 
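To compute such phase portraits of the smooth system (7), one only needs to integrate the vector field with the interpolated mixing coefficient. A minimal sketch follows; it assumes the same right-hand-side structure as in the PWS reconstruction above and uses a tanh-type sigmoid as a stand-in for the transition function \(\mathcal{H}_{\varepsilon}\) of (4), which is not reproduced in this excerpt.

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa1, kappa2 = 0.1, 1.0

def H(s, eps):
    # smooth stand-in for the transition function H_eps(s); exact form of (4) assumed
    return 0.5 * (1.0 + np.tanh(s / eps))

def f_smooth(t, z, mu, eta, eps):
    x, y = z
    kappa = kappa1 + (kappa2 - kappa1) * H(y - x - eta, eps)
    return [1.0 - (1.0 + kappa) * x, mu - kappa * y]

# trajectory at parameter point A_2 of Figure 13(b): mu = 0.1, eta = -0.4, eps = 0.1
sol = solve_ivp(f_smooth, (0.0, 300.0), [0.3, 0.2], args=(0.1, -0.4, 0.1),
                rtol=1e-9, atol=1e-12, dense_output=True)
```

Colouring each point of the computed trajectory by the value of \(H\) reproduces the visual distinction between near-\(\kappa_{1}\) and near-\(\kappa_{2}\) dynamics used in Figures 13-15.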
At parameter point \(A_{1}\) as in Figure 13(a), the single stable equilibrium \(p\) lies in the region with \(\mathcal{H}_{0.1}(y-x-\eta)\) near \(0\) (that is, the dynamics of system (7) is near \(\kappa_{1}\)), and it has real eigenvalues and a strong stable manifold \(W^{ss}(p)\); hence, \(p\) corresponds here to the equilibrium \(p_{1}\in R_{1}\) from regions I and IV of the PWS system (8). Moving to parameter point \(A_{2}\) as in Figure 13(b), the equilibrium \(p\) now lies in the transition region where the graph of \(\mathcal{H}_{0.1}(y-x-\eta)\) is steep; moreover, it is an attracting focus with complex conjugate eigenvalues. Finally, at parameter point \(A_{3}\) as in Figure 14(a), the attracting point \(p\) has again real eigenvalues and a strong stable manifold \(W^{ss}(p)\), and now lies in the region of the phase plane with \(\mathcal{H}_{0.1}(y-x-\eta)\) near \(1\) (that is, the dynamics is now near \(\kappa_{2}\)). Hence, \(p\) now corresponds to the equilibrium \(p_{2}\in R_{2}\) Figure 14: Phase portraits at the point \(A_{3}\) at \((\mu,\eta)=(0.1,-0.555)\) and from region \(B\) at \((\mu,\eta)=(0.07,-0.50)\), shown as in Figure 13 and featuring a periodic orbit \(\Gamma\) in panels (b1) and (b2). in either region III or VI of the PWS limit. We conclude that the gradual transition from \(A_{1}\) to \(A_{3}\) within region \(A\) is very reminiscent of that from region I, via region II, to region III of system (8); compare with Figure 4. Region \(B\) is bounded by the supercritical part of the curve H and the SNIPER-part of S, and it is the 'smooth version' of region V of system (8). The phase portrait in Figure 14(b), at the marked parameter point in Figure 12, shows that in region \(B\) there is indeed a stable periodic orbit \(\Gamma\) surrounding the now unstable equilibrium \(p\). Observe that \(\Gamma\) 'lives' in the switching region; that is, it lies on the steep part of the graph of \(\mathcal{H}_{0.1}(y-x-\eta)\). Note further that the periodic orbit \(\Gamma\) bifurcates at the supercritical part of the Hopf bifurcation curve H from the attracting focus \(p\) of the phase portrait at \(A_{2}\) in Figure 13(b). As \(\eta\) is decreased within region \(B\), the periodic orbit \(\Gamma\) grows and develops two segments that lie in the region with \(\mathcal{H}_{0.1}\) near \(0\) and near \(1\), respectively; these segments correspond to the two segments of the periodic orbit \(\Gamma\) in region V of the limiting PWS system (8) in Figure 5(b). Region \(C\) of Figure 12 is bounded by segments of the two branches of S and by the homoclinic Figure 15: Phase portraits in the region \(C\) at \((\mu,\eta)=(0.04,-0.575)\) and region \(D\) at \((\mu,\eta)=(0.01,-0.65)\). Shown in the same manner as in Figure 13, now featuring equilibria \(p_{1},p_{2}\), and \(q\) with the stable manifold \(W^{s}(q)\) and unstable manifold \(W^{u}(q)\). bifurcation curve h\({}_{1}\) (which follows closely a subcritical part of the curve H). Figure 15(a) shows the representative phase portrait at the marked parameter point in Figure 12. There is an attracting equilibrium, labeled \(p_{2}\), with a high value of the transition function \(\mathcal{H}_{0.1}\), as well as a saddle-equilibrium \(q\) and a repelling equilibrium \(p_{1}\) with an intermediate value of \(\mathcal{H}_{0.1}\). Note that \(q\) has the stable manifold \(W^{s}(q)\) and unstable manifold \(W^{u}(q)\), which converges to \(p_{2}\).
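The coexisting equilibria of system (7) can be located by a one-dimensional reduction: at an equilibrium, \(x=1/(1+\kappa)\) and \(y=\mu/\kappa\) with \(\kappa\) evaluated at \(s=y-x-\eta\), so \(s\) must satisfy a scalar self-consistency equation. A minimal sketch, under the same assumptions on the form of (7) and \(\mathcal{H}_{\varepsilon}\) as above:

```python
import numpy as np
from scipy.optimize import brentq

kappa1, kappa2 = 0.1, 1.0

def kappa_of(s, eps):
    return kappa1 + (kappa2 - kappa1) * 0.5 * (1.0 + np.tanh(s / eps))

def phi(s, mu, eta, eps):
    # phi(s) = 0 exactly when (x, y) = (1/(1+kappa), mu/kappa) is an equilibrium with s = y - x - eta
    k = kappa_of(s, eps)
    return mu / k - 1.0 / (1.0 + k) - eta - s

def equilibria(mu, eta, eps, s_range=(-3.0, 3.0), n=2000):
    s = np.linspace(*s_range, n)
    v = phi(s, mu, eta, eps)
    roots = [brentq(phi, a, b, args=(mu, eta, eps))
             for a, b, va, vb in zip(s[:-1], s[1:], v[:-1], v[1:]) if va * vb < 0]
    return [(1.0 / (1.0 + kappa_of(r, eps)), mu / kappa_of(r, eps)) for r in roots]

# region C parameters of Figure 15(a); three coexisting equilibria are expected here
print(equilibria(0.04, -0.575, 0.1))
```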
Region \(C\) is the 'smooth version' of region VII of the limiting PWS system (8) in the following way: \(p_{2}\) of the smooth system (7) corresponds to \(p_{2}\in R_{2}\), and the equilibria \(q\) and \(p_{1}\) correspond to the pseudo-saddle-equilibrium \(q^{-}\) and pseudo-equilibrium \(q^{+}\) on the repelling sliding segment \(\Sigma_{s}^{r}\), respectively; compare with Figure 6(a). Finally, region \(D\) is bounded by the other segments of the two branches of S and the homoclinic bifurcation curve h\({}_{1}\). As the representative phase portrait in Figure 15(b) at the marked parameter point in Figure 12 shows, it is the region of bistability and corresponds to region VIII of the system (8). The attractor \(p_{2}\) in Figure 15(b) is still at a high value of \(\mathcal{H}_{0.1}\), and the saddle-equilibrium \(q\) is unchanged. However, in contrast to region \(C\), the equilibrium \(p_{1}\) is at a lower value of the transition function and, moreover, it is now an attractor. The lower branch of \(W^{u}(q)\) converges to \(p_{1}\) and its upper branch to \(p_{2}\), meaning that the stable manifold \(W^{s}(q)\) forms the boundary between the basins of attraction of \(p_{1}\) and \(p_{2}\); compare with Figure 6(b). ### Partial bifurcation analysis in \((\mu,\eta,\varepsilon)\)-space We now describe how the main elements of the bifurcation diagram of system (7) in the \((\mu,\eta)\)-plane change with the switching-time parameter \(\varepsilon\). Our focus here is on the curves S and H, which meet at the Bogdanov-Takens points BT\({}_{1}\) and BT\({}_{2}\) and effectively delimit the two main regions of interest, namely region \(B\) characterised by stable oscillations and region \(D\) exhibiting bistability. Figure 16 presents the partial three-parameter bifurcation diagram in \((\mu,\eta,\varepsilon)\)-space for \(\varepsilon\in[0,0.16]\) and ranges of \(\mu\) and \(\eta\) as in Figure 3. Specifically, Figure 16 shows the bifurcation diagram of the limiting system (8) for \(\varepsilon=0\) together with the curves S and H of system (7) as computed for 31 equidistant slices of fixed \(\varepsilon>0\). In this way, the corresponding surfaces S and H of saddle-node and Hopf bifurcation are visualised in \((\mu,\eta,\varepsilon)\)-space with a 'see-through effect'. Also shown in Figure 16 is the curve CP of cusp bifurcation, and the curves BT\({}_{1}\) and BT\({}_{2}\) of Bogdanov-Takens bifurcation, along which the surface H ends on the surface S. These codimension-two bifurcation curves were computed directly by numerical continuation in \((\mu,\eta,\varepsilon)\)-space. In particular, this shows that BT\({}_{1}\) and BT\({}_{2}\) form a single curve with a maximum at the point DBT at \(\varepsilon\approx 0.147\), which we identified as a codimension-three degenerate Bogdanov-Takens point of focus type [35, 36]. Additionally, curves GH\({}_{1}\) and GH\({}_{2}\) of generalised Hopf bifurcation are shown in Figure 16; they were found by identifying the corresponding bifurcation points on the curves of Hopf bifurcation in the individual slices for fixed \(\varepsilon\). We observe for increasing \(\varepsilon\) that the curve GH\({}_{1}\) ends at the point DBT. The curve GH\({}_{2}\), on the other hand, terminates where the curves CP and BT\({}_{2}\) intersect at the codimension-three point GBC at \(\varepsilon\approx 0.1208\).
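The defining conditions of the curves S and H that are continued in each \(\varepsilon\)-slice can be checked pointwise from the Jacobian of (7): a saddle-node requires a vanishing determinant, and a Hopf point a vanishing trace with positive determinant. Below is a minimal finite-difference sketch, under the same assumed form of (7) as above; it is only a test-function check, not the continuation setup used with AUTO-07p.

```python
import numpy as np

kappa1, kappa2, eps = 0.1, 1.0, 0.1

def F(z, mu, eta):
    x, y = z
    k = kappa1 + (kappa2 - kappa1) * 0.5 * (1.0 + np.tanh((y - x - eta) / eps))
    return np.array([1.0 - (1.0 + k) * x, mu - k * y])

def trace_det(z, mu, eta, h=1e-6):
    # central-difference Jacobian at an equilibrium z;
    # det ~ 0 flags the saddle-node curve S, tr ~ 0 with det > 0 flags the Hopf curve H
    cols = [(F(z + d, mu, eta) - F(z - d, mu, eta)) / (2.0 * h)
            for d in (np.array([h, 0.0]), np.array([0.0, h]))]
    J = np.column_stack(cols)
    return np.trace(J), np.linalg.det(J)
```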
The codimension-two non-central homoclinic bifurcations N\({}_{1}\) and N\({}_{2}\) (not shown in Figure 16) are similarly found to vanish as \(\varepsilon\) increases, prior to \(\varepsilon\) reaching the value \(\varepsilon\approx 0.147\) of the point DBT. The disappearance of N\({}_{1}\) and N\({}_{2}\) involves a sequence of codimension-three bifurcations, whose analysis is beyond the scope of this paper. We first consider the relevance of the PWS limiting system (8) for the bifurcation diagram of the smooth system (7). While the continuation of Hopf and saddle-node bifurcation of system (7) becomes very challenging for small values of \(\varepsilon\) near 0, we managed to compute the respective curves S and H in the slice at \(\varepsilon=0.005\). As illustrated in Figure 16, this turns out to be sufficient for determining the convergence of S and H to the corresponding non-smooth bifurcation as \(\varepsilon\) approaches 0. Specifically, the lower boundary of the surface S of saddle-node bifurcation in the \((\mu,\eta)\)-plane at \(\varepsilon=0\) is the union of the non-smooth fold boundary equilibrium bifurcation curve segments \(\mathrm{BE}_{1}^{F},\mathrm{BE}_{2}^{F}\) and \(\widehat{\mathrm{BE}}_{2}^{F}\), and the pseudo-saddle-node bifurcation PS. The surface H of Hopf bifurcation has as its boundary at \(\varepsilon=0\) the union of the curve segments \(\widehat{\mathrm{BE}}_{1}^{P}\), \(\widehat{\mathrm{BE}}_{2}^{P}\) and \(\widehat{\mathrm{BE}}_{1}^{P}\) of persistence boundary equilibrium bifurcation, and FU of fused-focus bifurcation. Moreover, the curves \(\mathrm{BT}_{1}\) and \(\mathrm{BT}_{2}\) of Bogdanov-Takens bifurcation converge to the points \(\mathrm{GB}_{1}\) and \(\mathrm{GB}_{2}\) of generalised boundary equilibrium bifurcation, respectively; the curve CP of cusp bifurcation also converges to the point \(\mathrm{GB}_{2}\). Similarly, the curves \(\mathrm{GH}_{1}\) and \(\mathrm{GH}_{2}\) of generalised Hopf bifurcation converge to the points \(\mathrm{FB}_{1}\) and \(\mathrm{FB}_{2}\) of fold-boundary equilibrium bifurcation, respectively. We now consider the influence of increasing the switching-time parameter \(\varepsilon\). Observe from Figure 16 that the surface H of Hopf bifurcation 'ends' at the point DBT at \(\varepsilon\approx 0.147\). Specifically, as \(\varepsilon\) increases towards \(0.147\), the curve H in the \((\mu,\eta)\)-plane shrinks to the point DBT and then disappears. Since all Figure 16: Partial three-parameter bifurcation diagram in \((\mu,\eta,\varepsilon)\)-space of system (7) for \(\varepsilon>0\) and system (8) at \(\varepsilon=0\), with \(\kappa_{1}=0.1\) and \(\kappa_{2}=1.0\). Represented are curves S of saddle-node bifurcation (black) and H of Hopf bifurcation (red) for fixed values of \(\varepsilon>0\), together with the bifurcation diagram for \(\varepsilon=0\) from Figure 3. The diagram also illustrates the curve CP of cusp bifurcation (grey), along with branches \(\mathrm{BT}_{1}\) and \(\mathrm{BT}_{2}\) of Bogdanov-Takens bifurcation (dark purple) that meet at the point DBT. Included are also curves \(\mathrm{GH}_{1}\) and \(\mathrm{GH}_{2}\) of generalised Hopf bifurcation (pink). other curves of codimension-two bifurcations have also disappeared, above DBT one only finds the surface S of saddle-node bifurcation with the curve CP of cusp bifurcation.
Therefore, in any slice for fixed \(\varepsilon>0.147\) the remaining regions are: region \(A\) with a single attracting equilibrium that can take any value of \(\mathcal{H}_{\varepsilon}\), and the bistability region \(D\), where two stable equilibria coexist, one associated with \(\mathcal{H}_{\varepsilon}\) near \(0\) and the other with \(\mathcal{H}_{\varepsilon}\) near \(1\). In particular, both region \(B\) with stable oscillations, and region \(C\), no longer exists for \(\varepsilon>0.147\). Hence, we conclude that the existence of self-sustained oscillations in the (adjusted) Welander model requires sufficiently fast switching between convective and non-convective mixing of surface water with the deep ocean. ## 4 Discussion and outlook We studied the adjusted Welander model (7) with transition function \(\mathcal{H}_{\varepsilon}\) between weak and strong mixing between the warm surface and cold deep ocean as given by (4). This conceptual model in the context of the AMOC describes the evolution of temperature and salinity on the ocean surface in the Labrador and Nordic seas. We performed a bifurcation analysis with advanced tools from (non-smooth) dynamical systems theory, first for the piecewise-smooth limiting case \(\varepsilon=0\) when \(\mathcal{H}_{0}\) is the Heaviside function, and then for the smooth case of \(\mathcal{H}_{\varepsilon}\) with small \(\varepsilon>0\). Specifically, we presented bifurcation diagrams in the \((\mu,\eta)\)-plane of salinity versus temperature flux ratio \(\mu\) and density threshold \(\eta\), where the rates \(\kappa_{1}\) of weak (non-convective) and \(\kappa_{2}\) of strong (convective) mixing were fixed at suitable values. For the PWS model with \(\varepsilon=0\), all curves of codimension-one bifurcations and points of codimension-two bifurcations were determined analytically -- resulting in a complete description of all possible dynamics and the transitions between them. In this way, we identified the respective discontinuity-induced bifurcations, including the continuum of homoclinic orbits investigated in [24], and showed how these are generated or lost as \(\mu\) and \(\eta\) change along different paths. In fact, the bifurcation diagram in the \((\mu,\eta)\)-plane we presented for this case is complete and representative: it does not change in a qualitative way when a different choice is made for \(0<\kappa_{1}<\kappa_{2}\), as the expressions we derived show. For the smooth case, we computed the corresponding bifurcation diagram in the \((\mu,\eta)\)-plane for \(\varepsilon=0.1\) by means of numerical continuation. While the bifurcation diagram is complete, we concentrated here on four main regions of dynamics. In particular, we identified the region with oscillations found in Welander's original model [20], as well as a region of bistability that resembles previously described dynamics in a hierarchy of AMOC models [8]. We also performed a partial bifurcation analysis in \((\mu,\eta,\varepsilon)\)-space for small values of \(\varepsilon\), which focused on surfaces of Hopf and saddle-node bifurcations that (effectively) bound the main regions. In this way, we showed how the bifurcation diagram for \(\varepsilon>0\) is 'connected' to that of the PWS limit. Here, the switching time \(\varepsilon\) plays the role of a parameter that desingularises the limiting Heaviside switching function for \(\varepsilon=0\). 
A direction for future mathematical work would be to use tools from geometric singular perturbation theory [37] to study via slow-fast regularisation [38] how complicated smooth dynamics arises from the piecewise-linear limit. In this context, we conjecture that the family of homoclinic orbits along the segment \(\widehat{\mathrm{BE}}_{1}^{P}\) will generate a singular Hopf bifurcation with subsequent canard explosion to the Welander-type large periodic orbit -- with the maximally sliding orbit \(\gamma_{1}^{*}\) being the limit of a maximal canard [39]. Returning to the context of the AMOC, we found for large influxes of freshwater (smaller \(\mu\)) that mixing is dominantly non-convective, with the system approaching a stable equilibrium associated with \(\kappa_{1}\). Conversely, the mixing is dominated by convection for large influxes of salinity (larger \(\mu\)), with convergence to a stable equilibrium associated with \(\kappa_{2}\). We found that the intermediate region of bistability in the AMOC strength exists throughout and is rather independent of the switching time parameter \(\varepsilon\). In contrast, the region of oscillations, where the AMOC strength changes periodically between strong and weak, does depend on \(\varepsilon\). In fact, oscillations are present only for sufficiently small \(\varepsilon\): when the switching between the two mixing regimes becomes too slow, oscillations are no longer observed. More generally, the investigation of a conceptual model, such as system (7), is a tool to uncover and highlight possible types of dynamics one may observe in the AMOC. Specifically, we considered here the issue of deep ocean mixing in the North Atlantic in isolation from the larger climate system. Of course, there are many other climate processes that influence the overall state of the AMOC, and the analysis presented should be seen as forming a basis for the investigation of possible extensions of the model. There are several interesting directions for future research in this regard, all with their own mathematical challenges. One option is to consider additional boxes in the model, such as an Equatorial box as in Stommel's original setup [18], or even to model the two deep-water convection sites in the Labrador sea and the Nordic seas by separate boxes as in [21]; indeed, such models are of higher dimensions, which makes their bifurcation analysis more involved. Another direction is to incorporate seasonal changes, for example, by periodically forcing the freshwater influx parameter \(\mu\), which leads to a non-autonomous model. Finally, the AMOC displays a number of feedback loops, such as the salt-advection into the subpolar North Atlantic. Incorporating feedback loops leads to the study of conceptual climate models in the form of delay differential equations, the study of which is possible but challenging because they have an infinite-dimensional phase space [40]. ## Acknowledgements This work was supported in part by Royal Society Te Aparangi Marsden Fund grant #19-UOA-223. We thank Henk Dijkstra for many helpful discussions, especially regarding the form of the adjusted Welander model we study here. ## Appendix A Proofs of Propositions 1-4 We now state and then verify the required properties for the specific case of system (8), with reference to the literature on planar Filippov systems where applicable. For in-depth background on general Filippov system theory and the associated formalism see [30, 31].
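The Lie-derivative computations used repeatedly in the proofs below can be cross-checked symbolically. A minimal sketch, again assuming right-hand sides \(f_{i}=(1-(1+\kappa_{i})x,\;\mu-\kappa_{i}y)\) consistent with (29) and (30) (system (8) itself is not reproduced in this excerpt):

```python
import sympy as sp

x, y, mu, eta, kappa = sp.symbols('x y mu eta kappa', real=True)

f = sp.Matrix([1 - (1 + kappa) * x, mu - kappa * y])   # assumed form of f_i
n = sp.Matrix([-1, 1])                                  # normal of Sigma = {y - x - eta = 0}

lie1 = (f.T * n)[0]                                     # first Lie derivative of g along f_i
print(sp.simplify(lie1.subs(y, x + eta)))               # gives x - 1 + mu - eta*kappa, as in (29)

grad = sp.Matrix([sp.diff(lie1, x), sp.diff(lie1, y)])
lie2 = sp.expand((f.T * grad)[0])                       # second Lie derivative used in the visibility test
print(lie2)                                             # equals (1+kappa)*(1-(1+kappa)*x) - kappa*mu + kappa**2*y
```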
**Proof of Proposition 1 (Sliding segments and tangency points).** The linear switching manifold \(\Sigma\) is given as the zero set of the switching function \(g(x,y)=y-x-\eta\), and it has the constant normal vector \(\mathbf{n}=\binom{-1}{1}\). A tangency point \(F_{i}\) occurs when \((f_{i}\cdot\mathbf{n})(x,y)=0\) (more generally, when the first Lie derivative of \(g\) with respect to \(f_{i}\) is zero [30]). With \(y=x+\eta\) on \(\Sigma\) we obtain \[(f_{i}\cdot\mathbf{n})(x,x+\eta)=x-1+\mu-\eta\kappa_{i}, \tag{29}\] which yields (13). The visibility of the tangency point \(F_{i}\), when it is quadratic, is determined by the curvature of the orbit of \(f_{i}\) from \(F_{i}\) relative to \(\Sigma\). This is measured by the second Lie derivative of \(g\) with respect to \(f_{i}\) [30], which for system (8) is given by \[(f_{i}\cdot\nabla(f_{i}\cdot\mathbf{n}))(x,y)=(1+\kappa_{i})(1-(1+\kappa_{i})x)-\kappa_{i}\mu+\kappa_{i}^{2}y,\] where \(\nabla\) is the gradient. Evaluating at \(F_{i}\) gives \[(f_{i}\cdot\nabla(f_{i}\cdot\mathbf{n}))(F_{i})=\mu+\kappa_{i}(\mu-\eta-1)-\eta\kappa_{i}^{2},\] which yields the genericity condition (14), and the visibility conditions (15) and (16). The tangency points \(F_{1}\) and \(F_{2}\) bound \(\Sigma_{s}\) and (29) implies \[(f_{1}\cdot\mathbf{n})(x,x+\eta)>0\text{ for }x>F_{1}\quad\text{and}\quad(f_{2}\cdot\mathbf{n})(x,x+\eta)<0\text{ for }x<F_{2}.\] From (13) we know that \(F_{1}<F_{2}\) for \(\eta>0\), while \(F_{2}<F_{1}\) for \(\eta<0\), which yields (17) and (18). **Proof of Proposition 2 (Equilibria, sliding vector field and pseudo-equilibria).** 1. Expression (19) immediately follows from setting \(f_{i}(x,y)=0\), and conditions (20) and (21) are immediate consequences of the definition of \(R_{i}\) in (11) and (12), respectively. The Jacobian \[J_{f_{i}}(x,y)=\begin{bmatrix}-(1+\kappa_{i})&0\\ 0&-\kappa_{i}\end{bmatrix}\] (30) of \(f_{i}\) has two negative real eigenvalues \(\lambda_{ss}=-(1+\kappa_{i})\) and \(\lambda_{s}=-\kappa_{i}\), which implies that \(p_{i}\) is a stable node. Since \(\lambda_{ss}\) has eigenvector \(\binom{1}{0}\), the statement on \(W^{ss}_{loc}(p_{i})\) follows. 2. The sliding vector field on the line \(\Sigma_{s}\) is given by \[f_{s}(x,x+\eta)=((1-\lambda(x))f_{1}+\lambda(x)f_{2})\,(x,x+\eta),\] (31) where \(\lambda(x)\in[0,1]\) is chosen such that the vector \(f_{s}\) is in the (constant) direction \(\binom{1}{1}\) of \(\Sigma_{s}\). This means that both components of the vector \(f_{s}(x,x+\eta)\) are equal, which is the case for \[\lambda(x)=\frac{x+\mu-\kappa_{2}\eta-1}{\eta(\kappa_{1}-\kappa_{2})}.\] Insertion into (31) and simplification yields \(f_{s}\) as given in (22). 3. Setting \(f_{s}(x,x+\eta)=0\) means solving the quadratic equation \(Q(x)=0\) with \[Q(x):=\mu+(\mu-\kappa_{2}\eta-1)x+x^{2}\] in (22), which gives the expressions for \(q^{\pm}\) in (23). The stated properties follow from evaluating \(\frac{dQ(x)}{dx}\) at the \(x\)-values of \(q^{-}\) and \(q^{+}\), respectively. **Proof of Proposition 3 (Codimension-one bifurcations).** 1. The equilibrium \(p_{i}\) from Proposition 2 collides with the switching manifold \(\Sigma\) when \[g(p_{i})=\frac{\mu}{\kappa_{i}}-\frac{1}{\kappa_{i}+1}-\eta=0.\] Solving this for \(\eta\) gives the stated expressions for \(\mathrm{BE}_{1}\) and \(\mathrm{BE}_{2}\).
According to (13) the respective boundary equilibrium bifurcation happens at the tangency point \(F_{i}=p_{i}\), and (23) shows that this involves the (dis)appearance of an admissible pseudo-equilibrium through \(F_{i}\). Simultaneously, there is a change in visibility of \(F_{i}\) [41, 31], as can be seen from (15) and (16). It follows that the visibility of the tangency points \(F_{i}\) and the presence of admissible pseudo-equilibria \(q^{\pm}\) in the different regions of the \((\mu,\eta)\)-plane are as stated. See Proposition 4 for details regarding genericity conditions and different manifestations of the boundary equilibrium bifurcations along the curves \(\mathrm{BE}_{1}\) and \(\mathrm{BE}_{2}\). 2. For \(\eta=0\) we have \(F^{*}=F_{1}=F_{2}\) according to (13), which is the defining property of the fold-fold bifurcation FF; for genericity conditions and resulting different manifestations see Proposition 4. 3. A saddle-node bifurcation of pseudo-equilibria occurs when the square root in (23) is zero, which gives \[\eta=-(\mu+1)+2\sqrt{\mu}\] and, hence, PS as stated, and also \(q^{*}\) as in (24). The saddle-node is generic since \(\frac{d^{2}Q(x)}{dx^{2}}=2\neq 0\). Since \((\mu+1)^{2}-4\mu=(\mu-1)^{2}>0\), we know that \(\eta<0\) along the curve PS. Hence, \(q^{*}\) lies on \(\Sigma_{s}^{r}\) with \(F_{2}<q^{*}<F_{1}\), and the stated bounds for \(\mu\) follow. **Proof of Proposition 4 (Codimension-two bifurcations).** 1. The expressions for \(\mathrm{FB}_{i}\) follow immediately from Proposition 3 by requiring that the curve FF intersects the curves \(\mathrm{BE}_{1}\) and \(\mathrm{BE}_{2}\), respectively, yielding \(p_{i}=F^{*}\). Note that these curves intersect transversely at \(\mathrm{FB}_{i}\), and the genericity conditions for FF and \(\mathrm{BE}_{i}\) are satisfied, which means that the fold-boundary equilibrium bifurcations are generic; see [42]. The points \(\mathrm{FB}_{i}\) divide the fold-fold curve FF into segments \(\mathrm{FF}_{i}\), where \(f_{1}(F^{*})\) and \(f_{2}(F^{*})\) are collinear, and a segment FU where they are not. With \[f_{i}\cdot\nabla(f_{i}\cdot\mathbf{n})(F^{*})=\mu+(\mu-1)\kappa_{i},\] we conclude that along \(\mathrm{FF}_{i}\) the fold-fold bifurcation is for a visible and an invisible quadratic tangency, which is exactly the case \(\mathrm{VI}_{1}\) described in [31]. It also follows that along FU the fold-fold bifurcation is for two invisible quadratic tangencies, and with nearby flows in opposite directions; this identifies this case as a fused-focus bifurcation according to [31]. The bifurcating (crossing) periodic orbit \(\Gamma\) is stable as demonstrated by the phase portraits presented in Section 2.2. We remark that the stability of \(\Gamma\) can be determined by considering the (local) return map around \(F^{*}\) [30, 31], but this is beyond the scope of this paper. The point \(\mathrm{FB}_{i}\) also divides \(\mathrm{BE}_{i}\) locally as stated; this follows from the change of stability of the sliding segment and the associated change from \(F_{1}<F_{2}\) for \(\eta>0\) to \(F_{2}<F_{1}\) for \(\eta<0\); see Proposition 1. This is illustrated and discussed in depth in Section 2.4. 2. At the point BB of double-boundary equilibrium bifurcation there are boundary equilibrium bifurcations simultaneously at \(p_{1}\neq p_{2}\), and its location is readily found by equating expressions in Proposition 3 for the curves \(\mathrm{BE}_{1}\) and \(\mathrm{BE}_{2}\), which intersect transversally.
It follows that the division of the curves \(\mathrm{BE}_{i}\) is as stated; this is illustrated and discussed in Section 2.4. 3. The point \(\mathrm{GB}_{i}\) is found by equating the expressions for the curves \(\mathrm{BE}_{i}\) and PS from Proposition 3. Whether the boundary equilibrium bifurcation \(\mathrm{BE}_{i}\) is of non-smooth fold or persistence type depends on the sign of the higher-order term [42] \[(\mathbf{n}\cdot(J_{f_{j}})^{-1}\cdot f_{i})(p_{i})=\frac{1}{\kappa_{j}}\left(\frac{\kappa_{i}^{2}}{(\kappa_{i}+1)^{2}}-\mu\right).\] Here \(j\neq i\in\{1,2\}\) is the respective other index and \(J_{f_{j}}\) is the Jacobian from (30). Hence, a sign change for the curve \(\mathrm{BE}_{i}\) happens at the point \(\mathrm{GB}_{i}\); specifically, \(\mathrm{BE}_{i}\) is of persistence type for \(\mu>\frac{\kappa_{i}^{2}}{(\kappa_{i}+1)^{2}}\) and of non-smooth fold type for \(\mu<\frac{\kappa_{i}^{2}}{(\kappa_{i}+1)^{2}}\). ## Appendix B Phase portraits at codimension-two bifurcations We present in Figures 17 and 18 phase portraits at the points \(\mathrm{FB}_{1}\), \(\mathrm{FB}_{2}\), \(\mathrm{BB}\), \(\mathrm{GB}_{1}\) and \(\mathrm{GB}_{2}\) from Proposition 4. This illustrates how these codimension-two bifurcation points give the nearby codimension-one boundary equilibrium bifurcations \(\mathrm{BE}_{i}\) and fold-fold bifurcations FF their different flavours. Figure 17 presents phase portraits at the fold-boundary equilibrium bifurcation points \(\mathrm{FB}_{1}\) and \(\mathrm{FB}_{2}\). The phase portrait at \(\mathrm{FB}_{1}\) is shown in panel (a). It features an attracting boundary-node \(p_{1}\) that is simultaneously a singular tangency point, which is invisible for \(f_{2}\). All orbits converge to \(p_{1}\) along the weak eigendirection in \(R_{1}\). The equilibrium \(p_{2}\) is in \(R_{1}\) and non-admissible. The phase portrait at \(\mathrm{FB}_{2}\) is shown in panel (b). The equilibrium \(p_{2}\) is now the attracting boundary-node and an invisible tangency point for \(f_{1}\) that attracts all orbits along the weak eigendirection in \(R_{2}\). In both cases, the strong stable manifold \(W^{ss}(p_{i})\) of \(p_{i}\) is the corresponding arriving orbit in \(R_{i}\). Figure 18 presents phase portraits at the remaining codimension-two points. The phase portrait at the double boundary equilibrium bifurcation \(\mathrm{BB}\) at the intersection of the \(\mathrm{BE}_{1}\) and \(\mathrm{BE}_{2}\) curves is shown in panel (a). It features a repelling sliding segment \(\Sigma_{s}^{r}\) bounded on the left by the attracting boundary node \(p_{2}\) and on the right by the attracting boundary node \(p_{1}\). The pseudo-equilibria \(q^{-}\) and \(q^{+}\) are both on \(\Sigma_{c}\) and non-admissible (and not shown): the pseudo-equilibrium \(q^{-}\) is at the left hand boundary \(p_{2}\), and \(q^{+}\) is at the right hand boundary \(p_{1}\). There is a heteroclinic connection between \(p_{1}\) and \(p_{2}\) composed of orbit segments in \(R_{1}\) and \(R_{2}\), respectively. Moreover, a (sliding) heteroclinic connection between the equilibria is composed of the sliding orbit from \(p_{1}\) to \(p_{2}\). If we interpret the departing orbits from \(\Sigma_{s}^{r}\) as having a sliding component, then there is a continuum of homoclinic connections to \(p_{1}\) in \(R_{1}\) composed of departing orbits from \(\Sigma_{s}^{r}\) and a corresponding sliding component.
Note that the boundary-node \(p_{1}\) has a strong stable manifold \(W^{ss}(p_{1})\) composed of a horizontal component in \(R_{1}\). Similarly, the boundary-node \(p_{2}\) also has a strong stable manifold \(W^{ss}(p_{2})\), composed of a horizontal component in \(R_{2}\). Overall, the boundary equilibrium bifurcations occurring simultaneously lead to a pseudo-equilibrium emerging on the sliding segment along Figure 17: Representative phase portrait at codimension-two points \(\mathrm{FB}_{1}\) and \(\mathrm{FB}_{2}\). Panel (a) for \((\mu,\eta)=(0.0909,0)\) at \(\mathrm{FB}_{1}\) shows the globally stable boundary-node \(p_{1}\) with strong stable manifold \(W^{ss}(p_{1})\). Panel (b) for \((\mu,\eta)=(0.5,0)\) at \(\mathrm{FB}_{2}\) shows the globally stable boundary-node \(p_{2}\) with the strong stable manifold \(W^{ss}(p_{2})\). and \(\widehat{\mathrm{BE}}_{2}^{F}\); see Section 2.4. The phase portrait at the generalised boundary equilibrium bifurcation GB\({}_{1}\) is shown in panel (b). It features a repelling sliding segment \(\Sigma_{s}^{r}\) bounded on the left by the visible quadratic tangency point \(F_{2}\) and on the right by the attracting generalised boundary-node \(p_{1}\) (shown in magenta). The pseudo-equilibria \(q^{-}\) and \(q^{+}\) undergo a pseudo-saddle-node bifurcation at the right hand boundary-node \(p_{1}\). The phase portrait at the generalised boundary equilibrium bifurcation GB\({}_{2}\) is shown in panel (c1) with a magnification near the sliding segment in panel (c2). The repelling sliding segment \(\Sigma_{s}^{r}\) is bounded on the left by generalised boundary node \(p_{2}\) (shown in magenta) and on the right by the invisible quadratic tangency point \(F_{1}\). The homoclinic connections \(\gamma_{2}^{*}\) and \(\Gamma_{2}^{*}\) are the same as described in Section 2.4; see also Figure 9(b). The departing orbits from \(\Sigma_{s}^{r}\), together with the corresponding stable manifolds \(W^{ss}(p_{1})\) and \(W^{ss}(p_{2})\), are shown in green. These departing orbits, together with the respective sliding component from \(p_{2}\), form a continuum of homoclinic connections to \(p_{2}\). In particular, within \(\gamma_{2}^{*}\) there are homoclinic connections composed of a departing orbit from \(\Sigma_{s}^{r}\) in \(R_{2}\) and the corresponding sliding orbit. There are also homoclinic connections in between \(\gamma_{2}^{*}\) and \(\Gamma_{2}^{*}\), which feature departing orbits from \(\Sigma_{s}^{r}\) in \(R_{1}\) that cross \(\Sigma\) into \(R_{2}\).
2306.17423
Separating the superradiant emission from the Hawking radiation from a rotating black hole
Emission of particles created in the background of a rotating black hole can be greatly amplified, taking away rotational energy of the black hole. This amplification affects both particles created near the horizon (due to the Hawking effect) and particles created near the potential barrier far from the horizon. Only the latter effect is called superradiance in the strict sense. We explicitly calculate the superradiant emission for scalar particles and compare it with the total scalar particle emission (Hawking radiation plus superradiance) to clarify some confusion in the literature. We clearly show that these two emissions are not the same. In particular, superradiance persists even for extremal black holes whose Hawking temperature is zero.
De-Chang Dai, Dejan Stojkovic
2023-06-30T06:36:04Z
http://arxiv.org/abs/2306.17423v2
# Separating the superradiant emission from the Hawking radiation from a rotating black hole ###### Abstract Emission of particles created in the background of a rotating black hole can be greatly amplified, taking away rotational energy of the black hole. This amplification affects both particles created near the horizon (due to the Hawking effect) and particles created near the potential barrier far from the horizon. Only the latter effect is called superradiance in the strict sense. We explicitly calculate the superradiant emission for scalar particles and compare it with the total scalar particle emission (Hawking radiation plus superradiance) to clarify some confusion in the literature. We clearly show that these two emissions are not the same. In particular, superradiance persists even for extremal black holes whose Hawking temperature is zero. ## I Superradiance The notion of superradiance, or superradiant emission of particles, has been used in a wide range of situations in the literature. The term "superradiance" was introduced by R.H. Dicke in 1954 [1], describing an effect in which disordered energy is converted into coherent energy. In classical physics, superradiance is a phenomenon in which the amplitude of an outgoing wave after reflection is greater than the amplitude of the ingoing wave [2]. This phenomenon can happen in the background of a rotating black hole [3], which contains an ergosphere, i.e. the region between the infinite redshift surface and the event horizon. In such a background, an incident wave can take away some of the rotational energy of the black hole and get amplified after reflection, thus effectively yielding a reflection coefficient greater than one (i.e. a negative absorption coefficient). In the context of quantum Hawking radiation from a black hole, it was noticed in numerical studies that spontaneous emission from a black hole can also get amplified taking away rotational energy of the black hole [4]. Calculations of the black hole greybody factors indicate that this amplification is very much spin dependent, with emission of higher spin particles strongly favored. In [5] an analytic explanation of this phenomenon was given in terms of the spin-spin interaction between the spin of the rotating black hole and the spin of the emitted particle. While the spin dependent amplification of Hawking radiation is also often called superradiance in the spirit of Dicke's definition in [1], it is different from the original superradiance in [2] or [3], which crucially relies on the negative absorption coefficient. For example, the superradiance as defined in [2] or [3] is not possible for fermions [3; 6]. If the incident wave is made of fermions, then the reflected wave cannot get amplified due to the Pauli exclusion principle, since all the available states are already occupied. In this paper, for clarity, we will call this effect the superradiance and separate it from the Hawking effect. The crucial difference is that the Hawking effect happens in the presence of the horizon, while the superradiance does not need the horizon. Superradiant emission is simply the effect of particle creation in scattering from the potential barrier. Since the black hole contains both the horizon and the potential barrier outside the horizon, the total radiation from the rotating black hole will include both the Hawking effect and superradiance.
The effects of superradiance in the context of black holes have been extensively explored in the literature (for a comprehensive review see [7]). However, to the best of our knowledge, it appears that the superradiance has never been explicitly separated from the Hawking radiation in concrete calculations. Therefore, in this paper we will explicitly calculate the particle production due to the potential barrier away from the horizon of a rotating black hole and compare it with the total radiation. Among other things, it will become clear that the superradiance exists even when the Hawking temperature drops to zero (i.e. for an extremal black hole). Thus, treating a black hole as a black body emitter with a finite temperature \(T\), and thus intensity of radiation proportional to \(T^{4}\), is a significant oversimplification. This fact can potentially have some implications even for the information loss paradox. Since superradiant particles are created at the barrier outside of the horizon, they should not affect the horizon and singularity physics. However, if one takes the whole process of the black hole creation and evaporation into account, the potential barrier outside of the horizon still retains some information about the gravitational process that created the black hole. This implies that superradiance is relevant to black hole information [8]. This is especially important in the context of the black hole scalar hair. We start with the metric for a rotating black hole. The geometry of a rotating black hole is described by the Kerr metric in Boyer-Lindquist coordinates \[ds^{2}=-\Big{(}1-\frac{2Mr}{\Sigma}\Big{)}dt^{2}-\frac{4Mra\sin^{2}\theta}{\Sigma}dtd\phi+\frac{\Sigma}{\Delta}dr^{2}+\Sigma d\theta^{2}+\Big{(}r^{2}+a^{2}+\frac{2Mra^{2}\sin^{2}\theta}{\Sigma}\Big{)}\sin^{2}\theta d\phi^{2}, \tag{1}\] \[\Delta=r^{2}-2Mr+a^{2}, \tag{2}\] \[\Sigma=r^{2}+a^{2}\cos^{2}\theta, \tag{3}\] where \(a\) is the black hole rotation parameter, while \(M\) is the mass of the black hole. We now place a scalar field in this background. We will follow Frolov's book [11] on black hole physics. There, a complete classification of different types of bases in the black hole Penrose diagram was given (see Fig. 1). In this setup one can separate the Hawking effect from superradiance in a physically and mathematically clear way. We thus decompose a scalar field \(\psi\) in the spherically symmetric coordinates \[\psi_{l,m}=e^{-i\omega t}R_{l,m}(r,\omega)\frac{S_{l,m}(\theta,\omega)e^{im\phi}}{\sqrt{2\pi}}, \tag{4}\] where \(l\) and \(m\) are the angular and magnetic quantum numbers, respectively. The equation of motion can be separated into two main equations, \[\frac{d}{dr}\Big{(}\Delta\frac{dR_{l,m}}{dr}\Big{)}+\Big{(}\frac{K^{2}}{\Delta}-\lambda\Big{)}R_{l,m}=0, \tag{5}\] \[\frac{1}{\sin\theta}\frac{d}{d\theta}\Big{(}\sin\theta\frac{dS_{l,m}}{d\theta}\Big{)}+\Big{(}a^{2}\omega^{2}\cos^{2}\theta-\frac{m^{2}}{\sin^{2}\theta}+E\Big{)}S_{l,m}=0, \tag{6}\] \[K=(r^{2}+a^{2})\omega-am, \tag{7}\] \[\lambda=E+a^{2}\omega^{2}-2am\omega, \tag{8}\] where \(E\) is the eigenvalue of the angular equation, Eq. (6), while \(\lambda\) is the corresponding eigenvalue of the radial equation, Eq. (5). The radial equation can be simplified by introducing a new variable \(\chi\), \[\chi=(r^{2}+a^{2})^{1/2}R_{l,m}. \tag{9}\]
The radial part becomes \[\Big{(}\frac{d^{2}}{dr_{*}^{2}}+\frac{K^{2}-\lambda\Delta}{(r^{2}+a^{2})^{2}}-G^{2}-\frac{dG}{dr_{*}}\Big{)}\chi=0, \tag{10}\] where \(r_{*}\) is the tortoise coordinate \[dr_{*}=\frac{r^{2}+a^{2}}{\Delta}dr, \tag{11}\] and \[G=\frac{r\Delta}{(r^{2}+a^{2})^{2}}. \tag{12}\] The asymptotic solution near infinity is \[\chi\sim e^{\pm i\omega r_{*}},\quad r\rightarrow\infty. \tag{13}\] The asymptotic solution near the horizon is \[\chi\sim e^{\pm i(\omega-m\Omega_{H})r_{*}},\quad r\to r_{+}, \tag{14}\] where \(\Omega_{H}=a/(2Mr_{+})\) and \(r_{+}\) is the radius of the black hole outer horizon. In the Penrose diagram shown in fig. 1, an incident wave coming from past infinity, \(I^{-}\), is scattered by the potential. Part of the wave is reflected to infinity, while part of it will penetrate the barrier and fall into the black hole horizon. \[\chi=\begin{cases}e^{-i\omega r_{*}}+\tilde{R}e^{i\omega r_{*}}&,\,\text{for }r\rightarrow\infty\\ \tilde{T}e^{\pm i(\omega-m\Omega_{H})r_{*}}&,\,\text{for }r\to r_{+}.\end{cases} \tag{15}\] The radial equation, Eq. (10), is a real second order ordinary differential equation which satisfies the Wronskian relation. From the Wronskian relation, one can prove that \[(1-\frac{m\Omega_{H}}{\omega})|\tilde{T}|^{2}=1-|\tilde{R}|^{2}. \tag{16}\] From here we see that if \(\omega<m\Omega_{H}\), then \(|\tilde{R}|^{2}>1\). This means that the reflected wave has a greater amplitude than the initial wave, which in turn implies that particles are created during the scattering. These new particles are not directly created by the black hole horizon, and we focus our attention on this process. ## II Particle creation by the superradiance mechanism As we already explained, the superradiance mechanism operates outside the horizon, represented by the line labeled \(H^{+}\) in fig. 1. So all the relevant events are in the right square in fig. 1. Figure 1: The Penrose-Carter diagram: The space outside the horizon is presented in the right square. The upper triangle represents the space inside the future horizon. The black hole radiation due to the Hawking effect involves these two regions. On the other hand, only the right square is involved in particle creation by the superradiance mechanism. The thin dashed line represents the potential barrier that induces the superradiance. Four types of bases (_down_, _up_, _in_ and _out_) are involved in the process. Five bases (including \(dn\)) are involved in the full black hole radiation [11]. In contrast, Hawking radiation is induced by the black hole horizon. To study particle creation by the superradiance mechanism we have to identify the basis in which we decompose the fields, and the vacuum state of the field. To describe a vacuum state of a field, we need at least two bases. For our purpose, we define four possible bases [11]. The first is the \(in\)-coming mode. It represents a wave which goes from past null infinity, \(I^{-}\), to the black hole, \[\chi^{in}_{J}\sim\frac{1}{\sqrt{\omega}}\exp(-i\omega r_{*}). \tag{17}\] The second one is the \(out\)-going mode. It represents a wave which propagates from the black hole to future null infinity, \(I^{+}\), \[\chi^{out}_{J}\sim\frac{1}{\sqrt{\omega}}\exp(i\omega r_{*}). \tag{18}\] The third one is the \(down\) mode. It represents a wave which goes into the future horizon, \(H^{+}\), \[\chi^{down}_{J}\sim\frac{1}{\sqrt{|\omega-m\Omega_{H}|}}\exp(-i(\omega-m\Omega_{H})r_{*}). \tag{19}\] The fourth one is the \(up\) mode.
It represents a wave which goes away from the past horizon, \(H^{-}\), \[\chi^{up}_{J}\sim\frac{1}{\sqrt{|\omega-m\Omega_{H}|}}\exp(i(\omega-m\Omega_{H})r_{*}). \tag{20}\] Since a field decomposition requires two distinct bases, we can decompose \(\chi\) in two different ways \[\hat{\chi}=\sum_{J}\hat{a}^{in}_{J}\chi^{in}_{J}+\hat{a}^{up}_{J}\tilde{\chi}^{up}_{J}+h.c. \tag{21}\] \[=\sum_{J}\hat{b}^{out}_{J}\chi^{out}_{J}+\hat{b}^{down}_{J}\tilde{\chi}^{down}_{J}+h.c. \tag{22}\] where \(J=\{\omega,l,m\}\), while \(h.c.\) stands for the hermitian conjugate terms. We also have \[\tilde{\chi}^{\alpha}_{J}=\chi^{\alpha}_{J},\,\text{if}\,\,\omega-m\Omega_{H}>0 \tag{23}\] \[=\chi^{\alpha*}_{J},\,\text{if}\,\,\omega-m\Omega_{H}<0, \tag{24}\] where \(\alpha\) can be \(up\) or \(down\). The two types of vacuum corresponding to \(\hat{a}^{\alpha}_{J}\) and \(\hat{b}^{\alpha}_{J}\) are \[\hat{a}^{\alpha}_{J}\left|in;0\right>=0 \tag{25}\] \[\hat{b}^{\alpha}_{J}\left|out;0\right>=0. \tag{26}\] The \(in\) mode is expressed in terms of the bases in the past, while the \(out\) mode is expressed in terms of the bases in the future. Consider now a field which starts from vacuum in the far past, and after evolving is seen in terms of the \(out\) bases \[\chi^{in}_{J}\rightarrow R_{J}\chi^{out}_{J}+T_{J}\chi^{down}_{J} \tag{27}\] \[\chi^{up}_{J}\rightarrow t_{J}\chi^{out}_{J}+r_{J}\chi^{down}_{J}. \tag{28}\] If we compare eqs. (13) and (14) with eqs. (17) to (20), we see that there are extra normalization factors in the latter four equations. This will lead to extra factors in the transmission coefficients. Therefore, \(T_{J}\) and \(\tilde{T}\) (eq. 15) are related by \(T_{J}=\sqrt{|1-m\Omega_{H}/\omega|}\tilde{T}\). The creation and annihilation operators are related according to the relationship between the modes. For \(\omega-m\Omega_{H}>0\), we have \[\hat{b}^{out}_{J}=R_{J}\hat{a}^{in}_{J}+t_{J}\hat{a}^{up}_{J}. \tag{29}\] In this case, there is no particle creation since there is no mixing of the creation and annihilation operators. For \(\omega-m\Omega_{H}<0\), we have \[\hat{b}^{out}_{J}=R_{J}\hat{a}^{in}_{J}+t_{J}\hat{a}^{up\dagger}_{J}. \tag{30}\] In this case there is particle creation because of the mixing of the creation and annihilation operators. Thus, \(\omega-m\Omega_{H}<0\) is the necessary condition for superradiance. From the commutation relations we can get \[|t_{J}|^{2}=|T_{J}|^{2}. \tag{31}\] From here we can calculate the particle creation number due to the superradiance effect as \[n_{J}=\langle in,0|\,\hat{b}^{out\dagger}_{J}\hat{b}^{out}_{J}\,|in,0\rangle=|t_{J}|^{2},\,\text{if}\,\,\omega-m\Omega_{H}<0. \tag{32}\] To compare the superradiance particle creation with the Hawking effect and demonstrate their difference, we calculate the total particle creation number (Hawking effect plus superradiance) characterized by the transmission coefficient \(|T_{J}|^{2}\) [6] \[n_{J}^{T}=\frac{\text{sign}(1-\frac{m\Omega_{H}}{\omega})|T_{J}|^{2}}{\exp\bigl{(}\frac{\omega-m\Omega_{H}}{T}\bigr{)}-1}, \tag{33}\] where \(T\) is the black hole temperature. We can immediately see the fundamental difference between the superradiance and Hawking effect. For example, for an extremal black hole the Hawking temperature goes to zero, \(T\to 0\). If \(\omega>m\Omega_{H}\), which is outside of the superradiant regime, \(n_{J}^{T}=0\) since both the Hawking effect and superradiance are absent. However, if \(\omega<m\Omega_{H}\), we get \(n_{J}=n_{J}^{T}\).
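As a quick numerical check of Eqs. (16), (32) and (33), the following minimal sketch (in Python, with the transmission coefficient \(|T_{J}|^{2}\) and all other numbers made up for illustration rather than computed from Eq. (10)) makes the amplified reflection and the extremal limit explicit:

```python
import numpy as np

def reflection_sq(omega, m, Omega_H, T_sq):
    """|R~|^2 from the Wronskian relation, Eq. (16); exceeds 1 when omega < m*Omega_H."""
    return 1.0 - (1.0 - m * Omega_H / omega) * T_sq

def n_superradiant(omega, m, Omega_H, T_sq):
    """Occupation number created by the potential barrier alone, Eq. (32)."""
    return T_sq if omega < m * Omega_H else 0.0

def n_total(omega, m, Omega_H, T_sq, T_H):
    """Total occupation number (Hawking effect plus superradiance), Eq. (33)."""
    x = omega - m * Omega_H
    if T_H == 0.0:  # extremal limit: only the superradiant modes contribute
        return T_sq if x < 0.0 else 0.0
    return np.sign(1.0 - m * Omega_H / omega) * T_sq / np.expm1(x / T_H)

# Toy numbers for a superradiant mode (omega < m*Omega_H) of an extremal hole:
omega, m, Omega_H, T_sq = 0.3, 1, 0.5, 0.05
print(reflection_sq(omega, m, Omega_H, T_sq))     # > 1: amplified reflection
print(n_superradiant(omega, m, Omega_H, T_sq))    # 0.05
print(n_total(omega, m, Omega_H, T_sq, T_H=0.0))  # equals n_superradiant as T -> 0
```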
In that case the Hawking effect is still absent (the Hawking temperature is still zero); however, the superradiance is present and it is the only contribution to the total radiation from a black hole. As an illustration, we will now compare the power spectrum, \(\frac{dE}{dtd\omega}\), and power, \(\frac{dE}{dt}\), between the superradiance and total radiation. The power emitted in particles generated by the superradiance mechanism is \[P_{s}\equiv\frac{dE_{s}}{dt}=\frac{1}{2\pi}\int\sum_{l,m}n_{J}\omega d\omega. \tag{34}\] The total power of particles coming out of the black hole is \[P_{T}\equiv\frac{dE_{T}}{dt}=\frac{1}{2\pi}\int\sum_{l,m}n_{J}^{T}\omega d\omega. \tag{35}\] The power spectrum of emitted particles is defined as the emitted energy per unit time per unit frequency, i.e. \(\frac{dE}{dtd\omega}\). We use the units where \(G=c=k=1\). In fig. 2, we plot the comparison between the power spectrum of particles created by the superradiance mechanism and the power spectrum of the total radiation from the black hole. For convenience, the rotation parameter \(a\) is rescaled to \(a^{*}=a/M\). We fix the value of the black hole rotation parameter to \(a^{*}=0.9\), where the superradiance of scalar particles is significant but still does not dominate over the Hawking emission. In fig. 3, we plot the comparison of the powers as a function of the black hole rotation parameter \(a^{*}\). We can see that the superradiance becomes dominant for highly rotating black holes at \(a^{*}\approx 0.94\). In fig. 4, we plot the ratio between the powers emitted by superradiance and total radiation. Once again we see that for the extremal black hole (i.e. \(a^{*}=1\)) superradiance equals the total radiation, since the Hawking effect ceases to exist at that point. ## III Conclusion The main point in this paper is to make a pedagogical distinction between the amplification of the Hawking emission in the background of a rotating black hole and the effect of superradiance. Both the spin dependent amplification of Hawking radiation and superradiance crucially depend on taking away rotational energy from a rotating black hole. However, superradiance is created at the potential barrier away from the horizon and should not be mixed with Hawking radiation. We explicitly calculated the superradiant emission and compared it with the total radiation. We showed that the superradiance of scalar particles is negligible for \(a^{*}<0.7\), but it becomes dominant around \(a^{*}=0.94\). The numerical values might change if one includes all types of particles since the superradiant amplification is stronger for higher spin particles. For the modes that do not satisfy the condition for superradiance, i.e. for \(\omega>m\Omega_{H}\), we have only the Hawking effect, which clearly vanishes for the extremal black hole when the Hawking temperature goes to zero. However, for the superradiant modes, with \(\omega<m\Omega_{H}\), emission persists even for the extremal black hole. In this paper we considered only a scalar field, but similar calculations can be performed for the vector and graviton fields. As we mentioned in the introduction, fermions do not exhibit superradiance because of the Pauli exclusion principle. The fact that even the extremal rotating black holes emit particles might be relevant in various cosmological scenarios that strongly depend on the existence or non-existence of such radiation (e.g. [12; 13; 14; 15]).
In particular, small primordial black holes can Hawking radiate gravitons, thus contributing to the primordial stochastic gravitational wave background [12]. In our context, even the extremal rotating black holes with the zero Hawking temperature can keep contributing to this background. Figure 2: Comparison of the scalar particle power spectra, \(\frac{dE}{dtd\omega}\), for \(a^{*}=0.9\). The black curve represents the power spectrum for the total radiation of scalar particles from the black hole (Hawking effect plus superradiance). The dashed line represents the power spectrum for the scalar particles created by superradiance. The transmission coefficient, \(T_{J}\), is taken from the BlackHawk generator [9; 10]. The spectra are clearly different. We also see that for this value of \(a^{*}\) superradiance is significant but still does not dominate over the Hawking emission. Figure 3: Comparison of the powers for the scalar particles, \(P=\frac{dE}{dt}\), as a function of the black hole rotation parameter \(a^{*}\). The black solid curve is the total scalar particle radiation from the black hole (Hawking effect plus superradiance), \(P_{T}\). The thin solid curve is one half of the total radiation, plotted for convenience. The dashed line is the power from the superradiance mechanism, \(P_{s}\). We can see that the superradiance does not play important role for \(a^{*}<0.7\). However, it becomes dominant around \(a^{*}=0.94\). On the other hand, extremal primordial black holes are often mentioned as good dark matter candidates due to their lack of Hawking radiation [13; 14; 15]. Again, in our context we see that the extremal black holes will keep emitting particles, which makes them unstable and also visible. Finally, we comment on the likelihood that small primordial black holes have large angular momentum. Density perturbations usually do not carry significant angular momentum, so the corresponding black holes created by this mechanism will not be spinning fast. However, if two black holes merge, their initial relative angular momentum gets transformed into the final black hole spin, due to the angular momentum conservation. In addition, accretion of surrounding material is very efficient in spinning the black hole up. If the last \(20-50\%\) of the black hole mass came from accretion, such a black hole would be close to extremal. Finally, black holes formed in collisions of the energetic particles in the early universe [16] are also expected to carry high spin due to the initial relative angular momentum of the colliding particles. ###### Acknowledgements. D.C. Dai is supported by the National Science and Technology Council (under grant no. 111-2112-M-259-016-MY3) D.S. is partially supported by the US National Science Foundation, under Grant No. PHY-2014021.
2309.16812
SatDM: Synthesizing Realistic Satellite Image with Semantic Layout Conditioning using Diffusion Models
Deep learning models in the Earth Observation domain heavily rely on the availability of large-scale accurately labeled satellite imagery. However, obtaining and labeling satellite imagery is a resource-intensive endeavor. While generative models offer a promising solution to address data scarcity, their potential remains underexplored. Recently, Denoising Diffusion Probabilistic Models (DDPMs) have demonstrated significant promise in synthesizing realistic images from semantic layouts. In this paper, a conditional DDPM model capable of taking a semantic map and generating high-quality, diverse, and correspondingly accurate satellite images is implemented. Additionally, a comprehensive illustration of the optimization dynamics is provided. The proposed methodology integrates cutting-edge techniques such as variance learning, classifier-free guidance, and improved noise scheduling. The denoising network architecture is further complemented by the incorporation of adaptive normalization and self-attention mechanisms, enhancing the model's capabilities. The effectiveness of our proposed model is validated using a meticulously labeled dataset introduced within the context of this study. Validation encompasses both algorithmic methods such as Frechet Inception Distance (FID) and Intersection over Union (IoU), as well as a human opinion study. Our findings indicate that the generated samples exhibit minimal deviation from real ones, opening doors for practical applications such as data augmentation. We look forward to further explorations of DDPMs in a wider variety of settings and data modalities. An open-source reference implementation of the algorithm and a link to the benchmarked dataset are provided at https://github.com/obaghirli/syn10-diffusion.
Orkhan Baghirli, Hamid Askarov, Imran Ibrahimli, Ismat Bakhishov, Nabi Nabiyev
2023-09-28T19:39:13Z
http://arxiv.org/abs/2309.16812v1
SatDM: Synthesizing Realistic Satellite Image with Semantic Layout Conditioning using Diffusion Models ###### Abstract Deep learning models in the Earth Observation domain heavily rely on the availability of large-scale accurately labeled satellite imagery. However, obtaining and labeling satellite imagery is a resource-intensive endeavor. While generative models offer a promising solution to address data scarcity, their potential remains underexplored. Recently, Denoising Diffusion Probabilistic Models (DDPMs) have demonstrated significant promise in synthesizing realistic images from semantic layouts. In this paper, a conditional DDPM model capable of taking a semantic map and generating high-quality, diverse, and correspondingly accurate satellite images is implemented. Additionally, a comprehensive illustration of the optimization dynamics is provided. The proposed methodology integrates cutting-edge techniques such as variance learning, classifier-free guidance, and improved noise scheduling. The denoising network architecture is further complemented by the incorporation of adaptive normalization and self-attention mechanisms, enhancing the model's capabilities. The effectiveness of our proposed model is validated using a meticulously labeled dataset introduced within the context of this study. Validation encompasses both algorithmic methods such as Frechet Inception Distance (FID) and Intersection over Union (IoU), as well as a human opinion study. Our findings indicate that the generated samples exhibit minimal deviation from real ones, opening doors for practical applications such as data augmentation. We look forward to further explorations of DDPMs in a wider variety of settings and data modalities. An open-source reference implementation of the algorithm and a link to the benchmarked dataset are provided at [https://github.com/obaghirli/syn10-diffusion](https://github.com/obaghirli/syn10-diffusion). **Keywords:** generative models, conditional diffusion models, semantic image synthesis, building footprint dataset, remote sensing, satellite imagery IAC-23,B1,4,6,x76306 ###### Abstract Deep learning models in the Earth Observation domain heavily rely on the availability of large-scale accurately labeled satellite imagery. However, obtaining and labeling satellite imagery is a resource-intensive endeavor. While generative models offer a promising solution to address data scarcity, their potential remains underexplored. Recently, Denoising Diffusion Probabilistic Models (DDPMs) have demonstrated significant promise in synthesizing realistic images from semantic layouts. In this paper, a conditional DDPM model capable of taking a semantic map and generating high-quality, diverse, and correspondingly accurate satellite images is implemented. Additionally, a comprehensive illustration of the optimization dynamics is provided. The proposed methodology integrates cutting-edge techniques such as variance learning, classifier-free guidance, and improved noise scheduling. The denoising network architecture is further complemented by the incorporation of adaptive normalization and self-attention mechanisms, enhancing the model's capabilities. The effectiveness of our proposed model is validated using a meticulously labeled dataset introduced within the context of this study. Validation encompasses both algorithmic methods such as Frechet Inception Distance (FID) and Intersection over Union (IoU), as well as a human opinion study. 
Our findings indicate that the generated samples exhibit minimal deviation from real ones, opening doors for practical applications such as data augmentation. We look forward to further explorations of DDPMs in a wider variety of settings and data modalities. An open-source reference implementation of the algorithm and a link to the benchmarked dataset are provided at [https://github.com/obaghirli/syn10-diffusion](https://github.com/obaghirli/syn10-diffusion). IAC-23,B1,4,6,x76306 ## 1 Introduction Synthetic image generation is one of the fundamental challenges in the field of computer vision, with the goal of producing imagery that is indistinguishable from real images in terms of both fidelity and diversity. This task can also be seen as the inverse of semantic segmentation, where an input image is mapped to its corresponding semantic layout. Image generation can be categorized as unconditional or conditional. In the unconditional setting, the generative model relies solely on random noise as input, while in the conditional setting, additional information, such as semantic layouts, is provided. Conditional image generation has been extensively studied in the literature and has found many applications in various industries due to the well-defined nature of this paradigm and the better quality of the generated images compared to its unconditional counterpart. Machine learning algorithms rely heavily on the availability of large-scale datasets to achieve optimal performance. The size of the dataset significantly impacts the discriminative and expressive power of these models. Most of the challenges encountered in the industry are formulated within a supervised training paradigm, where both the input data and the corresponding output labels are available to maximize performance. However, acquiring a large number of labeled samples is a laborious task that increases the development cost of the model. Moreover, manual labeling at scale is prone to random or systematic errors, which escalate with the dataset's scale and complexity. These errors can have detrimental effects on both the project budget and the model's performance. In this paper, we propose a methodology for generating photorealistic synthetic imagery conditioned on semantic layout to augment existing datasets. We implement and deploy a novel denoising diffusion model on a dataset comprising optical satellite imagery and evaluate its performance both quantitatively and qualitatively. Optical satellite imagery features complex structural and textural characteristics, making it a more challenging domain compared to the commonly used benchmarks of natural images for generative modeling. Our results demonstrate that the denoising diffusion models can produce high-quality samples at multiple resolutions, even with limited data and computational resources. The primary contributions of this work include: * The compilation of SAT25K, a meticulously curated building footprint dataset comprising image tiles and corresponding semantic layouts. * An in-depth exploration of diffusion models for generating synthetic satellite imagery. * The development of SatDM, a high-performance conditional diffusion model tailored for semantic layout conditioning. * The release of source code and model weights to facilitate reproducibility. The rest of this paper is organized as follows: In Section 2, we review and summarize relevant existing literature to establish the groundwork for our research. 
Section 3 outlines the key components of our proposed methodology. The experimental setup is discussed in Section 4. Section 5 presents our findings, offering an in-depth discussion of the results. In Section 6, we examine the limitations of our work and outline potential directions for future research before concluding. ## 2 Related Work Generative Adversarial Networks (GANs) [1] have been at the center of image generation for the past decade. GANs heavily benefited from the striking successes of the advancements in the field of deep learning, thus incorporating backpropagation and dropout algorithms on top of the multilayer perceptron architecture with piecewise linear units. The underlying premise of the GANs is simultaneously training the generator network with the discriminator network until the discriminator cannot distinguish between the real samples and samples generated by the generator network. GANs also did not suffer from the intractable probability density functions during the loss formulation as in the previous probabilistic models. Furthermore, sampling in GANs could be done seamlessly by using only forward propagation without involving approximate inference or Markov chains. Conditional GANs are introduced in [2] by providing both the generator and discriminator networks with the conditioning signal via concatenation. A seminal work in conditional image generation known as pix2pix is proposed in [3] which allows the translation between different image domains. Following this work, [4] synthesized high-resolution photorealistic images from semantic layouts through their more robust and optimized pix2pixHD architecture. For both of these architectures, conditioning information is fed to the generative and discriminate networks only once at the onset of the sequential convolutional neural networks (CNN) and normalization blocks. [5] takes a projection-based approach to condition the discriminator by computing the inner product between the embedded conditioning vector and feature vector, which significantly improves the quality of the conditional image generation. A revolutionary style-based GAN is introduced in [6] where the noise and embedded conditioning signal are injected into the synthesis network at multiple intermediate layers rather than the input layer only, as seen in many traditional generators. The style information in the form of a conditioning signal modulates the input feature map through the learned scale and bias parameters of the adaptive instance normalization (AdaIN) layers at multiple stages. A similar approach is undertaken in another highly influential work [7], where the conditioning semantic layout is used to modulate the activations in normalization layers through spatially adaptive, learned transformation. This is in contrast to the traditional methods where the semantic layout is fed to the deep network as input to be processed through stacks of normalization layers, which tend to wash away the quality of the propagating signal. In a study to explore the scalability of GANs, authors in [8] successfully trained large-scale BigGAN-deep architectures and demonstrated that GANs benefit dramatically from scaling. Even though GANs have achieved significant success in generative modeling, their training is subject to many adversities. To this end, GANs capture less diversity and are often difficult to train, collapsing without carefully selected hyperparameters and regularizers [8, 9, 10, 11, 12, 13, 14]. 
Furthermore, objectively evaluating the implicit generative models such as GANs is difficult [15] due to their lack of tractable likelihood function. On the other hand, recent advancements have shown that likelihood-based diffusion models can produce high-quality images while offering desirable properties such as broader distribution coverage, a stationary training objective, and ease of scalability [14]. In [16], a method for Bayesian learning from large-scale datasets is proposed. This method involves the stochastic optimization of a likelihood through the use of Langevin dynamics. This process introduces noise into the parameter updates, leading the parameter trajectory to converge towards the complete posterior distribution, not just the maximum a posteriori mode. Initially, the algorithm resembles stochastic optimization, but it automatically shifts towards simulating samples from the posterior using Langevin dynamics. Unparalleled to Bayesian methods, in a groundbreaking work [17] inspired by non-equilibrium statistical physics, diffusion probabilistic models were introduced. These models enable the capturing of data distributions of arbitrary forms while allowing exact sampling through computational tractability. The core concept revolves around systematically and gradually disintegrating the patterns present in a data distribution using an iterative forward diffusion process. Subsequently, a reverse diffusion process that reconstructs the original structure in the data is learned. Learning within this framework involves estimating slight perturbations to the diffusion process. Following this work, a score-based generative modeling framework was proposed in [18] consisting of two key components: score matching and annealed Langevin dynamics. The fundamental principle behind score-based modeling is perturbing the original data with varying levels of Gaussian noise to estimate the score, which represents the gradient of the log-density function at the input data point. A neural network is trained to predict this gradient field from the data. During sampling, annealed Langevin dynamics is used to progress from high to low noise levels until the samples become indistinguishable from the original data. A connection between score matching and diffusion probabilistic models is revealed in [19] as the authors show that under certain parameterization, denoising score matching models exhibit equivalence to diffusion probabilistic models. During learning the reverse diffusion process, the neural network parameterization, which predicts the noise levels in perturbed data rather than the forward diffusion process posterior mean, resulted in a superior sample quality. Thus, the injected noise parameterization leads to a simplified, weighted variational bound objective for diffusion models, resembling denoising score matching during training and Langevin dynamics during sampling. The authors concluded that despite the high sample quality of their method, the log-likelihoods are not competitive when compared to those of other likelihood-based models. Furthermore, since the diffusion process involves multiple forward steps to gradually destroy the signal, reversing the diffusion process to reconstruct the signal also necessitates numerous steps, resulting in a slow sampler. To address these difficulties around DDPMs, [20] proposed several improvements. 
The authors suggested changing the linear noise scheduler to a cosine noise scheduler; thus maintaining a less abrupt diffusion process, learning model variance at each timestep rather than setting to a predetermined constant value, and switching the timestep sampler from uniform to importance sampler; thus improving log-likelihood, and adjusting the sampling variances based on the learned model variances for an arbitrary subsequence of the original sampling trajectory; thus improving the sampling speed. In a parallel work, rather than following the original DDPMs conceptualization, [21] proposed a change to the underlying principles, which enables faster image generation. While retaining the original training objective of DDPMs, the authors redefined the forward process as non-Markovian, hence achieving much shorter generative Markov chains, leading to accelerated sampling with only a slight degradation in sample quality. In an effort to reduce the sampling time for diffusion models, [22] proposed a method resembling a distillation process, which is applied to the sampler of implicit models in a progressive way, halving the number of required sampling steps in each iteration. Authors in [14] argue that one of the reasons why the diffusion models may still fall short of the quality of samples generated by GANs is that a trade-off mechanism between fidelity and diversity is incorporated into the GANs architecture, which allows them to produce more visually pleasing images at the cost of diversity, which is known as the truncation trick. To make DDPMs also benefit from this trade-off, they proposed auxiliary classifier guidance to the generation process inspired by the heavy use of class labels by conditional GANs and the role of estimated gradients from the data in the noise conditional score networks (NCSN). Following this study, [23] demonstrated that high-resolution high-fidelity samples can be generated by cascading conditional diffusion-based models obviating the need for a companion classifier. Moreover, the proposed model exhibited superior performance compared to state-of-the-art GAN-based models. However, training multiple diffusion models and sampling sequentially is very time-consuming. In an attempt to achieve a GAN-like trade-off between the diversity and fidelity of the generated images without requiring an auxiliary classifier, authors in [24] proposed a classifier-free guidance schema purely based on the diffusion models. In this schema, conditional and unconditional diffusion models are trained jointly without increasing the number of total training parameters, and sampling is performed using the linear combination of conditional and unconditional score estimates. Increasing the strength of this linear interpolation leads to an increase in sample fidelity and a decrease in sample diversity. To address the drawbacks of previous work, authors in [25] departed from working on the image space to compressed latent space of lower dimensionality through the pre-trained autoencoders, which made the high-resolution synthesis possible with significantly reduced computational requirements. The practicality of latent diffusion models (LDMs) opened the door to the development of large-scale diffusion models such as Stable Diffusion which is trained on billions of images conditioned on text prompts. 
To transfer the capabilities of general-purpose large diffusion models to more task-specific domains, where access to large amounts of training data is not feasible, authors in [26] presented a control mechanism that allows the fine-tuning of the large models while preserving the knowledge extracted from billions of images. In the field of remote sensing, synthetic image generation has been receiving increasing attention. Researchers have explored various approaches to tackle the image generation task, and these approaches can be broadly categorized into three main groups: techniques employing GANs, DDPMs, and simulated sensors and environments. Annotated hyperspectral data generation has been addressed in [27] using GANs, and the study validated the use of synthetic samples as an effective data augmentation strategy. River image synthesis for the purpose of hydrological studies also utilizes GANs to generate high-resolution artificial river images [28]. Previous work in regard to the generation of synthetic multispectral imagery [29] and image style transfer from vegetation to desert [30] using Sentinel-2 data suggested promising outcomes. With architectural and algorithmic modifications, authors reported satisfactory results and showed that GANs are capable of preserving relationships between different bands at varying resolutions. A novel self-attending task GAN is introduced in [31], enabling the generation of realistic high-contrast scientific imagery of resident space objects while preserving localized semantic content. Synthesizing images conditioned on additional information such as class labels, semantic layouts, and other data modalities grants more fine-tuned control over the generation process and allows us to ask the model to generate images of a specific type. Translation from satellite images to maps [32, 33], and street views to satellite images [34] utilizes the conditioning capability of GANs to generate the desired outcome. Another study [35] formulates the image generation task as the completion of missing pixels in an image conditioned on adjacent pixels. A study in [36] adopts GANs conditioned on reference optical images to enhance the interpretability of SAR data through SAR-to-optical domain translation. Another focus is on generating synthetic images using simulation platforms. Authors in [37] use a proprietary simulation software to generate overhead plane images with a novel placement on the map, and validate that detection models trained on synthetic data together with only a small portion of real data can potentially reach the performance of models trained solely on real data; thus reducing the need for annotated real data. The application of diffusion-based models to satellite imagery generation represents a relatively new and promising area of research in remote sensing. Motivated by the effective application of diffusion models in natural images and the extensive utilization of GANs for image super-resolution in remote sensing, researchers in [38] adopted a diffusion-based hybrid model conditioned on the features of low-resolution image, extracted through a transformer network and CNN, to guide the image generation. Additionally, a recent diffusion-based model conditioned on SAR data for cloud removal task demonstrated promising results [39]. 
In another study [40], the image fusion task is formulated as an image-to-image translation, where the diffusion model is conditioned on the low-resolution multi-spectral image, and high-resolution panchromatic image to guide the generation of a pansharpened high-resolution multi-spectral image. Authors in [41] demonstrate the conditioning of pre-trained diffusion models on cartographic data to generate realistic satellite images. While diffusion models have shown promising potential and are gradually replacing traditional state-of-the-art methods in various domains, their application in conditioned image synthesis remains relatively underexplored, especially in the context of satellite imagery. This indicates a significant gap in the existing body of knowledge and presents an opportunity for further investigation. ## 3 Methodology Our goal is to design a conditional generative model that estimates the reverse of the forward diffusion process which converts complex data distribution into a simple noise distribution by gradually adding small isotropic Gaussian noise with a smooth variance schedule to the intermediate latent distributions throughout the sufficiently large diffusion steps. ### Loss Function To maintain consistency with the prior research and prevent potential confusion, we will omit the conditioning signal \(c\) when deriving the loss function. The probability the generative model assigns to data in its tractable form can be evaluated as relative probability of the reverse trajectories \(p(x_{t-1}|x_{t})\) and forward trajectories \(q(x_{t}|x_{t-1})\), averaged over forward trajectories \(q(x_{1:T}|x_{0})\) as in Eq. 1. \[\begin{split} p_{\theta}(x_{0})&=\int dx_{1:T}p_{ \theta}(x_{0:T})\\ &=\int dx_{1:T}p_{\theta}(x_{0:T})\frac{q(x_{1:T}|x_{0})}{q(x_{1: T}|x_{0})}\\ &=\int dx_{1:T}q(x_{1:T}|x_{0})p(x_{T})\prod_{t=1}^{T}\frac{p_{ \theta}(x_{t-1}|x_{t})}{q(x_{t}|x_{t-1})}\end{split} \tag{1}\] where the reverse process is defined as a Markov chain with learned Gaussian transitions, starting at \(p(x_{T})=\mathcal{N}\left(x_{T};\mathbf{0},\mathbf{I}\right)\). Training is performed by maximizing the evidence lower bound (ELBO) on log-likelihood \(L\) (Eq. 2), which can also be expressed as minimizing the upper bound on negative log-likelihood (Eq. 3). \[L=\int dx_{0}q(x_{0})\log(p_{\theta}(x_{0})) \tag{2}\] \[\begin{split}\mathbb{E}[&-\log(p_{\theta}(x_{0})) ]\leq\mathbb{E}_{q}[-\log\frac{p_{\theta}(x_{0:T})}{q(x_{1:T}|x_{0})}]\\ &=\mathbb{E}_{q}[-\log(p(x_{T}))-\sum_{t\geq 1}\log(\frac{p_{ \theta}(x_{t-1}|x_{t})}{q(x_{t}|x_{t-1})})]\end{split} \tag{3}\] \[\mathbb{E}_{q}[L_{T}+\sum_{t>1}L_{t-1}+L_{0}] \tag{4}\] Rewriting the Eq. 3 as Eq. 4 allows one to explore the individual contributions of the terms \(L_{0},L_{t-1}\), and \(L_{T}\) (Eq. 5, 6, 7) involved in the optimization process. \[L_{0} =-\log(p_{\theta}(x_{0}|x_{1})) \tag{5}\] \[L_{t-1} =D_{KL}(q(x_{t-1}|x_{t},x_{0})||p_{\theta}(x_{t-1}|x_{t}))\] (6) \[L_{T} =D_{KL}(q(x_{T}|x_{0})||p(x_{T})) \tag{7}\] Given sufficiently large diffusion steps \(T\), infinitesimal and smooth noise variance \(\beta_{t}\), forward noising process adequately destroys the data distribution so that \(q(x_{T}|x_{0})\approx\mathcal{N}(\mathbf{0},\mathbf{I})\) and \(p(x_{T})\approx\mathcal{N}(\mathbf{0},\mathbf{I})\); hence the \(KL\) divergence between two nearly isotropic Gaussian distributions (Eq. 7) becomes negligibly small. 
The \(L_{0}\) term corresponds to the reverse process decoder, which is computed using the discretized cumulative Gaussian distribution function as described in [19]. The \(L_{t-1}\) term, which dominates the optimization process, is composed of the \(KL\) divergence between two Gaussian distributions: the reverse process posterior \(q(x_{t-1}|x_{t},x_{0})\) conditioned on the data distribution \(x_{0}\sim q(x_{0})\), and the neural network estimate of the reverse process \(p_{\theta}(x_{t-1}|x_{t})\). \[\tilde{\alpha}_{t}=\frac{f(t)}{f(0)},\qquad f(t)=\cos\left(\frac{t/T+s}{1+s}\cdot\frac{\pi}{2}\right)^{2} \tag{8}\] \[\beta_{t}=1-\frac{\tilde{\alpha}_{t}}{\tilde{\alpha}_{t-1}},\qquad\alpha_{t}=1-\beta_{t},\qquad\tilde{\alpha}_{t}=\prod_{k=0}^{t}\alpha_{k}\] The reverse process posterior \(q(x_{t-1}|x_{t},x_{0})\) can be derived using Bayes' theorem and Eqs. 9 - 11. To prevent an abrupt disturbance in the noise level, the cosine variance scheduler described in [20] is adopted (Eq. 8). Here, \(s=0.008\) is a small offset to maintain numerical stability and \(\beta_{t}\) is clipped to be between 0 and 1. \[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\mathbf{I}) \tag{9}\] \[q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\tilde{\alpha}_{t}}x_{0},(1-\tilde{\alpha}_{t})\mathbf{I}) \tag{10}\] \[q(x_{t-1}|x_{0})=\mathcal{N}(x_{t-1};\sqrt{\tilde{\alpha}_{t-1}}x_{0},(1-\tilde{\alpha}_{t-1})\mathbf{I}) \tag{11}\] Eq. 12 shows that the reverse process posterior is parameterized by the posterior mean \(\widetilde{\mu}_{t}(x_{t},x_{0})\) and the posterior variance \(\widetilde{\beta}_{t}\), described in Eq. 13 and Eq. 14, respectively. \[q(x_{t-1}|x_{t},x_{0})=\mathcal{N}(x_{t-1};\widetilde{\mu}_{t}(x_{t},x_{0}),\widetilde{\beta}_{t}\mathbf{I}) \tag{12}\] \[\widetilde{\mu}_{t}(x_{t},x_{0})=\frac{\sqrt{\tilde{\alpha}_{t-1}}\beta_{t}}{1-\tilde{\alpha}_{t}}x_{0}+\frac{\sqrt{\alpha_{t}}(1-\tilde{\alpha}_{t-1})}{1-\tilde{\alpha}_{t}}x_{t} \tag{13}\] \[\widetilde{\beta}_{t}=\frac{1-\tilde{\alpha}_{t-1}}{1-\tilde{\alpha}_{t}}\beta_{t} \tag{14}\] Since \(q(x_{t-1}|x_{t})\) is intractable, we approximate it with a deep neural network surrogate, Eq. 15. This representation of the estimated reverse process is parameterized by the model mean \(\mu_{\theta}(x_{t},t)\) and the model variance \(\sigma_{t}^{2}\), described in Eq. 16 and 17, respectively. The model mean is derived by following the noise parameterization \(x_{t}(x_{0},\epsilon)=\sqrt{\tilde{\alpha}_{t}}x_{0}+\sqrt{1-\tilde{\alpha}_{t}}\epsilon,\quad\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), and Eq. 13. \(\epsilon_{\theta}\) is a function estimator intended to predict \(\epsilon\) from \(x_{t}\). The model variance \(\sigma_{t}^{2}\) is fixed to a predefined constant for each diffusion step and can take either \(\beta_{t}\) or \(\widetilde{\beta}_{t}\), which are the upper and lower bounds on the variance, respectively. \[p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\sigma_{t}^{2}\mathbf{I}) \tag{15}\] \[\mu_{\theta}(x_{t},t)=\frac{1}{\sqrt{\alpha_{t}}}\left(x_{t}-\frac{\beta_{t}}{\sqrt{1-\tilde{\alpha}_{t}}}\epsilon_{\theta}(x_{t},t)\right) \tag{16}\] \[\sigma_{t}^{2}\in\{\beta_{t},\widetilde{\beta}_{t}\} \tag{17}\] The loss term we want to minimize is the \(KL\) divergence between the reverse process posterior and the estimate of the reverse process (Eq. 6), which has a closed form solution leading to Eq. 18. Applying the noise parameterization to the posterior mean results in a simple training objective function \(L_{simple}\) (Eq. 19).
\[L_{t-1} =\mathbb{E}_{q}\left[\frac{1}{2\sigma_{t}^{2}}\|\widetilde{\mu}_{ t}(x_{t},x_{0})-\mu_{\theta}(x_{t},t)\|^{2}\right] \tag{18}\] \[L_{simple} =\mathbb{E}_{t,x_{0},\epsilon}\left[\|\epsilon-\epsilon_{ \theta}(x_{t},t)\|^{2}\right] \tag{19}\] During the derivation of \(L_{simple}\), model variance \(\sigma_{t}^{2}\) was kept fixed meaning that it was not part of the learning process. However, [20] suggests that even though in the limit of infinite diffusion steps, the choice of model variance would not affect the sample quality at all, in practice with a much shorter forward trajectory, learning the model variance \(\Sigma_{\theta}(x_{t},t)\) in Eq. 20 can improve the model log-likelihood, leading to better sample quality. Due to the incorporation of the predicted variance term \(v\) in the adjusted model of the reverse process (Eq. 21), where \(v\) varies between 0 and 1 and serves as a linear interpolation factor between the upper and lower bounds on the log-variance, a new modified training objective function denoted as \(L_{hybrid}\) is introduced in Eq. 22. \[\Sigma_{\theta}(x_{t},t) =\exp\left(v\log(\beta_{t})+(1-v)\log(\widetilde{B}_{t})\right) \tag{20}\] \[p_{\theta}(x_{t-1}|x_{t}) =\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\Sigma_{\theta}(x_{t},t)) \tag{21}\] While \(L_{simple}\) exclusively guides the update of the mean \(\mu_{\theta}(x_{t},t)\) assuming a fixed variance schedule (Eq. 15), \(L_{vlb}\) exclusively facilitates the update of the variance \(\Sigma_{\theta}(x_{t},t)\), keeping the mean unaffected. Optimizing \(L_{vlb}\) is more challenging than optimizing \(L_{simple}\) due to increased gradient noise. To address this, the hybrid loss function \(L_{hybrid}\) combines these two approaches using a small weight parameter \(\lambda=0.001\). This combination allows for learning the variance while maintaining the overall stability of the optimization process. \[\begin{split} L_{vlb}&=L_{0}+L_{1}+\ldots+L_{T-1}+L_{T} \\ L_{hybrid}&=L_{simple}+\lambda L_{vlb}\end{split} \tag{22}\] We adopted the classifier-free guidance strategy proposed in [24], which jointly trains the unconditional model \(\epsilon_{\theta}(x_{t},t,c=\emptyset)\) and conditional model \(\epsilon_{\theta}(x_{t},t,c)\), simply by setting the conditioning signal \(c\) to the unconditional class identifier \(\emptyset\) with some probability \(p_{uncond}=0.2\), set as a hyperparameter. ### Sampling The sampling procedure starts at timestep \(T\) with \(x_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) drawn from a standard normal prior, which is then passed as input to the denoising neural network model \(\widetilde{\epsilon}_{\theta}(x_{t},t,c)\) as described in Eq. 23. \[\begin{split}\widetilde{\epsilon}_{\theta}(x_{t},t,c)& =\epsilon_{\theta}(x_{t},t,c=\emptyset)\\ &+\omega\left(\epsilon_{\theta}(x_{t},t,c)-\epsilon_{\theta}(x_{ t},t,c=\emptyset)\right)\end{split} \tag{23}\] which is a linear combination of conditional and unconditional noise estimates. The interpolation factor \(\omega\) enables the trade-off between sample fidelity and diversity. The sampling process is conducted iteratively, where the output from the denoising network constitutes the input to the model at the subsequent iteration. To sample \(x_{t-1}\sim p_{\theta}(x_{t-1}|x_{t},c)\) is to compute Eq. 24, where the noise profile \(z\) follows Eq. 25. 
\[\begin{split} x_{t-1}&=\frac{1}{\sqrt{\alpha}} \left(x_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\alpha_{t}}}\widetilde{\epsilon}_{ \theta}(x_{t},t,c)\right)\\ &+\sqrt{\Sigma_{\theta}(x_{t},t,c)z}\end{split} \tag{24}\] \[\begin{split} z=\begin{cases}\mathcal{N}(\mathbf{0},\mathbf{I})&t>1\\ 0&t\leq 1\end{cases}\end{split} \tag{25}\] At the end of the sampling process, model mean \(\mu_{\theta}(x_{1},1)\) is displayed noiselessly. ### Denoising Network Our model architecture is based on previous work [42], and builds upon the underlying time-conditional U-Net architecture, which is illustrated in Fig. 1 (a). The encoding path of the model, which consists of consecutive residual networks (ResNet) at each level (Fig. 1 (b)), captures the low-level latent representation of the noisy image \(x_{t}\) at diffusion step \(t\). The decoding path of the network, which consists of modified ResNet blocks (Fig. 1 (d)) that incorporate the conditioning segmentation map (SegMap) through the spatially-adaptive (SPADE) normalization module (Fig. 1 (c)), reconstructs the level of noise added to the original image \(x_{0}\). In this manner, the semantic information encoded in SegMap is preserved, rather than being washed away after successive passes through the convolutional and normalization layers [6; 7]. Group normalization is applied as a normalization technique to reduce sensitivity to variations in batch size. The timestep embedding \(t_{emb}\) modulates the input signal that is fed into both the encoder and decoder ResNet blocks, whereas the semantic information SegMap is exclusively injected into the network through the decoder ResNet blocks [26]. Multi-head self-attention modules are added on top of the ResNet blocks [43] only at predefined resolutions such as 32, 16, and 8. The upsampling block first scales up the input using the nearest neighbors method, and then it performs a convolution operation. The downsampling block employs the convolution operation with a stride parameter set to 2. The decoding and encoding paths are interlinked via the skip connections to fuse information from both ends. The proposed conditional U-Net model estimates the noise mean \(\epsilon_{\theta}(\cdot,t)\) (Eq. 19) and \(v\) component (Eq. 20), which is normalized to a range of 0 to 1 and interpolates between the upper and lower bounds on log-variance. ## 4 Experiments ### Dataset Due to the unavailability of high-precision annotated satellite imagery of building footprints that can serve as a benchmark dataset at the time of conducting the experiments, we opted to create our own. It is worth mentioning that manually labeling high-resolution imagery is an expensive and labor-intensive endeavor. In this section, we introduce the SAT25k dataset, which is composed of the high-resolution satellite imagery of buildings and the corresponding building footprint annotations (Table 1). \begin{table} \begin{tabular}{l c} \hline \hline Data Source & Google Earth \\ Satellite & Pleiades \\ Correction Method & Orthophotomosaic \\ Region & Mardakan, Baku, Azerbaijan \\ Coordinate System & EPSG:32639 - WGS 84 \\ Bands & Red, Green, Blue \\ Data Type & uint8 \\ Data Range & 0 - 255 \\ Resolution [m] & 0.5 \\ Annotation Classes & Binary \\ Number of Polygons & 25000 \\ Surface Area [\(km^{2}\)] & 92.3 \\ Footprints Area [\(km^{2}\)] & 3.98 \\ Pixel Count [M] & 370 \\ \hline \hline \end{tabular} \end{table} Table 1: Description of SAT25K dataset. 
The dataset covers an area primarily characterized by suburban houses, complemented by a subset of industrial facilities and buildings, although not exhaustive in its representation. The designated region falls within a semi-arid climatic zone. The features of interest remain unaffected by meteorological elements such as snow coverage, cloud presence, or any other adverse atmospheric conditions. The associated annotations are encoded in a pixel-wise binary format, where the values 1 and 0 correspond to the positive and negative classes, respectively. The original dataset is tiled into \(128\times 128\) windows with 50% overlap between adjacent tiles. Subsequently, we downsampled the \(128\times 128\) tiles using the pixel-area interpolation method to produce the \(64\times 64\) tiles. Tiles with less than a 1% positive class ratio are dropped. Both the \(128\times 128\) and \(64\times 64\) datasets are augmented by applying basic geometric transformations, namely random 90\({}^{\circ}\) rotation and vertical flip, to their training splits only. The composition of the resulting datasets for both experiments is described in Table 2. ### Environment The experimentation was performed using a single NVIDIA Tesla V100 instance for the \(128\times 128\) resolution and a single NVIDIA Quadro P5000 instance for the \(64\times 64\) resolution. Both instances are equipped with 16GB of GPU memory. The remaining environment parameters are described in Table 3. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Split} & \multicolumn{2}{c}{64 \(\times\) 64} & \multicolumn{2}{c}{128 \(\times\) 128} \\ \cline{2-5} & Org. & Aug. & Org. & Aug. \\ \hline Train & 24366 & 25634 & 24681 & 25319 \\ Test & 5000 & 0 & 5000 & 0 \\ \hline \hline \end{tabular} \end{table} Table 2: Distribution of the Organic and Augmented image tiles across the train and test splits of the SAT25K dataset. Figure 1: Denoising Network. (a) Conditional U-Net model estimates the noise mean and variance of a noisy input image \(x_{t}\) at timestep t. (b) Encoder ResNet block captures the low-level representation of \(x_{t}\). (c) SPADE block effectively modulates the input signal based on the conditioning SegMap, which is denoted as \(c\) within the equations. (d) Decoder ResNet block incorporates the latent representation from the encoder block along with the semantic information to reconstruct the noise model. ### Training The experiments at different resolutions are conducted on separate environments with parameters described in Table 4. The models are trained for 92 hours and 163 hours for the \(64\times 64\) and \(128\times 128\) experiments, respectively. For both experiments, we utilize the AdamW implementation of PyTorch 2.0 as the optimizer, with the default parameters except for the weight decay, which is configured at 0.05. Before the parameter update, the gradient norm is clipped to a maximum threshold of 1 to prevent unstable optimization. Furthermore, to ensure parameter updates remain robust against sudden fluctuations, we calculate exponential moving averages (EMA) of the model parameters, using a decay parameter set to 0.9999. Because the early stages of optimization are noisy, we added a delay of 10K iterations before starting to gather the EMA values. 
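A minimal sketch of how these training components fit together in a single optimization step is given below. The `model`, `batch`, and `loss_fn` names are placeholders (the loss stands in for the \(L_{hybrid}\) objective of Eq. 22), so the snippet illustrates the optimizer, gradient-clipping, and delayed-EMA settings described above rather than the exact implementation.

```python
import copy
import torch

def train_step(model, ema_model, batch, loss_fn, optimizer, step,
               ema_decay=0.9999, ema_start=10_000, clip_norm=1.0):
    """One step: backward pass, gradient clipping, AdamW update, delayed EMA."""
    optimizer.zero_grad(set_to_none=True)
    loss = loss_fn(model, batch)      # placeholder for L_simple + 0.001 * L_vlb
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    optimizer.step()

    # EMA values are only gathered after the noisy early phase of optimization.
    if step == ema_start:
        ema_model.load_state_dict(model.state_dict())
    elif step > ema_start:
        with torch.no_grad():
            for p_ema, p in zip(ema_model.parameters(), model.parameters()):
                p_ema.mul_(ema_decay).add_(p, alpha=1.0 - ema_decay)
    return loss.item()

# Illustrative setup; the tiny linear model and dummy loss are stand-ins only.
model = torch.nn.Linear(8, 8)
ema_model = copy.deepcopy(model)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.05)
loss_fn = lambda m, b: torch.nn.functional.mse_loss(m(b), torch.zeros_like(b))
print(train_step(model, ema_model, torch.randn(4, 8), loss_fn, optimizer, step=0))
```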
During experimentation, we observed that the cosine annealing with a warm restarts scheme as a learning rate strategy hurts the optimization process at the inflection points, leading to instabilities. Therefore, the learning rate is configured such that it follows the cosine curve starting at the initial value of \(2\times 10^{-5}\), and decreasing down to 0 throughout the entire optimization period, without any restarts. The input tiles are scaled to a range of -1 to 1, while the SegMap is one-hot encoded. For each batch, the diffusion timesteps are sampled uniformly. The GPU utilizations are recorded at 93% and 99% for experiments at resolutions of \(128\times 128\) and \(64\times 64\), respectively. The goal of optimization is to minimize the objective function \(L_{hybrid}\) described in Eq. 22. Fig. 2 illustrates the evolution of key indicators throughout the optimization process. Fig. 2 (a) shows that \(L_{simple}\) follows a stable downward trajectory for both experiments, whereas model 64 demonstrates elevated losses. Additionally, despite model 64 being trained with larger batch size, its \(L_{simple}\) curve exhibits more noise compared to that of model 128. It can be inferred that downsampling from \(128\times 128\) to \(64\times 64\) during the dataset preparation resulted in an irreversible loss of information, suggesting that higher resolutions imply a smaller and smoother loss term. The variational loss term \(L_{vlb}\), which is portrayed in Fig. 2 (b), displays more turbulent convergence dynamics compared to \(L_{simple}\), confirming the findings from an earlier study [20]. The average of variance signal \(v\), which is driven by the \(L_{vlb}\) term and described in Eq. 20, is depicted in Fig. 2 (c) in its unnormalized form. Even though the \(v\) term is not explicitly constrained, it follows a well-defined behavior. Both models follow the lower bound on the variance schedule \(\widehat{\beta_{i}}\), with model 128 exhibiting a closer alignment. Fig. 2 (d) demonstrates that as the optimization progresses, the gradient norms tend to decrease. Smaller gradient norms indicate that the changes in the parameter values are becoming smaller, implying that the algorithm is getting closer to an optimal solution. Over the course of training, the optimizer gradually adjusts the model parameters towards zero, which is reflected in Fig. 2 (e). The cosine annealing schedule (Fig. 2 (f)) reduces sensitivity to the choice of the initial learning rate and ensures a smooth transition to promote stable convergence. ### Sampling Sampling is performed according to the parameters described in Table 5. EMA of the model parameters are employed instead of their instantaneous snapshots during the sampling process. Since we are traversing every diffusion step during sampling, generating a dataset of 5000 synthetic images at resolutions of \(64\times 64\) and \(128\times 128\) takes substantial time. The average wall-clock time measured to generate each sample increases in accordance with the model size and length of the reverse process. The guidance scale \(\omega\) is set to 1.5 for a balanced trade-off between sample fidelity and diversity [42]. The GPU utilization for both models is recorded at 99% across the entire sampling process. 
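For concreteness, a single reverse step that combines the guided noise estimate of Eq. 23 with the update of Eqs. 24 and 25 can be sketched as follows. The snippet assumes 1-D tensors `alphas` and `alphas_bar` holding the per-step coefficients \(\alpha_{t}\) and their cumulative products \(\bar{\alpha}_{t}\), and a denoising network that returns both the noise estimate and the learned variance \(\Sigma_{\theta}\); all names are illustrative rather than the exact implementation.

```python
import torch

@torch.no_grad()
def guided_reverse_step(model, x_t, t, segmap, alphas, alphas_bar, omega=1.5):
    """One ancestral step x_t -> x_{t-1} with classifier-free guidance."""
    eps_cond, var = model(x_t, t, segmap)     # conditional estimate
    eps_uncond, _ = model(x_t, t, None)       # None plays the role of the null class
    eps = eps_uncond + omega * (eps_cond - eps_uncond)                   # Eq. 23

    a_t, ab_t = alphas[t], alphas_bar[t]
    mean = (x_t - (1.0 - a_t) / (1.0 - ab_t) ** 0.5 * eps) / a_t ** 0.5  # Eq. 24
    z = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)        # Eq. 25
    return mean + torch.sqrt(var) * z

@torch.no_grad()
def sample(model, segmap, alphas, alphas_bar, shape, steps=1000, omega=1.5):
    x = torch.randn(shape)                    # x_T ~ N(0, I)
    for t in reversed(range(steps)):
        x = guided_reverse_step(model, x, t, segmap, alphas, alphas_bar, omega)
    return x
```

The guidance scale `omega` is the \(\omega\) of Eq. 23; larger values trade diversity for fidelity, and the value 1.5 used in our experiments sits between the two extremes.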
\begin{table} \begin{tabular}{l c c} \hline \hline Parameter & 64 \(\times\) 64 & 128 \(\times\) 128 \\ \hline Diffusion Steps & 1000 & 1000 \\ Noise Schedule & cosine & cosine \\ Model Size & 31M & 130M \\ Model Channels & 64 & 128 \\ Depth & 3 & 4 \\ Channel Multiplier & 1, 2, 3, 4 & 1, 1, 2, 3, 4 \\ Head Channels & 32 & 64 \\ Attention Resolution & 32, 16, 8 & 32, 16, 8 \\ Number of ResNet blocks & 2 & 2 \\ Dropout & 0.1 & 0 \\ Batch Size & 32 & 8 \\ Iterations & 400K & 1250K \\ Initial Learning Rate & \(2\times 10^{-5}\) & \(2\times 10^{-5}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Training parameters. \begin{table} \begin{tabular}{l c c} \hline \hline Parameter & 64 \(\times\) 64 & 128 \(\times\) 128 \\ \hline OS & Ubuntu 18.04 & Ubuntu 20.04 \\ GPU model & Quadro P5000 & Tesla V100 \\ GPU memory & 16GB & 16GB \\ GPU count & 1 & 1 \\ CPU model & Core I9 9900K & Xeon E5-2690 \\ CPU memory & 128GB & 110GB \\ \hline \hline \end{tabular} \end{table} Table 3: Environment parameters. ## 5 Results The generated images conditioned on their corresponding segmentation maps are illustrated in Fig. 3. The results indicate that both models are capable of synthesizing semantically meaningful, diverse, and high-fidelity samples. Furthermore, the strong alignment observed between the generated images and semantic maps signifies the successful integration of the conditioning mechanism into the model architecture. The proposed diffusion models efficiently capture the structure, style, texture, and color composition of the real images without displaying visual cues of overfitting or mode collapse. Notably, the models effectively handle the challenges inherent to satellite imagery, including object occlusion, shadows, straight lines, and the complex interdependence between spectral bands. ### Frechet inception distance FID is a metric that provides a balanced assessment of image fidelity and diversity simultaneously. The FID scores described in Table 6 are measured between the 5000 synthetic samples generated by our proposed SatDM models and the 5000 real test images from the SAT25K dataset. The test images and their corresponding semantic labels have never participated in the training phase. The Inception V3 model trained on the ImageNet 1K dataset is used to calculate the FID scores. The \(64\times 64\) model outperformed the \(128\times 128\) model since the latter poses a more challenging optimization problem due to the increased dimensionality. The results of both resolutions are on par with the state-of-the-art. ### Intersection over Union The IoU quantifies the degree of overlap between the predicted segmentation map and the ground truth segmentation map. In the generation of synthetic samples, our diffusion model accepts pure noise and the ground truth segmentation map (SegMap) as input parameters. Subsequently, the generated sample is passed through an off-the \begin{table} \begin{tabular}{l c c} \hline \hline Parameter & \(64\times 64\) & \(128\times 128\) \\ \hline GPU model & Quadro P5000 & Tesla V100 \\ GPU count & 1 & 1 \\ GPU utilization & 99\% & 99\% \\ Batch Size & 256 & 64 \\ Sampling Steps & 1000 & 1000 \\ Guidance (\(\omega\)) & 1.5 & 1.5 \\ Number of Samples & 5000 & 5000 \\ Duration [hours] & 24.3 & 42.5 \\ Throughput [\(s^{-1}\)] & \(5.72\times 10^{-2}\) & \(3.26\times 10^{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Sampling parameters. Figure 2: Optimization dynamics. 
(a) \(L_{simple}\) term is computed as the mean squared error between the actual and estimated noise terms (Eq. 19). (b) \(L_{vlb}\) is calculated as the negative log-likelihood loss for term \(L_{0}\) and KL divergence loss for terms \(L_{1:T}\) (Eq. 22). (c) The mean of the interpolation variable \(v\), obtained directly from the model output, serves as the basis for estimating variance. (d) Monitoring of the gradient norms to identify anomalies such as exploding or vanishing gradients. (e) The parameter norm is calculated as the \(L^{2}\)-norm of all trainable model parameters. (f) Learning rate profile under the cosine annealing without warm restart schedule. shelf segmentation model known as the Segment Anything Model (SAM), without undergoing any fine-tuning. The IoU scores, as described in Table 6, are computed by comparing the predicted segmentation map of the generated images (SamMap-Synthetic) to the true mask (SegMap). The SAM model is configured to use 5 points for each connected component in the SegMap. To assess the fitness of the SAM model to the domain of satellite imagery from the test split of the SAT25K dataset, we also calculated the IoU score between the SAM's semantic segmentation of the real imagery (SamMap-Real) and the corresponding SegMap. The results indicate that SAM, without any fine-tuning, can only offer limited performance in this context. Based on our findings, we observed that the \(128\times 128\) model outperformed the \(64\times 64\) model, primarily due to the degraded boundary structure of objects in the downsampled \(64\times 64\) dataset. ### Human Opinion Study While the FID score is widely used to assess the performance of the generative models, it is limited in its ability to accurately evaluate various aspects of generated images, such as color accuracy, textual quality, semantic alignment, presence of artifacts, and subtle details. To address these limitations, we conducted a human opinion study, where we tasked 12 trained Geographic Information Systems (GIS) professionals to discriminate between the AI-generated and real samples. In this study, we designed two separate two-alternative forced choices (2AFC) questionnaires, one for each set of \begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & Resolution & mIoU \(\uparrow\) & FID \(\downarrow\) \\ \hline Synthetic & \(64\times 64\) & 0.48 & 25.6 \\ SAT25K & \(64\times 64\) & 0.37 & \\ \hline Synthetic & \(128\times 128\) & 0.60 & 29.7 \\ SAT25K & \(128\times 128\) & 0.46 & \\ \hline \hline \end{tabular} \end{table} Table 6: Performance Metrics. The synthetic dataset represents the 5000 model-generated samples. SAT25K represents the test split of the corresponding dataset consisting of 5000 samples. The FID score is measured between the Synthetic and SAT25K datasets, while the IoU score is computed for each dataset separately. \(\uparrow\) indicates the higher the better, while \(\downarrow\) indicates the lower the better. \begin{table} \begin{tabular}{l c c} \hline \hline Metrics & 64 \(\times\) 64 & 128 \(\times\) 128 \\ \hline Accuracy & 0.57 & 0.53 \\ Precision & 0.57 & 0.53 \\ Recall & 0.60 & 0.50 \\ F1 & 0.58 & 0.50 \\ \hline \hline \end{tabular} \end{table} Table 7: Human Opinion Study. The results are calculated individually for each respondent and then averaged across all respondents. Both questionnaires feature a balanced distribution of samples, with an equal representation of AI-generated and real images. 
A score of 0.5 represents maximum uncertainty in the context of binary classification. Figure 3: Model generated samples. The alternating rows represent the conditioning segmentation maps and generated images, respectively. (a) Samples generated at \(64\times 64\) resolution. (b) Samples generated at \(128\times 128\) resolution. images at the resolutions of \(64\times 64\) and \(128\times 128\), featuring the participation of 9 out of 12 respondents and 10 out of 12 respondents, respectively. Each questionnaire included 25 randomly selected real samples from the test split of the SAT25K dataset and 25 model-generated samples. To enhance visual perceptibility, we resized the \(64\times 64\) samples to \(128\times 128\) using the nearest neighbors method. Respondents were asked to determine whether a given image was real or AI-generated, with a standardized time limit of 5 seconds for each question. The optimum theoretical outcome of this study would be when the respondents are absolutely indecisive in their responses, to the extent of making completely random guesses. The results of the study are summarized in Table 7, where any deviation from 0.5, which corresponds to random guessing, indicates that humans were able to discern specific discriminative features between AI-generated and real samples. The results suggest that the samples at the resolution of \(128\times 128\) are more effective at deceiving humans. ### Trajectory Fig. 4 (a) illustrates the reverse trajectory of image synthesis, starting as a pure isotropic Gaussian noise at timestep \(t=1000\) and gradually evolving into a noise-free image at timestep \(t=0\) through subsequent denoising operations. The results suggest that, in the early phases of the generation cycle, the \(128\times 128\) model learns the image's underlying structure, texture, and color composition more quickly compared to its \(64\times 64\) counterpart. Consequently, the \(128\times 128\) model dedicates a larger portion of its capacity to resolving high-frequency details present in the images, while the \(64\times 64\) model focuses more on resolving low-frequency details, given the relatively reduced presence of fine-grained information at this resolution. Figure 4: Analysis techniques. The top row corresponds to the \(64\times 64\) experiment, while the bottom row represents the \(128\times 128\) experiment. (a) An evolution of the image generation process over a reverse diffusion trajectory represented at various diffusion timesteps. (b) Inpainting of the partially corrupted image conditioned on a semantic map. (c) Interpolation between Image A and Image B, given the SegMap corresponding to Image B, where \(\eta\) denotes the interpolation strength. (d) Similarity search between the generated and real samples. The generated image and its closest match from the training dataset are portrayed side-by-side. ### Inpainting Fig. 4 (b) demonstrates the inpainting process, where the image is cut according to the conditioning segmentation map, and the hole is filled with noise. The partially corrupted image undergoes a forward diffusion process at various timesteps. The reconstructed images reveal that both models have developed a deep understanding of object and scene semantics. Images generated over longer trajectories exhibit more detailed object restoration while causing greater modifications to the surrounding scene. In contrast, shorter trajectories produce less detailed object restoration with minimal adjustments to the surrounding area. 
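A sketch of this inpainting procedure is given below. It cuts the image according to a binary mask derived from the conditioning SegMap, fills the hole with noise, forward-diffuses the partially corrupted image to a chosen timestep with the closed-form noising step \(q(x_{t}\,|\,x_{0})\), and then denoises with the `guided_reverse_step` function from the sampling sketch above; the variable names and masking convention are illustrative assumptions.

```python
import torch

@torch.no_grad()
def inpaint(model, x0, mask, segmap, alphas, alphas_bar, t_start=600):
    """Regenerate the region where mask == 1, conditioned on segmap."""
    corrupted = x0 * (1 - mask) + torch.randn_like(x0) * mask   # cut and fill with noise
    ab = alphas_bar[t_start]
    eps = torch.randn_like(x0)
    x_t = ab ** 0.5 * corrupted + (1.0 - ab) ** 0.5 * eps       # forward-diffuse to t_start
    for t in reversed(range(t_start)):
        # reuse the guided reverse step sketched in the sampling section
        x_t = guided_reverse_step(model, x_t, t, segmap, alphas, alphas_bar)
    return x_t
```

Choosing a larger `t_start` corresponds to the longer trajectories discussed above: more of the scene is re-synthesized, giving more detailed object restoration at the cost of larger changes to the surroundings.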
### Interpolation Interpolation between two images is depicted in Fig. 4 (c). Image A presents a minimalistic scene characterized by bare land cover and the absence of foreground objects, while Image B represents a complex scene. Both images undergo the diffusion process up to the predefined timestep \(t=600\). The linear combination of the corrupted images, modeled as \(x_{t}=x_{t}^{(A)}(1-\eta)+x_{t}^{(B)}\eta\), where \(\eta\) denotes the interpolation strength, is fed to the denoising model along with the conditioning SegMap of Image B. The results demonstrate that as \(\eta\) increases, the generated images increasingly incorporate content and style from Image B. The semantically meaningful transitions from Image A to Image B signify the continuity of the denoising network's latent space, as the traversal of the latent space is reflected in smooth transitions in the data space. ### Similarity The generated images are assessed for potential overfitting by comparing them to the entire training split of the SAT25K dataset. For this purpose, a generated sample is encoded using the pre-trained Inception V3 model, and the resulting embedding vector is compared to the embeddings of real samples, with cosine similarity employed as the comparison metric. The closest matching images are showcased in Fig. 4 (d). The results reveal that both models generate distinct samples without exhibiting visual indicators of overfitting. ## 6 Conclusions In this paper, we introduce SatDM, a diffusion-based generative model that is conditioned on semantic layout. In addition, within the scope of this study, we introduce SAT25K, a novel building footprint dataset. Our models achieve state-of-the-art performance in the synthesis of realistic satellite imagery, excelling in both quantitative and qualitative assessments and paving the way for several intriguing applications. The proposed sampler is computationally intensive, and we leave synthesis at higher resolutions for future investigations. ## Acknowledgements We are grateful for the generous support and sponsorship provided by Microsoft for Startups Founders Hub. This support has significantly contributed to the success of our research and development efforts.
2301.03383
On the continuity of the solution map of the Euler-Poincaré equations in Besov spaces
By constructing a series of perturbation functions through localization in the Fourier domain and translation, we show that the data-to-solution map for the Euler-Poincar\'e equations is nowhere uniformly continuous in $B^s_{p,r}(\mathbb{R} ^d)$ with $s>\max\{1+\frac d2,\frac32\}$ and $(p,r)\in (1,\infty)\times [1,\infty)$. This improves our previous result which shows the data-to-solution map for the Euler-Poincar\'e equations is non-uniformly continuous on a bounded subset of $B^s_{p,r}(\mathbb{R} ^d)$ near the origin.
Min Li
2022-11-09T08:22:57Z
http://arxiv.org/abs/2301.03383v1
# On the continuity of the solution map of the Euler-Poincare equations in Besov spaces Min Li\({}^{1}\) \({}^{1}\) Department of Mathematics, Jiangxi University of Finance and Economics, Nanchang, 330032, China E-mail: [email protected] **Abstract:** By constructing a series of perturbation functions through localization in the Fourier domain and using a symmetric form of the system, we show that the data-to-solution map for the Euler-Poincare equations is nowhere uniformly continuous in \(B^{s}_{p,r}(\mathbb{R}^{d})\) with \(s>\max\{1+\frac{d}{2},\frac{3}{2}\}\) and \((p,r)\in(1,\infty)\times[1,\infty)\). This improves our previous result which shows the data-to-solution map for the Euler-Poincare equations is non-uniformly continuous on a bounded subset of \(B^{s}_{p,r}(\mathbb{R}^{d})\) near the origin. **Keywords:** Euler-Poincare equations; Nowhere uniformly continuous; Besov spaces; Data-to-solution map. **MSC (2010):** 35Q35, 35Q51, 35L30 ## 1 Introduction In this paper, we consider the Cauchy problem in \(\mathbb{R}^{d}\) for the Euler-Poincare equations \[\left\{\begin{aligned} &\partial_{t}m+u\cdot\nabla m+\nabla u^{T} \cdot m+(\text{div}u)m=0,&&(t,x)\in\mathbb{R}^{+}\times \mathbb{R}^{d},\\ & m=(1-\Delta)u,&&(t,x)\in\mathbb{R}^{+}\times \mathbb{R}^{d},\\ & u(0,x)=u_{0},&& x\in\mathbb{R}^{d}.\end{aligned}\right. \tag{1.1}\] The equations (1.1) were first introduced by Holm, Marsden, and Ratiu in [17, 18] as a high dimensional generalization of the following Camassa-Holm equation for modeling and analyzing nonlinear shallow water waves: \[m_{t}+um_{x}+2u_{x}m=0,\ m=u-u_{xx}.\] (CH) Indeed, when \(d=1\) the Euler-Poincare equations are the same as the Camassa-Holm equation (CH). Also, the Euler-Poincare equations were investigated as the system describing geodesic motion on the diffeomorphism group with respect to the kinetic energy norm in [16]. For \(d=1\), the equation (CH) was introduced by Camassa and Holm [5] as a bi-Hamiltonian model for shallow water waves. Most importantly, the CH equation has peakon solutions of the form \(Ce^{-|x-Ct|}\), which aroused a lot of interest in physics, see [8, 29]. There is an extensive literature about the strong well-posedness, weak solutions and analytic or geometric properties of the CH equation; here we name some. Local well-posedness and ill-posedness for the Cauchy problem of the CH equation were investigated in [9, 12, 13]. Blow-up phenomena and global existence of strong solutions were discussed in [7, 9, 10, 11]. The existence of global weak solutions and dissipative solutions was investigated in [3, 4, 30]; more results can be found in the references therein. The first rigorous analysis of the Euler-Poincare equations (1.1) was done by Chae and Liu [6], who established the local existence of weak solutions in \(W^{2,p}(\mathbb{R}^{d}),\ p>d\), and the local existence of unique classical solutions in \(H^{s}(\mathbb{R}^{d}),\ s>\frac{d}{2}+3\). Yan and Yin [31] further discussed the local existence and uniqueness of the solution to (1.1) in Besov spaces. On the other hand, Li, Yu and Zhai [27] proved that the solutions to (1.1) with a large class of smooth initial data blow up in finite time or exist globally in time, which settled an open problem raised by Chae and Liu [6]. Later, Luo and Yin obtained a new blow-up result in the periodic case by using the rotational invariance of the equation [25]. 
For more results of Euler-Poincare equations, see [25, 32]. Recently, starting from the research of Himonas et al. [14, 15], the continuity properties of the data-to-solution maps of the Camassa-Holm type equations are gradually attracting interest of many authors, see [21, 23]. Most of the non-uniform continuity results are established only on a bounded set near the origin. To overcome this limitation, Inci obtained a series of nowhere uniform continuity results including many Camassa-Holm type equations [19, 20]. And for the incompressible Euler equation, Bourgain and Li [2] showed that the data-to-solution map is nowhere-uniform continuity in \(H^{s}(\mathbb{R}^{d})\) with \(s\geq 0\) by using an idea of localized Galilean boost, this method will inspire us in this article. As part of the well-posedness theory, the continuity properties of the data-to-solution map is indeed very important. In fact, the non-uniform continuity of data-to-solution map suggests that the local well-posedness cannot be established by the contraction mappings principle since this would imply Lipschitz continuity for the solution map. On the other hand, in some critical spaces the continuity of the data-to-solution maps are first broken before the existence and uniqueness of the solution, which leads to ill-posedness [22]. Most previous work on continuity has focused on the spacial one-dimensional Camassa-Holm type equations equations, for the multi-dimensional Euler-Poincare equations (1.1), the continuity problem has not been thoroughly investigated. Until recently, Li et al. [24] shown that the corresponding solution to (1.1) is not uniformly continuous dependence for that the initial data in \(H^{s}(\mathbb{R}^{d}),s>1+\frac{d}{2}\). Later, the non-uniformly continuous result was extended to Besov space \(B^{s}_{p,r}(\mathbb{R}^{d}),s>\max\{1+\frac{d}{2},\frac{3}{2}\}\) in [26]. It is worth to mention that, the non-uniform continuity results of (1.1) are established only on a bounded set near the origin. In this paper, we will remove the boundedness restriction and prove that the data-to-solution map of the Euler-Poincare equations (1.1) is not uniformly continuous on any open subset \(U\subset B^{s}_{p,r}(\mathbb{R}^{d}),s>\max\{1+\frac{d}{2},\frac{3}{2}\}\). Technically, our proof based on a symmetric form of the equation (1.1), and a translation method to construct perturbation data, this method was introduced by Bourgain and Li [2] to proof the nowhere uniform continuity of the incompressible Euler equations. For simplicity, we first transform Eq.(1.1) into a transport type system. According to Yan [31], we can rewrite (1.1) to the following nonlocal form: \[\partial_{t}u+u\cdot\nabla u=Q(u,u)+R(u,u), \tag{1.2}\] where \[\begin{cases}&Q(u,v)=-(I-\Delta)^{-1}\mathrm{div}\Big{(}\nabla u \nabla v+\nabla u\nabla v^{T}-\nabla u^{T}\nabla v-\nabla u(\mathrm{div}v)+ \frac{1}{2}(\nabla u:\nabla v)\mathbf{I}\Big{)},\\ &R(u,v)=-(I-\Delta)^{-1}\big{(}u\ \mathrm{div}v+\nabla u^{T}v\big{)}.\end{cases} \tag{1.3}\] We now define a symmetric bilinear operator \(\mathcal{T}\) by \[\mathcal{T}(u,v) :=\frac{1}{2}\Big{(}Q(u,v)+Q(v,u)+R(u,v)+R(v,u)\Big{)}\] \[=-(I-\Delta)^{-1}\mathrm{div}\big{(}M(\nabla u,\nabla v)\big{)}-( I-\Delta)^{-1}\big{(}N(u,\nabla u;v,\nabla v)\big{)}, \tag{1.4}\] here \(M,N\) are bilinear functions of \((\nabla u,\nabla v)\) and \((u,\nabla u;v,\nabla v)\) respectively according to (1.3), they are symmetric on \(u,v\). 
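For later use, note that the symmetrization does not change the equation itself: evaluating \(\mathcal{T}\) on the diagonal gives \[\mathcal{T}(u,u)=\tfrac{1}{2}\big(Q(u,u)+Q(u,u)+R(u,u)+R(u,u)\big)=Q(u,u)+R(u,u),\] so replacing the right-hand side of (1.2) by \(\mathcal{T}(u,u)\) yields the same Cauchy problem, now written with an explicitly symmetric bilinear operator.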
Then, the Euler-Poincare equations becomes \[\begin{cases}\partial_{t}u+u\cdot\nabla u=\mathcal{T}(u,u),&(t,x)\in\mathbb{R }^{+}\times\mathbb{R}^{d},\\ \omit\span\@@LTX@noalign{\vskip 6.0pt plus 2.0pt minus 2.0pt}\omit\cr u(0,x)=u_{0},&x\in \mathbb{R}^{d}.\end{cases}\] We first recall the non-uniform continuity results established in [26]. **Theorem 1.1** (**Non-uniform continuity on a bounded set)**.: _Let \(d\geq 2\) and \(s>2+\max\big{\{}1+\frac{d}{p},\frac{3}{2}\big{\}}\) with \(1\leq p,r\leq\infty\). The data-to-solution map \(S_{t}\) for Euler-Poincare equations (E-P) is not uniformly continuous from any bounded subset \(O_{N}=\{u_{0}\in B^{s}_{p,r}(\mathbb{R}^{d}):\|u_{0}\|_{B^{s}_{p,r}}\leq N\}\) into \(\mathcal{C}([0,T];B^{s}_{p,r})\). More precisely, there exists two sequences of initial data \(f_{n}+g_{n},\ f_{n}\) such that_ \[\|f_{n}\|_{B^{s}_{p,r}}\lesssim 1\quad\text{and}\quad\lim_{n\to\infty}\|g_{n}\|_{ B^{s}_{p,r}}=0,\] _with the solutions \(S_{t}(f_{n}+g_{n}),\ S_{t}(f_{n})\) satisfy_ \[\liminf_{n\to\infty}\|S_{t}(f_{n}+g_{n})-S_{t}(f_{n})\|_{B^{s}_{p,r}}\geq c_{0 }t,\ \forall t\in[0,T_{0}],\] _for some constant \(c_{0}>0\) and small time \(T_{0}\)._ The main result of this paper is the following theorem. **Theorem 1.2** (**Nowhere uniform continuity)**.: _Assume that \(d\geq 2\), and_ \[s>2+\max\big{\{}1+\frac{d}{p},\frac{3}{2}\big{\}}\quad\text{and}\quad(p,r)\in(1, \infty)\times[1,\infty). \tag{1.5}\] _Then the data-to-solution map \(S_{t}\) for Euler-Poincare equations for the Cauchy problem (E-P)_ \[S_{t}:B^{s}_{p,r}(\mathbb{R}^{d})\to\mathcal{C}([0,T];B^{s}_{p,r}),\quad u_{0} \mapsto S_{t}(u_{0}),\] _is nowhere uniformly continuous from \(B^{s}_{p,r}\) into \(\mathcal{C}([0,T];B^{s}_{p,r})\). More precisely, for any \(u_{0}\in B^{s}_{p,r}\) and \(N>0\), there exists two sequences of functions \(f_{n}(x),g_{n}(x)\) such that_ \[\|f_{n}\|_{B^{s}_{p,r}}\lesssim 2^{-N}\quad\text{and}\quad\lim_{n\to\infty}\|g_{ n}\|_{B^{s}_{p,r}}=0,\] _the corresponding solutions \(S_{t}(f_{n}+g_{n}),\ S_{t}(f_{n})\) satisfy_ \[\liminf_{n\to\infty}\|S_{t}(u_{0}+f_{n}+g_{n})-S_{t}(u_{0}+f_{n})\|_{B^{s}_{p,r}}\geq c_{0}t,\ \forall t\in[0,T_{0}],\] _for some constant \(c_{0}>0\) and small time \(T_{0}\)._ **Remark 1.1**.: _As a comparison with Theorem 1.1, Theorem 1.2 avoids endpoints \(p=1\) and \(p=\infty\), this is because we need to use the boundedness of Riez transform in \(L^{p}(\mathbb{R}^{d})\) when doing gradient estimate of \(\mathcal{T}\) (see Lemma 3.2 blow), which is only available when \(p\in(1,\infty)\)._ **Remark 1.2**.: _The non-uniform continuity in Theorem 1.1 established only on a bounded set near the origin, in Theorem 1.2 we have removed these restrictions and showed that for any \(u_{0}\) and any neighbour \(U(u_{0})\subset B^{s}_{p,r}\), the data-to-solution map restrict on \(U\) is not uniformly continuous. In this sense, Theorem 1.2 improves the previous results in [26]._ The remainder of this paper is organized as follows. In Section 2, we list some notations and recall basic results of the Littlewood-Paley theory. In Section 3, we present the proof of Theorem 1.2 by establishing some technical lemmas and propositions. ## 2 Littlewood-Paley analysis We first present some facts about the Littlewood-Paley decomposition, the nonhomogeneous Besov spaces and their some useful properties (see [1] for more details). 
Let \(\mathcal{B}:=\{\xi\in\mathbb{R}^{d}:|\xi|\leq 4/3\}\) and \(\mathcal{C}:=\{\xi\in\mathbb{R}^{d}:3/4\leq|\xi|\leq 8/3\}.\) Choose a radial, non-negative, smooth function \(\chi:\mathbb{R}^{d}\mapsto[0,1]\) such that it is supported in \(\mathcal{B}\) and \(\chi\equiv 1\) for \(|\xi|\leq 3/4\). Setting \(\varphi(\xi):=\chi(\xi/2)-\chi(\xi)\), then we deduce that \(\varphi\) is supported in \(\mathcal{C}\). Moreover, \[\chi(\xi)+\sum_{j\geq 0}\varphi(2^{-j}\xi)=1\quad\text{ for any }\xi\in\mathbb{R}^{d}.\] We should emphasize that the fact \(\varphi(\xi)\equiv 1\) for \(4/3\leq|\xi|\leq 3/2\) will be used in the sequel. For every \(u\in\mathcal{S}^{\prime}(\mathbb{R}^{d})\), the inhomogeneous dyadic blocks \(\Delta_{j}\) are defined as follows \[\Delta_{j}u=\begin{cases}0,&if\quad j\leq-2;\\ \chi(D)u=\mathcal{F}^{-1}(\chi\mathcal{F}u),&if\quad j=-1;\\ \varphi(2^{-j}D)u=\mathcal{F}^{-1}\big{(}\varphi(2^{-j}\cdot)\mathcal{F}u \big{)},&if\quad j\geq 0.\end{cases}\] In the inhomogeneous case, the following Littlewood-Paley decomposition makes sense \[u=\sum_{j\geq-1}\Delta_{j}u\quad\text{for any }u\in\mathcal{S}^{\prime}( \mathbb{R}^{d}).\] **Definition 2.1**.: _labelbesov Let \(s\in\mathbb{R}\) and \((p,r)\in[1,\infty]^{2}\). The nonhomogeneous Besov space \(B^{s}_{p,r}(\mathbb{R}^{d})\) is defined by_ \[B^{s}_{p,r}(\mathbb{R}^{d}):=\Big{\{}f\in\mathcal{S}^{\prime}( \mathbb{R}^{d}):\ \|f\|_{B^{s}_{p,r}(\mathbb{R}^{d})}<\infty\Big{\}},\] _where_ \[\|f\|_{B^{s}_{p,r}(\mathbb{R}^{d})}= \begin{cases}\left(\sum_{j\geq-1}2^{sjr}\|\Delta_{j}f\|_{L^{p} (\mathbb{R}^{d})}^{r}\right)^{\frac{1}{r}},&\text{if }1\leq r<\infty,\\ \sup_{j\geq-1}2^{sj}\|\Delta_{j}f\|_{L^{p}(\mathbb{R}^{d})},&\text{if }r= \infty.\end{cases}\] The following Bernstein's inequalities will be used in the sequel. **Lemma 2.1**.: _Let \(\mathcal{B}\) be a Ball and \(\mathcal{C}\) be an annulus. There exist constants \(C>0\) such that for all \(k\in\mathbb{N}\cup\{0\}\), any positive real number \(\lambda\) and any function \(f\in L^{p}(\mathbb{R}^{d})\) with \(1\leq p\leq q\leq\infty\), we have_ \[\operatorname{supp}\hat{f}\subset\lambda\mathcal{B} \ \Rightarrow\ \|D^{k}f\|_{L^{q}}:=\sup_{|\alpha|=k}\|\partial^{\alpha}f\|_{L^{q}}\leq C^ {k+1}\lambda^{k+(\frac{d}{p}-\frac{d}{q})}\|f\|_{L^{p}},\] \[\operatorname{supp}\hat{f}\subset\lambda\mathcal{C} \ \Rightarrow\ C^{-k-1}\lambda^{k}\|f\|_{L^{p}}\leq\|\Delta^{k}f\|_{L^{p}} \leq C^{k+1}\lambda^{k}\|f\|_{L^{p}}.\] **Lemma 2.2** (See [1]).: _Let \((s_{1},s_{2},p,r)\in\mathbb{R}^{2}\times[1,\infty]^{2}\), and \(s_{1}<s_{2},\ 0<\theta<1\), then we have_ \[\|u\|_{B^{\theta s_{2}+(1-\theta)s_{2}}_{p,r}}\leq \|u\|_{B^{s_{1}}_{p,r}}^{\theta}\|u\|_{B^{s_{2}}_{p,r}}^{1-\theta},\] \[\|u\|_{B^{\theta s_{1}+(1-\theta)s_{2}}_{p,1}}\leq \frac{C}{s_{2}-s_{1}}\Big{(}\frac{1}{\theta}+\frac{1}{1-\theta} \Big{)}\|u\|_{B^{s_{1}}_{p,\infty}}^{\theta}\|u\|_{B^{s_{2}}_{p,\infty}}^{1- \theta}.\] Then, we give some important product estimates which will be used throughout the paper. **Lemma 2.3** (See [1]).: _For \((p,r)\in[1,\infty]^{2}\) and \(s>0\), \(B^{s}_{p,r}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d})\) is an algebra. 
Moreover, for any \(u,v\in B^{s}_{p,r}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d})\), we have_ \[\|uv\|_{B^{s}_{p,r}}\leq C(\|u\|_{B^{s}_{p,r}}\|v\|_{L^{\infty}}+\|v\|_{B^{s}_{ p,r}}\|u\|_{L^{\infty}}).\] _In addition, if \(s>\max\big{\{}1+\frac{d}{p},\frac{3}{2}\big{\}}\), then_ \[\|uv\|_{B^{s-2}_{p,r}(\mathbb{R}^{d})}\leq C\|u\|_{B^{s-2}_{p,r}( \mathbb{R}^{d})}\|v\|_{B^{s-1}_{p,r}(\mathbb{R}^{d})}.\] **Lemma 2.4** (See [1, 28]).: _Let \((p,r)\in[1,\infty]^{2}\) and \(\sigma\geq-\min\big{\{}\frac{d}{p},1-\frac{d}{p}\big{\}}\). Assume that \(f_{0}\in B^{\sigma}_{p,r}(\mathbb{R}^{d})\), \(g\in L^{1}([0,T];B^{\sigma}_{p,r}(\mathbb{R}^{d}))\) and \(\nabla\mathbf{u}\in L^{1}([0,T];B^{\sigma-1}_{p,r}(\mathbb{R}^{d}))\) if \(\sigma>1+\frac{d}{p}\) or \(\sigma=1+\frac{d}{p},r=1\). If \(f\in L^{\infty}([0,T];B^{\sigma}_{p,r}(\mathbb{R}^{d}))\cap\mathcal{C}([0,T]; \mathcal{S}^{\prime}(\mathbb{R}^{d}))\) solves the following linear transport equation:_ \[\partial_{t}f+\mathbf{u}\cdot\nabla f=g,\ \ \ \ f|_{t=0}=f_{0}.\] _1. There exists a constant \(C=C(\sigma,p,r)\) such that the following statement holds_ \[\|f(t)\|_{B^{\sigma}_{p,r}}\leq e^{CV(t)}\Big{(}\|f_{0}\|_{B^{\sigma}_{p,r}}+ \int_{0}^{t}e^{-CV(\tau)}\|g(\tau)\|_{B^{\sigma}_{p,r}}\mathrm{d}\tau\Big{)},\] _where_ \[V(t)=\int_{0}^{t}\|\nabla\mathbf{u}(\tau)\|_{B^{\sigma-1}_{p,r}}\mathrm{d}\tau \ \ \text{ if }\ \ \sigma>1+\frac{d}{p}\ \ \ \text{or}\ \ \ \{\sigma=1+\frac{d}{p},\ r=1\}.\] _2. If \(\sigma>0\), then there exists a constant \(C=C(\sigma,p,r)\) such that the following holds_ \[\|f(t)\|_{B^{\sigma}_{p,r}}\leq \|f_{0}\|_{B^{\sigma}_{p,r}}+\int_{0}^{t}\|g(\tau)\|_{B^{\sigma}_{ p,r}}\mathrm{d}\tau\] \[+\int_{0}^{t}\Big{(}\|f(\tau)\|_{B^{\sigma}_{p,r}}\|\nabla\mathbf{ u}\|_{L^{\infty}}+\|\nabla\mathbf{u}\|_{B^{\sigma-1}_{p,r}}\|\nabla f(\tau)\|_{L^{ \infty}}\Big{)}\mathrm{d}\tau.\] ## 3 Proof of the main theorem We first recall the local existence and uniqueness theory of solutions for the Cauchy problem (1.1) in Besov spaces [31], then provide some technical lemmas and propositions. ### Preparation and technical lemmas **Lemma 3.1** (See [31]).: _Assume that_ \[d\in\mathbb{N}_{+},1\leq p,r\leq\infty\text{ and }s>\max\{1+\frac{d}{p}, \frac{3}{2}\}. \tag{3.1}\] _Let \(u_{0}\in B^{s}_{p,r}(\mathbb{R}^{d})\), then there exists a time \(T=T(\|u_{0}\|_{B^{s}_{p,r}(\mathbb{R}^{d})})>0\) such that (1.1) has a unique solution in_ \[\left\{\begin{aligned} C([0,T];B^{s}_{p,r}(\mathbb{R}^{d}))\cap C^{ 1}([0,T];B^{s-1}_{p,r}(\mathbb{R}^{d})),& if\ \ r<\infty,\\ L^{\infty}([0,T];B^{s}_{p,\infty}(\mathbb{R}^{d}))\cap Lip([0,T];B^{s-1 }_{p,\infty}(\mathbb{R}^{d})),& if\ \ r=\infty.\end{aligned}\right.\] _And the mapping \(u_{0}\mapsto u\) is continuous from \(B^{s}_{p,r}(\mathbb{R}^{d})\) into \(C([0,T];B^{s^{\prime}}_{p,r}(\mathbb{R}^{d}))\cap C^{1}([0,T];B^{s^{\prime}-1 }_{p,r}(\mathbb{R}^{d}))\) for all \(s^{\prime}<s\) if \(r=\infty\), and \(s^{\prime}=s\) otherwise. 
Moreover, for all \(t\in[0,T]\), there holds_ \[\|u(t)\|_{B^{s}_{p,r}(\mathbb{R}^{d})}\leq C\|u_{0}\|_{B^{s}_{p,r}(\mathbb{R}^ {d})}.\] **Lemma 3.2**.: _Let \((s,p,r)\) satisfy (1.5), then for the symmetric bilinear operator \(\mathcal{T}(f,g)\) defined by (1.3) and (1.4), we have_ \[\|\mathcal{T}(f,g)\|_{B^{s}_{p,r}}\leq C\|f\|_{B^{s}_{p,r}}\|g\|_{B^{s}_{p,r}} \tag{3.2}\] _If \(0<p<\infty\), there holds_ \[\|\mathcal{T}(f,g)\|_{L^{p}} \leq\|\nabla f\|_{L^{p}}\|g,\nabla g\|_{L^{\infty}} \tag{3.3}\] \[\|\mathcal{T}(f,g)\|_{L^{p}} \leq\sum_{0\leq|a|,|b|\leq 1}\|\partial^{a}f\partial^{b}g\|_{L^{p}}=W _{1,p}(f,g) \tag{3.4}\] _And, for the gradient \(\nabla\mathcal{T}\), we have_ \[\|\nabla\mathcal{T}(f,g)\|_{L^{p}} \leq\|\nabla f\|_{L^{p}}\|g,\nabla g\|_{L^{\infty}} \tag{3.5}\] \[\|\nabla\mathcal{T}(f,g)\|_{L^{p}} \leq\sum_{0\leq|a|,|b|\leq 2}\|\partial^{a}f\partial^{b}g\|_{L^{p} }=W_{2,p}(f,g) \tag{3.6}\] _Where we denote \(W_{m,p}(f,g)=\sum_{0\leq|a|,|b|\leq m}\|\partial^{a}f\partial^{b}g\|_{L^{p}}\) with the multiindex \(a=(a_{1},a_{2},\cdots,a_{d}),\ |a|=a_{1}+\cdots+a_{d}\) and \(\partial^{a}=\frac{\partial^{|a|}}{\partial x_{1}^{a_{1}}\cdots\partial x_{d }^{a_{d}}}\)._ Proof.: As the operator \((I-\Delta)^{-1}\) is a Fourier \(S^{-2}\)-multiplier, it's easy to see that \[\|\mathcal{T}(f,g)\|_{B_{p,r}^{s}} \leq C\|M(\nabla f,\nabla g)\|_{B_{p,r}^{s-1}}+C\|N(f,\nabla f,g, \nabla g)\|_{B_{p,r}^{s-2}}\leq C\|f\|_{B_{p,r}^{s}}\|g\|_{B_{p,r}^{s}},\] here we have use the Lemma 2.3. Then in \(L^{p}\) spaces, \[\|\mathcal{T}(f,g)\|_{L^{p}} =\|(I-\Delta)^{-1}\mathrm{div}\big{(}M(\nabla f,\nabla g)\big{)}+ (I-\Delta)^{-1}\big{(}N(f,\nabla f,g,\nabla g)\big{)}\|_{L^{p}}\] \[\leq\|M(\nabla f,\nabla g)\|_{L^{p}}+\|N(f,\nabla f,g,\nabla g)\| _{L^{p}}\] \[\leq\|\nabla f\|_{L^{p}}\|\nabla g\|_{L^{\infty}}+\|\nabla f\|_{ L^{p}}\|g\|_{L^{\infty}}\] \[\leq\|\nabla f\|_{L^{p}}\|g,\nabla g\|_{L^{\infty}}\] we also have \[\|\mathcal{T}(f,g)\|_{L^{p}} \leq\|M(\nabla f,\nabla g)\|_{L^{p}}+\|N(f,\nabla f,g,\nabla g) \|_{L^{p}}\] \[\leq\sum_{0\leq|a|,|b|\leq 1}\|\partial^{a}f\partial^{b}g\|_{L^{p}}=W _{1,p}(f,g)\] For the gradient \(\nabla\mathcal{T}\), noting that \((I-\Delta)^{-1}\partial_{i}\partial_{j}=-\Delta(I-\Delta)^{-1}\big{(}(-\Delta )^{-1}\partial_{i}\partial_{j}\big{)}=\big{(}(1-\Delta)^{-1}+1\big{)}R_{i}R_{j}\) and the Riesz transform \(R_{i}\) is bounded in \(L^{p}\to L^{p},\ p\in(1,\infty)\), then we have \[\|\nabla\mathcal{T}(f,g)\|_{L^{p}} =\|\nabla(I-\Delta)^{-1}\mathrm{div}\big{(}M(\nabla f,\nabla g) \big{)}+\nabla(I-\Delta)^{-1}\big{(}N(f,\nabla f,g,\nabla g)\big{)}\|_{L^{p}}\] \[\leq\|M(\nabla f,\nabla g)\|_{L^{p}}+\|N(f,\nabla f,g,\nabla g)\| _{L^{p}}\] \[\leq\|\nabla f\|_{L^{p}}\|\nabla g\|_{L^{\infty}}+\|\nabla f\|_{ L^{p}}\|g\|_{L^{\infty}}\] \[\leq\|\nabla f\|_{L^{p}}\|g,\nabla g\|_{L^{\infty}}\] and \[\|\nabla\mathcal{T}(f,g)\|_{L^{p}} \leq\|\mathrm{div}M(\nabla f,\nabla g)\|_{L^{p}}+\|N(f,\nabla f,g,\nabla g)\|_{L^{p}}\] \[\leq\sum_{0\leq|a|,|b|\leq 2}\|\partial^{a}f\partial^{b}g\|_{L^{p} }=W_{2,p}(f,g)\] We'll need the following estimates of the difference \(u(t)-v(t)\) in Besov spaces. **Proposition 3.1**.: _Let \(1\leq p,r\leq\infty\) and \(s>\max\{1+\frac{d}{p},\frac{3}{2}\}\). 
Assume that \(u(t),v(t)\) are solutions of (E-P) with initial data \((u_{0},v_{0})\in B^{s}_{p,r}(\mathbb{R}^{d})\), then \(\delta(t):=u(t)-v(t)\) satisfies_ \[\|\delta(t)\|_{B^{s-1}_{p,r}}\leq\|\delta_{0}\|_{B^{s-1}_{p,r}}\exp\big{(}C\int _{0}^{t}\|u(\tau),v(\tau)\|_{B^{s}_{p,r}}d\tau\big{)}\] _and_ \[\|\delta(t)\|_{B^{s}_{p,r}}\leq\Big{(}\|\delta_{0}\|_{B^{s}_{p,r}}+C\int_{0}^{ t}\|\delta\|_{B^{s-1}_{p,r}}\|\nabla v\|_{B^{s}_{p,r}}d\tau\Big{)}\exp\big{(}C \int_{0}^{t}\|u(\tau),v(\tau)\|_{B^{s}_{p,r}}d\tau\big{)} \tag{3.7}\] Proof.: The first inequality has been proved in [31], it remains to prove (3.7). As \(\mathcal{T}\) is a symmetric bilinear operator, it's easy to deduce that \(\delta=u-v\) solves the transport equation \[\partial_{t}\delta+u\cdot\nabla\delta=-\delta\cdot\nabla v+\mathcal{T}(\delta,u+v). \tag{3.8}\] Then, by Lemma 2.4 and 3.2 \[\|\delta(t)\|_{B^{s}_{p,r}}\leq \|\delta_{0}\|_{B^{s}_{p,r}}+C\int_{0}^{t}\Big{(}\|u\|_{B^{s}_{p, r}}\|\delta\|_{B^{s}_{p,r}}+\|\delta\cdot\nabla v\|_{B^{s}_{p,r}}+\|\mathcal{T}( \delta,u+v)\|_{B^{s}_{p,r}}\Big{)}d\tau\] \[\leq \|\delta_{0}\|_{B^{s}_{p,r}}+C\int_{0}^{t}\Big{(}\|u(\tau),v(\tau )\|_{B^{s}_{p,r}}\|\delta\|_{B^{s}_{p,r}}+\|\delta\|_{B^{s-1}_{p,r}}\|\nabla v \|_{B^{s}_{p,r}}\Big{)}d\tau.\] now (3.7) is direct result from Gronwall's inequality. **Proposition 3.2**.: _Suppose \(\widetilde{u}(t),u(t),v(t)\) are the solutions of (E-P) of initial data \(u_{0}+v_{0},u_{0},v_{0}\) respectively. Then, under the assumptions of (1.5), we have_ \[\|\widetilde{u}-u-v\|_{B^{s}_{p,r}}\leq C\|u_{0},v_{0}\|_{B^{s+1}_{p,\infty}}^ {1-\theta}\exp\Big{(}\|u_{0},v_{0}\|_{B^{s+1}_{p,r}}\theta\Big{)}\Big{(}\int_ {0}^{t}W_{2,p}(u,v)d\tau\Big{)}^{\theta},\] _where \(\theta=\frac{1}{s+1}\) and use the notation \(W_{2,p}(u,v)=\sum_{0\leq|a|,|b|\leq 2}\|\partial^{a}u\partial^{b}v\|_{L^{p}}\)._ Proof.: Since \(\widetilde{u}(t),u(t),v(t)\) are solutions of \[\left\{\begin{aligned} &\partial_{t}\widetilde{u}+\widetilde{u} \cdot\nabla\widetilde{u}=\mathcal{T}(\widetilde{u},\widetilde{u}),&& \widetilde{u}(0)=u_{0}+v_{0},\\ &\partial_{t}u+u\cdot\nabla u=\mathcal{T}(u,u),&& u(0)=u_{0},\\ &\partial_{t}v+v\cdot\nabla v=\mathcal{T}(v,v),&& v(0)=v_{0}.\end{aligned}\right.\] by the symmetry and linearity of \(\mathcal{T}\), we can deduce that \(w(t)=\widetilde{u}(t)-u(t)-v(t)\) satisfies \[\left\{\begin{aligned} &\partial_{t}w+\widetilde{u}\cdot\nabla w=& -w\cdot\nabla(u+v)+\mathcal{T}(w,\widetilde{u}+u+v)\\ &-u\cdot\nabla v-v\cdot\nabla u-2\mathcal{T}(u,v),\\ w(0)=& 0.\end{aligned}\right. 
\tag{3.9}\] By the interpolation inequality (see Lemma 2.2 ), we obtain \[\|w\|_{B^{s}_{p,r}}\leq C\|w\|^{\theta}_{B^{0}_{p,\infty}}\|w\|^{1- \theta}_{B^{s+1}_{p,\infty}}\leq\|u_{0},v_{0}\|^{1-\theta}_{B^{s+1}_{p,\infty}} \|w\|^{\theta}_{L^{p}} \tag{3.10}\] The rest of the proof is to bound the \(L^{p}\) norm of \(w\), taking the inner product of (3.9) with \(\widetilde{w}^{p-1}:=(|w_{1}|^{p-2}w_{1},|w_{2}|^{p-2}w_{2},\cdots,|w_{d}|^{p-2 }w_{d})\), we obtain \[\frac{1}{p}\frac{d}{dt}\|w\|^{p}_{L^{p}}= \sum_{i=1}^{d}\int p^{-1}|w_{i}|^{p}(\mathrm{div}\widetilde{u})dx -\sum_{i,j}\int\widetilde{w}^{p-1}_{i}w_{j}\partial_{j}(u_{i}+v_{i})dx\] \[+\sum_{i=1}^{d}\int\widetilde{w}^{p-1}_{i}\mathcal{T}_{i}(w, \widetilde{u}+u+v)dx-\sum_{i=1}^{d}\int\widetilde{w}^{p-1}_{i}(u\cdot\nabla v_ {i}+v\cdot\nabla u_{i}+\mathcal{T}_{i}(u,v)dx\] \[\leq \frac{1}{p}\|\mathrm{div}\widetilde{u}\|_{L^{\infty}}\|w\|^{p}_{ L^{p}}+C_{d}(\|\nabla u\|_{L^{\infty}}+\|\nabla v\|_{L^{\infty}})\|w\|^{p}_{L^{p}}\] \[+\|w\|^{p-1}_{L^{p}}\|\mathcal{T}(w,\widetilde{u}+u+v)\|_{L^{p}}+ \|w\|^{p-1}_{L^{p}}\|u\cdot\nabla v+v\cdot\nabla u+\mathcal{T}(u,v)\|_{L^{p}} \tag{3.11}\] Thanks to the estimates of \(\mathcal{T}\) in Lemma 3.2, in particular take (3.3), (3.4), into (3.11) we have \[\frac{d}{dt}\|w\|_{L^{p}} \leq C(\|\mathrm{div}\widetilde{u}\|_{L^{\infty}}+\|\nabla u\|_{L^ {\infty}}+\|\nabla v\|_{L^{\infty}})\|w\|_{L^{p}}\] \[+C(\|\widetilde{u},\nabla\widetilde{u}\|_{L^{\infty}}+\|u,\nabla u \|_{L^{\infty}}+\|v,\nabla v\|_{L^{\infty}})\|\nabla w\|_{L^{p}}+W_{2,p}(u,v)\] \[\leq\|u_{0},v_{0}\|_{B^{s}_{p,r}}(\|w\|_{L^{p}}+\|\nabla w\|_{L^ {p}})+W_{2,p}(u,v)\] Now, we should bound the gradient matrix \(\nabla w\), take the gradient to (3.9), then in components \[\partial_{t}\partial_{j}w_{i}= -\widetilde{u}_{k}\partial_{k}\partial_{j}w_{i}-\partial_{j} \widetilde{u}_{k}\partial_{k}w_{i}+\partial_{j}T_{i}(w,\widetilde{u}+u+v)\] \[-w_{k}\partial_{k}(\partial_{j}u_{i}+\partial_{j}v_{i})-\partial _{j}w_{k}\partial_{k}(u_{i}+v_{i})-\partial_{j}\big{(}u_{k}\partial_{k}v_{i}-v _{k}\partial_{k}u_{i}-2\mathcal{T}_{i}(u,v)\big{)}\] Taking the \(L^{2}\) inner product with \(\widetilde{w}^{p-1}_{i,j}:=|\partial_{j}w_{i}|^{p-2}\partial_{j}w_{i}\) and sum the indices \(i,j\), we get \[\frac{1}{p}\frac{d}{dt}\|\nabla w\|^{p}_{L^{p}}= \sum_{1\leq i,j\leq d}\int p^{-1}|\partial_{j}w_{i}|^{p}(\mathrm{ div}\widetilde{u})dx-\int\nabla\widetilde{w}^{p-1}:(\nabla w\nabla \widetilde{u})dx+\int\nabla\widetilde{w}^{p-1}:\nabla\mathcal{T}(w, \widetilde{u}+u+v)dx\] \[-\int\nabla\widetilde{w}^{p-1}:\big{(}w\cdot\nabla(\nabla u+ \nabla v)\big{)}dx-\int\nabla\widetilde{w}^{p-1}:\big{(}(\nabla u+\nabla v) \nabla w\big{)}dx\] \[-\int\nabla\widetilde{w}^{p-1}:\nabla(u\cdot\nabla v+v\cdot \nabla u+2\mathcal{T}(u,v))dx\] \[\leq \frac{1}{p}\|\mathrm{div}\widetilde{u}\|_{L^{\infty}}\|\nabla w\|^ {p}_{L^{p}}+C_{d}\|\nabla\widetilde{u}\|_{L^{\infty}}\|\nabla w\|^{p}_{L^{p}}+ \|\nabla w\|^{p-1}_{L^{p}}\|\nabla\mathcal{T}(w,\widetilde{u}+u+v)\|_{L^{p}}\] \[+\|\nabla w\|^{p-1}_{L^{p}}\|w\|_{L^{p}}\big{(}\|\nabla^{2}u\|_{L ^{\infty}}+\|\nabla^{2}v\|_{L^{\infty}}\big{)}+\|\nabla w\|^{p}_{L^{p}}\big{(} \|\nabla u\|_{L^{\infty}}+\|\nabla v\|_{L^{\infty}}\big{)}\] \[+|\nabla w\|^{p-1}_{L^{p}}\|\nabla\big{(}u\cdot\nabla v+v\cdot \nabla u+2\mathcal{T}(u,v)\big{)}\|_{L^{p}} \tag{3.12}\] where we denote \(\nabla\widetilde{w}^{p-1}=(\widetilde{w}^{p-1}_{i,j})_{d\times d}\) and \(A:B:=\sum_{i,j}a_{i,j}b_{i,j}\). 
Again using Proposition 3.2 for the matrix operator \(\nabla\mathcal{T}\), by plug (3.5),(3.6) into (3.12), we obtain \[\frac{d}{dt}\|\nabla w\|_{L^{p}} \leq C(\|\nabla\widetilde{u}\|_{L^{\infty}}+\|\nabla u\|_{L^{ \infty}}+\|\nabla v\|_{L^{\infty}})\|\nabla w\|_{L^{p}}+\|w\|_{L^{p}}\big{(}\| \nabla^{2}u\|_{L^{\infty}}+\|\nabla^{2}v\|_{L^{\infty}}\big{)}\] \[+\|\nabla\mathcal{T}(w,\widetilde{u}+u+v)\|_{L^{p}}+\|\nabla \big{(}u\cdot\nabla v+v\cdot\nabla u+2\mathcal{T}(u,v)\big{)}\|_{L^{p}}\] \[\leq C(\|\widetilde{u},\nabla\widetilde{u}\|_{L^{\infty}}+\|u, \nabla u\|_{L^{\infty}}+\|v,\nabla v\|_{L^{\infty}})\|\nabla w\|_{L^{p}}\] \[+\big{(}\|\nabla^{2}u\|_{L^{\infty}}+\|\nabla^{2}v\|_{L^{\infty}} \big{)}\|w\|_{L^{p}}+W_{2,p}(u,v)\] \[\leq C\|u_{0},v_{0}\|_{B^{s+1}_{p,r}}(\|w\|_{L^{p}}+\|\nabla w\|_ {L^{p}})+W_{2,p}(u,v)\] Combining (3.14) and (3.17) yields that \[\frac{d}{dt}\|w,\nabla w\|_{L^{p}}\leq C\|u_{0},v_{0}\|_{B^{s+1}_{p,r}}(\|w\|_ {L^{p}}+\|\nabla w\|_{L^{p}})+2W_{2,p}(u,v)\] By Gronwall's inequality and (3.10) we complete the proof. **Remark 3.1**.: _The proofs of Proposition 3.1 and 3.2 rely on the symmetry of \(\mathcal{T}\), especially when it comes to getting simplified equations (3.8) and (3.9). Most previous studies on the well-posedness of Euler-Poincare equations use the bilinear form (1.2), the lack of symmetry makes the calculation complicated. Infact, when \(d=1\) namely the Camassa-Holm equation has the transport form \(\partial_{t}u+u\partial_{x}u=P(u,u)\) with \(P(u,v)=-\partial_{x}(1-\partial_{x}^{2})^{-1}\big{(}uv+\frac{1}{2}(\partial_{ x}u\partial_{x}v)\big{)}\) is symmetric by default. In this respect, our new form (E-P) is a more natural high-dimensional generalization of the CH equation._ ### Construction of Perturbation Data For localization in the Fourier domain, we introduce the following bump function in the frequency space. Let \(\widehat{\phi}\in\mathcal{C}_{0}^{\infty}(\mathbb{R})\) be a non-negative and even function satisfy \[\widehat{\phi}(\xi)=\begin{cases}1,&\text{if }|\xi|\leq\frac{1}{4},\\ 0,&\text{if }|\xi|\geq\frac{1}{2}.\end{cases}\] and let \[\begin{cases}&f_{n}=2^{-ns-N}\big{(}\cos(\tfrac{17}{12}2^{n}x_{1})\phi(x_{1}) \phi(x_{2})\cdots\phi(x_{d}),0,\cdots,0\big{)}\\ &g_{n}=\big{(}2^{-n}\phi(x_{1})\phi(x_{2})\cdots\phi(x_{d}),0,\cdots,0\big{)}. 
\end{cases} \tag{3.13}\] We define the perturbation data by adding a translation transform \[\begin{cases}&f_{n}^{m}=f_{n}(x_{1}-m,x_{2},\cdots,x_{d})\\ &g_{n}^{m}=g_{n}(x_{1}-m,x_{2},\cdots,x_{d})\end{cases} \tag{3.14}\] Noting that \(\widehat{f_{n}^{m}}\) is supported in \([-\frac{1}{2},\frac{1}{2}]^{d}\pm(\frac{17}{12}2^{n},0,\cdots,0)\), this support set is completely covered by the ring \(C_{n}=\{\xi\in\mathbb{R}^{d}:\frac{4}{3}2^{n}\leq|\xi|\leq\frac{3}{2}2^{n}\}.\) Thus, by the definition of \(\Delta_{j}\), we know \[\Delta_{j}(f_{n})=\begin{cases}\,f_{n}^{m},&\text{if }j=n,\\ 0,&\text{if }j\neq n.\end{cases} \tag{3.15}\] On account of above and the definition of Besov space, we can show that for \(k\in\mathbb{R}\) \[\|f_{n}^{m}\|_{B^{s+k}_{p,r}}\leq C2^{kn-N}\qquad and\qquad\|g_{n}^{m}\|_{B^{s+k }_{p,r}}\to 0\qquad for\ n\to\infty \tag{3.16}\] By the previous work [26] and translation invariance of the system (E-P), we know that, for the corresponding solutions \(S_{t}(f_{n}^{m}+g_{n}^{m})\) and \(S_{t}(f_{n}^{m})\) there is a positive constant \(c_{0}\) and a small time \(T_{0}\), such that for any \(t\in[0,T_{0}]\), \[\liminf_{n\to\infty}\|S_{t}(f_{n}^{m}+g_{n}^{m})-S_{t}(f_{n}^{m})\|_{B^{s}_{p, r}}\geq c_{0}t. \tag{3.17}\] ### Proof of Theorem 1.2 Roughly speaking, our proof of Theorem 1.2 based on the following approximation \[S_{t}(u_{0}+f_{n}^{m}+g_{n}^{m})-S_{t}(u_{0}+f_{n}^{m})\] (I) \[=S_{t}(S_{n}u_{0}+f_{n}^{m}+g_{n}^{m})-S_{t}(S_{n}u_{0}+f_{n}^{m} )+\mathcal{E}_{n}^{m}\] (II) \[=\Big{(}S_{t}(S_{n}u_{0})+S_{t}(f_{n}^{m}+g_{n}^{m})\Big{)}- \Big{(}S_{t}(S_{n}u_{0})+S_{t}(f_{n}^{m})\Big{)}+\mathcal{E}_{n,m}\] \[=S_{t}(f_{n}^{m}+g_{n}^{m})-S_{t}(f_{n}^{m})+\mathcal{E}_{n,m},\] (III) with some small error terms \(\mathcal{E}_{n}^{m},\ \mathcal{E}_{n,m}\). More precisely, we devide (I) into three parts \[S_{t}(u_{0}+f_{n}^{m}+g_{n}^{m})-S_{t}(u_{0}+f_{n}^{m})=\] \[\underbrace{\Big{(}S_{t}(u_{0}+f_{n}^{m}+g_{n}^{m})-S_{t}(S_{n}u_ {0}+f_{n}^{m}+g_{n}^{m})\Big{)}-\Big{(}S_{t}(u_{0}+f_{n}^{m})-S_{t}(S_{n}u_{0} +f_{n}^{m})\Big{)}}_{\mathcal{E}_{n}^{m}}+\] \[\underbrace{\Big{(}S_{t}(S_{n}u_{0}+f_{n}^{m}+g_{n}^{m})-S_{t}(S_ {n}u_{0})-S_{t}(f_{n}^{m}+g_{n}^{m})\Big{)}-\Big{(}S_{t}(S_{n}u_{0}+f_{n}^{m} )-S_{t}(S_{n}u_{0})-S_{t}(f_{n}^{m})\Big{)}}_{\mathcal{E}_{n,m}}\] \[+S_{t}(f_{n}^{m}+g_{n}^{m})-S_{t}(f_{n}^{m}). \tag{3.18}\] We proof the approximation \((III)\to(II)\to(I)\) in the following sense. **Proposition 3.3**.: _Let \(f_{n}^{m},\ g_{n}^{m}\) be the perturbation data defined by (3.13) and (3.14), then for any initial data \(u_{0}\in B^{s}_{p,r}\) with \(\|u_{0}\|_{B^{s}_{p,r}}=\rho\), the error terms \(\mathcal{E}_{n}^{m},\ \mathcal{E}_{n,m}\) in (3.18) satisfy_ \[\sup_{m,t}\|\mathcal{E}_{n}^{m}\|_{B^{s}_{p,r}}\leq C_{\rho}\|(I -S_{n})u_{0}\|_{B^{s}_{p,r}}, \tag{3.19}\] \[\lim_{m\to\infty}\big{(}\sup_{0\leq t\leq T}\|\mathcal{E}_{n,m}\| _{B^{s}_{p,r}}\big{)}=0\quad for\ any\ fixed\ n. \tag{3.20}\] _._ Proof.: We first to handle (3.19). 
Using proposition 3.1 with \(\delta(t)=S_{t}(u_{0}+f_{n}^{m})-S_{t}(S_{n}u_{0}+f_{n}^{m})\), as \(\|u_{0}+f_{n}^{m}\|_{B^{s}_{p,r}}\approx\|S_{n}u_{0}+f_{n}^{m}\|_{B^{s}_{p,r}}\) for \(m\in\mathbb{R}\) and \(n\gg 1\), the solution sequences have a common lifespan \(T\approx T^{*}(\|u_{0}\|_{B^{s}_{p,r}})\), then for any \(t\in[0,T)\) we have \[\|\delta(t)\|_{B^{s}_{p,r}}\leq \Big{(}\|(I-S_{n})u_{0}\|_{B^{s}_{p,r}}+\int_{0}^{t}\|\delta(\tau) \|_{B^{s-1}_{p,r}}\|\nabla S_{t}(S_{n}u_{0}+f_{n}^{m})\|_{B^{s}_{p,r}}d\tau \Big{)}\] \[\quad\cdot\exp\big{(}\int_{0}^{t}\|S_{t}(u_{0}+f_{n}^{m}),S_{t}(S _{n}u_{0}+f_{n}^{m})\|_{B^{s}_{p,r}}d\tau\big{)}\] \[\leq \Big{(}\|(I-S_{n})u_{0}\|_{B^{s}_{p,r}}+\int_{0}^{t}\|\delta(\tau) \|_{B^{s-1}_{p,r}}\|S_{n}u_{0}+f_{n}^{m}\|_{B^{s+1}_{p,r}}d\tau\Big{)}\] \[\quad\cdot\exp\big{(}\int_{0}^{t}\|u_{0}+f_{n}^{m},S_{n}u_{0}+f_{ n}^{m}\|_{B^{s}_{p,r}}d\tau\big{)}\] \[\leq C_{\rho}\Big{(}\|(I-S_{n})u_{0}\|_{B^{s}_{p,r}}+\int_{0}^{t}\| \delta(\tau)\|_{B^{s-1}_{p,r}}\cdot 2^{n}d\tau\Big{)} \tag{3.21}\] and \[\|\delta(t)\|_{B^{s-1}_{p,r}}\leq \|(I-S_{n})u_{0}\|_{B^{s-1}_{p,r}}\exp\big{(}\int_{0}^{t}\|S_{t}( u_{0}+f_{n}^{m}),S_{t}(S_{n}u_{0}+f_{n}^{m})\|_{B^{s}_{p,r}}d\tau\big{)}\] \[\leq C_{\rho}2^{-n}\|(I-S_{n})u_{0}\|_{B^{s}_{p,r}} \tag{3.22}\] take (3.22) into (3.21) we get \[\|\delta(t)\|_{B^{s}_{p,r}}\leq C_{\rho}\|(I-S_{n})u_{0}\|_{B^{s}_{p,r}}. \tag{3.23}\] As in (3.23) the \(C_{\rho}\) not depend on the translation parameter \(m\) and \(t\in[0,T]\), then we have \[\sup_{m,t}\|S_{t}(u_{0}+f_{n}^{m})-S_{t}(S_{n}u_{0}+f_{n}^{m})\|_{B^{s}_{p,r}} =\sup_{m,t}\|\delta(t)\|_{B^{s}_{p,r}}\leq C_{\rho}\|(I-S_{n})u_{0}\|_{B^{s}_{ p,r}}. \tag{3.24}\] With exactly the same argument, we can deduce that \[\sup_{m,t}\|S_{t}(u_{0}+f_{n}^{m}+g_{n}^{m})-S_{t}(S_{n}u_{0}+f_{n}^{m}+g_{n}^ {m})\|_{B^{s}_{p,r}}\leq C_{\rho}\|(I-S_{n})u_{0}\|_{B^{s}_{p,r}},\] along with (3.24), we complete the proof of (3.19). In order to deduce (3.20), we should use Proposition 3.2 with the setting \(\widetilde{u}(t)=S_{t}(S_{n}u_{0}+f_{n}^{m}),u(t)=S_{t}(S_{n}u_{0})\) and \(v(t)=S_{t}(f_{n}^{m})\), and denote \[w=\widetilde{u}-u-v=S_{t}(S_{n}u_{0}+f_{n}^{m})-S_{t}(S_{n}u_{0})-S_{t}(f_{n}^ {m}).\] Since \(u_{0}\in B^{s}_{p,r}\) and \(\|f_{n}^{m}\|_{B^{s}_{p,r}}\approx 1\), it's easy to see that \[\|S_{n}u_{0},f_{n}^{m}\|_{B^{s+1}_{p,r}}\leq C_{\rho}2^{n}, \tag{3.25}\] with \(C_{\rho}\) only depend on \(\rho:=\|u_{0}\|_{B^{s}_{p,r}}\), Then from Proposition 3.2 we know that \[\|w(t)\|_{B^{s}_{p,r}}\leq C_{\rho}2^{n}e^{C_{\rho}2^{n}\theta}\Big{(}\sum_{0 \leq|a|,b|\leq 2}\int_{0}^{t}\|\partial^{a}S_{t}(S_{n}u_{0})\partial^{b}S_{t}(f_{n} ^{m})\|_{L^{p}}d\tau\Big{)}^{\theta}. 
\tag{3.26}\] Notice that, by definition \(\partial^{b}S_{t}(f_{n}^{m})=\partial^{b}S_{t}(f_{n}(x_{1}-m,\cdots,x_{d}))= \partial^{b}S_{t}(f_{n})(x_{1}-m,\cdots,x_{d})\), for fixed \(n\) and any \((t,x)\), considering that \(S_{t}(f_{n})\) is a smooth function decay at infinity, we have \[\lim_{m\to\infty}\partial^{a}S_{t}(S_{n}u_{0})(x)\partial^{b}S_{ t}(f_{n})(x_{1}-m,\cdots,x_{d})=0,\] \[|\partial^{a}S_{t}(S_{n}u_{0})(x)\partial^{b}S_{t}(f_{n})(x_{1}-m,\cdots,x_{d})|\leq M|\partial^{a}S_{t}(S_{n}u_{0})|(\tau,x)\in L^{1}\big{(}[0,T],L^{p}(\mathbb{R})\big{)}.\] By the Lebesgue Dominated Convergence Theorem, we have \[\lim_{m\to\infty}\int_{0}^{T}\|\partial^{a}S_{t}(S_{n}u_{0})\partial^{b}S_{t}(f_{n }^{m})\|_{L^{p}}d\tau=0 \tag{3.27}\] Then from (3.26),(3.27) we know that, for the fixed \(n\) and any \(t\in[0,T]\) \[\lim_{m\to\infty}\sup_{0\leq t\leq T}\|w(t)\|_{B^{s}_{p,r}}=\lim_{m\to\infty} \sup_{0\leq t\leq T}\|S_{t}(S_{n}u_{0}+f_{n}^{m})-S_{t}(S_{n}u_{0})-S_{t}(f_{n }^{m})\|_{B^{s}_{p,r}}=0, \tag{3.28}\] with the same argument, we can also get that, for any fixed \(n\) and \(t\in[0,T]\) \[\lim_{m\to\infty}\sup_{0\leq t\leq T}\|S_{t}(S_{n}u_{0}+f_{n}^{m}+g_{n}^{m})-S _{t}(S_{n}u_{0})-S_{t}(f_{n}^{m}+g_{n}^{m})\|_{B^{s}_{p,r}}=0. \tag{3.29}\] Combining (3.28) and (3.29), this yields (3.20). With (3.17), (3.18) and Propositions 3.3 in hand, we can complete our proof of Theorem 1.2. First of all, from the identity (3.18) we know that for any time \(t\in[0,T]\) \[\begin{split}&\|S_{t}(u_{0}+f_{n}^{m}+g_{n}^{m})-S_{t}(u_{0}+f_{n}^{ m})\|_{B^{s}_{p,r}}\\ &\geq\|S_{t}(f_{n}^{m}+g_{n}^{m})-S_{t}(f_{n}^{m})\|_{B^{s}_{p,r }}-\sup_{m,t}\|\mathcal{E}_{n}^{m}\|_{B^{s}_{p,r}}-\sup_{0\leq t\leq T}\| \mathcal{E}_{n,m}\|_{B^{s}_{p,r}}\end{split} \tag{3.30}\] Then, by (3.20) in Proposition 3.3, for the fixed \(n\), we can find a sufficiently large \(m_{n}\) such that \[\sup_{0\leq t\leq T}\|\mathcal{E}_{n,m_{n}}\|_{B^{s}_{p,r}}\leq 2^{-n}.\] combining this and (3.19) in Proposition 3.3, by (3.30) we get \[\|S_{t}(u_{0}+f_{n}^{m_{n}}+g_{n}^{m_{n}})-S_{t}(u_{0}+f_{n}^{m_{ n}})\|_{B^{s}_{p,r}}\] \[\geq\|S_{t}(f_{n}^{m_{n}}+g_{n}^{m_{n}})-S_{t}(f_{n}^{m_{n}})\|_{ B^{s}_{p,r}}-C_{\rho}\|(I-S_{n})u_{0}\|_{B^{s}_{p,r}}-2^{-n}. \tag{3.31}\] As \(u_{0}\in B^{s}_{p,r}\), that means \(\|(I-S_{n})u_{0}\|_{B^{s}_{p,r}}\to 0\) when \(n\to\infty\). For the small \(t\in[0,T_{0}]\), we already have (3.17), it follows from (3.31) that \[\liminf_{n\to\infty}\|S_{t}(u_{0}+f_{n}^{m_{n}}+g_{n}^{m_{n}})-S_{t}(u_{0}+f_{ n}^{m_{n}})\|_{B^{s}_{p,r}}\geq c_{0}t,\quad\forall t\in[0,T_{0}]. \tag{3.32}\] And the sequences of initial data satisfy \[\lim_{n\to\infty}\|(u_{0}+f_{n}^{m_{n}}+g_{n}^{m_{n}})-(u_{0}+f_{n}^{m_{n}})\| _{B^{s}_{p,r}}=\lim_{n\to\infty}\|g_{n}^{m_{n}}\|_{B^{s}_{p,r}}=0. \tag{3.33}\] This complete the proof Theorem 1.2. **Acknowledgements.** M. Li was supported by Educational Commission Science Programm of Jiangxi Province (No. GJJ190284) and Natural Science Foundation of Jiangxi Province (No. 20212BAB211011 and 20212BAB201008).
2309.09932
$W_m$-algebras and fractional powers of difference operators
In this paper we describe a Poisson pencil associated to the lattice $W_m$-algebras defined in \cite{IM}, and we prove that the Poisson pencil is equal to the one defined in \cite{MW} and \cite{CM} using a type of discrete Drinfel'd-Sokolov reduction. We then show that, much as in the continuous case, a family of Hamiltonians defined by fractional powers of difference operators commute with respect to both structures, defining the kernel of one of them and creating an integrable hierarchy in the Liouville sense.
Gloria Marí Beffa
2023-09-18T16:50:59Z
http://arxiv.org/abs/2309.09932v1
# \(W_{m}\)-algebras and fractional powers of difference operators ###### Abstract. In this paper we describe a Poisson pencil associated to the lattice \(W_{m}\)-algebras defined in [7], and we prove that the Poisson pencil is equal to the one defined in [10] and [3] using a type of discrete Drinfel'd-Sokolov reduction. We then show that, much as in the continuous case, a family of Hamiltonians defined by fractional powers of difference operators commute with respect to both structures, defining the kernel of one of them and creating an integrable hierarchy in the Liouville sense. The author gratefully acknowledge support through research funding from the College of Letters & Science at UW-Madison. Semenov-Tian-Shansky in [12] could be reduced to a quotient of the form \(\mathrm{PSL}(m+1)^{N}/H^{N}\) where \(\mathbb{RP}^{m}=\mathrm{PSL}(m+1)/H\) is the homogeneous representation of the projective space, with \(H^{N}\) acting on \(\mathrm{PSL}(m+1)^{N}\) by discrete gauges. The resulting bracket coincides with the reduced bracket on monic operators, as shown in [7]. The authors also identified a second bracket defined through the same reduction process, but failed to prove that the two brackets were compatible. The authors of [3] naturally connected the Hamiltonians with respect to the quadratic bracket to invariant evolutions of projective polygons, lifting the two Poisson structures to pre-symplectic forms on the space of projectively invariant polygonal vector fields. They used this connection to show that the two brackets where compatible and some associated evolutions were biHamiltonian. Through these general constructions one can recover familiar structures that have appeared in the literature as Hamiltonian structures for the lattice Visasoro algebra or Volterra lattice [13, 5], and the lattice \(W_{3}\)-algebra [2]. In this paper we aim to describe this Hamiltonian pencil using the \(W_{m}\)-algebra definition as in [7], and we will prove that the companion bracket to the \(W_{m}\)-algebra coincides with the companion bracket defined in [10]. This interpretation will allow us to readily identify a Liouville integrable system defined by Hamiltonians defined by the traces of fractional powers of the difference operators \[\mathcal{F}^{s}(D)=\sum_{n}\mathrm{Tr}(D^{s/m}),\] much like in the continuous case. The proof of the equivalence of both pencils is achieved through the identification of the pre-symplectic forms \(\omega_{i}\), \(i=1,2\), that lift both Poisson structures \(\{,\}_{i}\), \(i=1,2\), to the space of invariant vector fields on twisted polygons in centro-affine geometry, that is, the case of arbitrary \(u^{0}\) (rather than constant). Finally, we will show that if the lift of an \(\mathcal{F}\)-Hamiltonian evolution with respect to \(\{,\}_{1}\) to a polygonal vector field is denoted by \(X^{\mathcal{F}}\), then \(X^{\mathcal{F}}\) is the Hamiltonian vector field with respect to the pre-symplectic form \(\omega_{1}\), for both centro-affine and projective cases. In particular, in the projective case \(X^{\mathcal{F}^{s}}\) is defined by the nonnegative part of \(D^{s/m}\), for any \(s\). The author is deeply grateful to Professor Anton Izosimov for discussions and for his input on the content of section 5. His suggestions and ideas facilitated the results presented in this paper. ## 2. 
A discretization of the Adler-Gelfand-Dikii bracket: Discrete \(W_{m}\)-algebras We denote the space of \(N\)-periodic upper-triangular difference operators of order \(m\) by \(\mathrm{DO}(N,m)\). That is, its elements are of the form \[D=\sum_{i=0}^{m}a^{i}\mathcal{T}^{i}, \tag{1}\] where \(a_{i}\)'s are \(N\)-periodic functions \(\mathbb{Z}\to\mathbb{R}\) acting on functions of the same kind by term-wise multiplication, while \(\mathcal{T}\) is the _left_ shift operator \((\mathcal{T}f)(x)=f(x+1)\) (the term _upper-triangular_ is used to distinguish such operators from those which may also contain terms of negative power in \(\mathcal{T}\)). We define an \(N\)-periodic pseudodifference operator_ as an expression of the form \[\sum_{i=-\infty}^{k}b^{i}\mathcal{T}^{i}, \tag{2}\] where \(k\in\mathbb{Z}\), and each \(b^{i}\colon\mathbb{Z}\to\mathbb{R}\) is an \(N\)-periodic function. Such an expression can be regarded either as a formal sum, or as an actual operator acting on the space \(\{\xi\colon\mathbb{Z}\to\mathbb{R}\mid\exists\,j\in\mathbb{Z}:\xi(x)=0\,\forall \,x>j\}\) of eventually vanishing functions. We will denote the set of \(N\)-periodic pseudodifference operators by \(\Psi\mathrm{DO}(N)\). This set is an associative algebra. Moreover, almost every pseudodifference operator is invertible. In particular, (2) is invertible if the coefficient \(b^{k}\) of highest power in \(T\) is a non-vanishing sequence. We will denote the set of invertible \(N\)-periodic pseudodifference operators by \(\mathrm{I\Psi DO}(N)\). This is a group with respect to multiplication. At least formally, one can regard it as an infinite-dimensional Lie group. The following proposition was proved in [7] **Proposition 2.1**.: _There exists a natural Poisson structure \(\pi\) on the group \(\mathrm{I\Psi DO}(N)\) of \(N\)-periodic invertible pseudodifference operators. This structure has the following properties:_ 1. _It is multiplicative, in the sense that the group multiplication is a Poisson map. In other words, the group_ \(\mathrm{I\Psi DO}(N)\)_, together with the structure_ \(\pi\)_, is a Poisson-Lie group._ 2. _The subset_ \(\mathrm{IDO}(N,k):=\mathrm{I\Psi DO}(N)\cap\mathrm{DO}(N,k)\) _of order_ \(k\) _invertible_ _upper-triangular difference operators is a Poisson submanifold._ 3. _The Poisson structure_ \(\pi\) _vanishes on the submanifold_ \(\mathrm{IDO}(N,0)\) _of invertible order zero operators._ 4. _The Poisson structure_ \(\pi\) _is invariant under an automorphism of_ \(\mathrm{I\Psi DO}(N)\) _given by conjugation_ \(\mathcal{D}\to f\mathcal{D}f^{-1}\) _with quasiperiodic_ \(f\colon\mathbb{Z}\to\mathbb{R}\)_._ The natural Poisson structure defined above appears on any Lie group which is embedded as an open subset into an associative multiplicative algebra \(A\) (for example, as its invertible elements). In that case, the Lie algebra of \(G\) (or the tangent space to \(G\) at any point) can be naturally identified with \(A\). Assume also that \(A\) is endowed with an invariant inner product, that is, \((xy,z)=(x,yz)\) for any \(x,y,z\in A\) (in particular, the inner product is adjoint invariant). Furthermore, assume that \(r\colon A\to A\) is a skew-symmetric operator satisfying the modified Yang-Baxter equation \[[rx,ry]-r[rx,y]-r[x,ry]=-[x,y]\quad\forall\,x,y\in\mathfrak{g}. \tag{3}\] Then \(G\) carries a structure of a factorizable Poisson-Lie group. 
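Before turning to the Poisson tensor itself, the difference-operator calculus just introduced can be made concrete with a short numerical sketch. The following Python fragment is illustrative only (the period \(N\), the chosen orders, and the dictionary-based representation are arbitrary and not taken from [7]); it encodes an \(N\)-periodic (pseudo)difference operator as a map from powers of \(\mathcal{T}\) to length-\(N\) coefficient sequences, implements the product rule \((a\mathcal{T}^{i})(b\mathcal{T}^{j})=a\,(\mathcal{T}^{i}b)\,\mathcal{T}^{i+j}\), and checks the cyclicity and invariance of the trace pairing used in the next paragraphs.

```python
# Minimal sketch of N-periodic (pseudo)difference operators, illustration only.
# An operator sum_i a^i T^i is stored as {power i: length-N numpy array a^i},
# with T the left shift (T f)(x) = f(x+1) acting on N-periodic sequences.
import numpy as np

N = 5  # period, arbitrary

def shift(coef, i):
    """(T^i a)(n) = a(n+i) for an N-periodic coefficient sequence a."""
    return np.roll(coef, -i)

def compose(A, B):
    """Operator product, using (a T^i)(b T^j) = a (T^i b) T^(i+j)."""
    C = {}
    for i, a in A.items():
        for j, b in B.items():
            C[i + j] = C.get(i + j, np.zeros(N)) + a * shift(b, i)
    return C

def tr(A):
    """Tr(A) = zeroth-order coefficient of A (an N-periodic sequence)."""
    return A.get(0, np.zeros(N))

def inner(A, B):
    """<A, B> = sum_n Tr(AB)(n)."""
    return tr(compose(A, B)).sum()

rng = np.random.default_rng(0)
V = {i: rng.standard_normal(N) for i in range(-2, 3)}
W = {i: rng.standard_normal(N) for i in range(-2, 3)}
Z = {i: rng.standard_normal(N) for i in range(-1, 2)}

print(np.isclose(inner(V, W), inner(W, V)))                           # trace is cyclic
print(np.isclose(inner(compose(V, W), Z), inner(V, compose(W, Z))))   # (xy, z) = (x, yz)
```

Both checks print `True`, reflecting that the pairing introduced below is an invariant inner product on the algebra of periodic operators.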
Identifying the cotangent space \(T_{g}^{*}G\) with the tangent space \(T_{g}G=A\) by means of the invariant inner product, one can then write the formula for the corresponding Poisson tensor on \(G\) as \[\pi_{g}(x,y)=(r(xg),yg)-(r(gx),gy)\quad\forall\,g\in G,x,y\in A. \tag{4}\] Property 4 in proposition 2.1 allows us to reduce the natural Poisson bracket on \(\mathrm{IDO}(N,m)\) given by operators as in (1), to difference operators where \(a^{m}=-1\) and \(a^{0}=(-1)^{m-1}\), both constant. The reasons for these particular choices will become clear in our next section. With this construction in mind, consider the inner product on \(\Psi\mathrm{DO}(N)\) given by \[\langle V,W\rangle=\sum_{n=0}^{N}\mathrm{Tr}(VW)(n)\] where if \(V=\sum v^{r}\mathcal{T}^{r}\), \(\mathrm{Tr}(V)=v^{0}\). This inner product is invariant and can be used to define a Poisson bracket on the space of invertible difference operators. Indeed, if \(\mathcal{F}:\mathrm{IDO}(N,m)\to\mathbb{R}\), its variational derivative is represented by a pseudo-difference operator of order \(m\), denoted here by \(\delta_{D}\mathcal{F}\) and defined uniquely by \[\frac{d}{d\epsilon}|_{\epsilon=0}\mathcal{F}(D(\epsilon))=\langle\delta_{D} \mathcal{F},\frac{d}{d\epsilon}|_{\epsilon=0}D(\epsilon)\rangle.\] With this notation, the quadratic Poisson bracket above becomes \[\{\mathcal{F},\mathcal{G}\}(D)=\sum_{n-1}^{N}\mathrm{Tr}\left(r(D\delta_{D} \mathcal{F})D-Dr(\delta_{D}\mathcal{F}D),\delta_{D}\mathcal{G}\right)(n) \tag{5}\] where \(r(L)=\frac{1}{2}(L_{+}-L_{-})\). Notice that we can substitute \(r\) by \(r^{+}(L)=L_{+}+\frac{1}{2}L_{0}\) and obtain the same bracket. If the bracket is reduced, it will have an identical formula, with \(\delta_{D}\mathcal{F}\) modified in standard fashion by the corresponding reduction. For more details about this construction, please see [7]. ## 3. A discretization of the Drinfel'd-Sokolov reduction The authors of [10] defined a pair of Poisson structures associated to the background geometry of projective twisted polygons in \(\mathbb{RP}^{m-1}\). The pair was shown to be compatible in a subsequent paper [3], where the authors linked them to pre-symplectic forms on the space of projectively-invariant polygonal vector fields. In this section we will briefly recount the definition of the pair, defined through a discrete version of the well-known Drinfel'd-Sokolov reduction [4]. As a homogenous space, the projective space \(\mathbb{RP}^{m-1}\) can be described as \(\mathrm{PSL}(m)/H\), where the subgroup \(H\) is the isotropy subgroup of a distinguished point. The projective group \(\mathrm{PSL}(m)\) acts on the quotient via left multiplication on class representatives. We say an infinite polygon in \(\mathbb{RP}^{m-1}\) is \(N\)_-twisted_ is there exists an element of the projective group \(M\in\mathrm{PSL}(m)\) called the _monodromy_, such that, if \(\gamma_{n}\) is the \(n\)th vertex, then \(\gamma_{n+N}=M\cdot\gamma_{n}\), for all \(n\). We focus on the moduli space of equivalent classes of polygons in \(\mathbb{RP}^{m-1}\) under the action of the projective group, and define coordinates for this space. The authors of [10] proved that an \(N\)-twisted projective polygon \(\gamma\) completely determines the solution of a recursion equation of the form \[x_{n+m}=a_{n}^{m-1}x_{n+m-1}+a_{n}^{m-2}x_{n+m-2}+\cdots+a_{n}^{1}x_{n+1}+(-1) ^{m-1}x_{n} \tag{6}\] for any \(n\), up to the action of the projective group on \(\gamma\), whenever \(N\) and \(m\) are co-prime (the case \(m=3\) appeared in [11]). 
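As a concrete illustration of the recursion (6) (a numerical aside with small arbitrary values of \(m\) and \(N\) chosen co-prime; it is not part of the construction in [10]), one can encode one step of (6) as a companion matrix. Because the coefficients are \(N\)-periodic, shifting any solution by one period acts by a fixed linear map, the discrete counterpart of the monodromy \(M\) described above.

```python
# Illustration only: the recursion x_{n+m} = a_n^{m-1} x_{n+m-1} + ... + a_n^1 x_{n+1}
# + (-1)^{m-1} x_n with N-periodic coefficients, written as s_{n+1} = C_n s_n for the
# state s_n = (x_n, ..., x_{n+m-1}).
import numpy as np

m, N = 3, 7                              # gcd(N, m) = 1, as required above
rng = np.random.default_rng(1)
a = rng.standard_normal((N, m - 1))      # a[n, k-1] plays the role of a_n^k, k = 1..m-1

def companion(n):
    C = np.zeros((m, m))
    C[:-1, 1:] = np.eye(m - 1)           # x_{n+1}, ..., x_{n+m-1} just shift up
    C[-1, 0] = (-1) ** (m - 1)
    C[-1, 1:] = a[n % N]
    return C

s0 = rng.standard_normal(m)              # arbitrary initial data
traj = [s0]
for n in range(3 * N):
    traj.append(companion(n) @ traj[-1])

# ordered product of one period of companion matrices: the shift-by-N map on solutions
M = np.eye(m)
for n in range(N):
    M = companion(n) @ M
print(np.allclose(traj[N], M @ traj[0]))   # True
```

Conjugation-invariant functions of this period map are the natural candidates for invariants of the twisted polygon.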
The solution \(x\) is defined by the entries of a unique lift of \(\gamma\) to \(\mathbb{R}^{m}\); more about this in our next section. The discrete functions \(a^{k}\), \(k=1,\dots,m-1\), define bi-infinite \(N\)-periodic sequences, which are invariant under the projective action of \(\mathrm{PSL}(m)\) on the polygon \(\gamma\). Therefore, they can be considered to be coordinates in the moduli space. The same description holds if we consider the centroaffine space \(\mathbb{R}^{m}\) with \(\mathrm{GL}(m)\) acting on it linearly. In this case the invariant coordinate \(a_{n}^{0}\), the coefficient of \(x_{n}\), will not be constant. The following proposition was proven in [10] for the projective case; below is the \(\operatorname{GL}(m)\) case from [7]. The proofs are identical and we do not include them. **Proposition 3.1**.: _The moduli space of non-degenerate twisted polygons under the linear action of \(\operatorname{GL}(m)\) can be identified with an open and dense subset of \(\operatorname{GL}^{N}(m)/H^{N}\), where \(H\subset\operatorname{GL}(m)\) is the subgroup \(H=\{g\in G,\ ge_{1}=e_{1}\}\). \(H^{N}\) acts on \(\operatorname{GL}(m)^{N}\) via the right discrete gauge action_ \[(h,g)\to((\mathcal{T}h)gh^{-1}) \tag{7}\] _with \(h\in H^{N}\) and \(g\in\operatorname{GL}(m)^{N}\) representing a bi-infinite \(N\)-periodic sequence._ Finally, let \[r=\sum_{i>j}E_{ij}\otimes E_{ji}+\frac{1}{2}\sum_{r}E_{rr}\otimes E_{rr}\] be the standard \(r\)-matrix for \(G=\operatorname{GL}(m)\), where \(E_{ij}\) has a \(1\) in the entry \((i,j)\) and zeroes elsewhere. Given \(\mathcal{F},\mathcal{H}\) smooth scalar-valued functions on \(G^{N}\) and \(A\in G^{N}\), the _twisted Poisson bracket_ is defined as in [6]: \[\begin{split}\{\mathcal{F},\mathcal{H}\}(A)&:=\sum_{s=1}^{N}r(\nabla_{s}\mathcal{F}\wedge\nabla_{s}\mathcal{H})+\sum_{s=1}^{N}r(\nabla_{s}^{\prime}\mathcal{F}\wedge\nabla_{s}^{\prime}\mathcal{H})\\ &\quad-\sum_{s=1}^{N}r\left((\mathcal{T}\otimes 1)(\nabla_{s}^{\prime}\mathcal{F}\otimes\nabla_{s}\mathcal{H})\right)+\sum_{s=1}^{N}r\left((\mathcal{T}\otimes 1)(\nabla_{s}^{\prime}\mathcal{H}\otimes\nabla_{s}\mathcal{F})\right),\end{split} \tag{8}\] where \(\xi\wedge\eta=\frac{1}{2}(\xi\otimes\eta-\eta\otimes\xi)\), and \(\nabla^{\prime}\mathcal{F}\) and \(\nabla\mathcal{F}\) are the right and left gradient, respectively. Equation (8) defines a Hamiltonian structure on \(G^{N}\), as shown by Semenov-Tian-Shansky in [12]. Moreover, the _right gauge action_ of \(G^{N}\) on itself \[(g,A)\to((\mathcal{T}g)Ag^{-1}) \tag{9}\] is a Poisson map and its orbits coincide with the symplectic leaves [6, 12]. **Theorem 3.2**.: _([7]) The Poisson bracket (8) reduces locally to the quotient \(G^{N}/H^{N}\), where \(H^{N}\) is acting on \(G^{N}\) via the right gauge action._ This theorem naturally defines a Poisson bracket on an open and dense subset of \(G^{N}/H^{N}\) with the \(a^{i}\) defined in (6) as coordinates. In our next section, we will describe how both brackets, the one defining the \(W_{m}\)-algebras and the one defined by the discrete Drinfel'd-Sokolov reduction, coincide. The proofs and all details can be found in [7]. ## 4. Connection between both brackets There is a natural connection between evolutions of difference operators of the form (1) with \(a^{m}=-1\), and those of projective polygons defined by the kernel of the operators. This relation also exists in many other geometric backgrounds, including centro-affine geometry (the case when \(G=\operatorname{GL}(m)\)).
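The following small numerical check (again an illustration, not taken from [6, 7, 12]) shows the basic mechanism behind reducing by the gauge action: the right gauge action (9), \(A_{n}\mapsto g_{n+1}A_{n}g_{n}^{-1}\) with \(g\) an \(N\)-periodic sequence of group elements, conjugates the ordered product of the matrices over one period, so spectral functions of that product are unchanged along gauge orbits.

```python
# Hedged numerical check of the conjugation property of the right gauge action (9);
# matrix sizes, the period, and the random data are arbitrary choices.
import numpy as np

m, N = 3, 5
rng = np.random.default_rng(2)
A = [rng.standard_normal((m, m)) + 3 * np.eye(m) for _ in range(N)]   # generic invertible A_n
g = [rng.standard_normal((m, m)) + 3 * np.eye(m) for _ in range(N)]   # periodic gauge, g_N = g_0

def period_product(mats):
    P = np.eye(m)
    for Mn in mats:
        P = Mn @ P
    return P

gauged = [g[(n + 1) % N] @ A[n] @ np.linalg.inv(g[n]) for n in range(N)]

P, Q = period_product(A), period_product(gauged)      # Q = g_0 P g_0^{-1}
print(np.allclose([np.trace(np.linalg.matrix_power(P, k)) for k in range(1, m + 1)],
                  [np.trace(np.linalg.matrix_power(Q, k)) for k in range(1, m + 1)]))  # True
```

In particular, traces of powers of the period map are gauge-invariant and so descend to functions on the quotient \(G^{N}/H^{N}\).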
In this section we remind the reader about this connection for both centro affine and projective cases, and we summarize the results in [7] that used this connection to prove that both brackets defined in previous sections coincide. Assume \(\gamma\in(\mathbb{R}^{m})^{N}\) defines a twisted polygon in \(\mathbb{R}^{m}\), twisted with respect to the linear action of \(\operatorname{GL}(m)\) on \(\mathbb{R}^{m}\). Simply from dimensional reasons, there exist \(a^{k}\in\mathbb{R}^{N}\), \(k=0,1,\ldots,m-1\) such that \[\mathcal{T}^{m}\gamma=a^{m-1}\mathcal{T}^{m-1}\gamma+a^{m-2}\mathcal{T}^{m-2} \gamma+\cdots+a^{0}\gamma. \tag{10}\] As proven in [11] for \(m=3\) and in [10] for any \(m\), given a \(N\)-twisted projective polygon, there exists a unique lift to \(\mathbb{R}^{m}\) satisfying (10) with \(a^{0}=(-1)^{m-1}\), whenever \(N\) and \(m\) are co-primes. The following theorem was proven in [10] for the projective case and in [7] for the centro-affine case. The proof was constructive, showing that the vector field could be obtained explicitly and algebraically from the variation of the Hamiltonian. **Theorem 4.1**.: _Let \(a^{k}(t)\), \(k=0,1,2\ldots,m-1\) be the coordinates of a solution to an evolution that is Hamiltonian with respect to the reduced bracket defined in subsection 3.2, with Hamiltonian \(f(\mathbf{a})\). Let \(D(t)\) be defined by_ \[D(t)=-\mathcal{T}^{m}+a^{m-1}(t)\mathcal{T}^{m-1}+\cdots+a^{1}(t)\mathcal{T}+ a^{0}(t) \tag{11}\] _and let \(\gamma(t)\) be the twisted polygon in \(\mathbb{R}^{m}\) defined by its kernel, \(D(\gamma)=0\). There exists a unique polygonal vector field \(X^{f}\) in an open and dense subset of \((\mathbb{R}^{m})^{N}\) such that_ \[\gamma_{t}=X^{f}(\gamma(t)). \tag{12}\] _And vice-versa, if \(\gamma\) is a solution of (12), then the invariants \(a^{k}\) will evolve following an \(f\)-Hamiltonian evolution with respect to the bracket in theorem 3.2._ It is worth to briefly describe \(X^{f}\)'s connection to the invariants \(a^{k}\), as we will use it in our next section. Let \(\rho=(\gamma,\mathcal{T}\gamma,\ldots,\mathcal{T}^{m-1}\gamma)\) and assume \(\gamma_{t}=X^{f}\), where \(f(\mathbf{a})\) is an invariant Hamiltonian function. Then \[\rho_{t}=\rho Q^{X^{f}} \tag{13}\] for some invariant matrix \(Q^{X^{f}}\) depending on \(a^{k}\) and their shifts. The matrix \(Q^{X^{f}}\) defines \(X^{f}\) and its shifts and it is directly connected to the group right and left gradients appearing in (8), as we will see in our next section where it will be widely used. Moving now to the \(W_{m}\)-algebra picture, one can readily find the \(\gamma\) evolutions that are directly linked to evolutions of difference operators which are Hamiltonian with respect to the bracket (18). If \(D(\gamma)=0\), then \(D_{t}(\gamma)+D(\gamma_{t})=0\) and if \(D\) is Hamiltonian with respect to (18), with Hamiltonian \(\mathcal{F}\), then \[D_{t}=r(D\delta_{D}\mathcal{F})D-Dr(\delta_{D}\mathcal{F}D)\] and so \(D(\gamma_{t})=-D_{t}(\gamma)=Dr(\delta_{D}\mathcal{F}D)(\gamma)\). We call \(Y^{\mathcal{F}}\) the vector field \[Y^{\mathcal{F}}=r(\delta_{D}\mathcal{F}D)(\gamma).\] **Theorem 4.2**.: _([7]) If \(\mathcal{F}(D)=f(\mathbf{a})\) whenever \(D\) and \(a\) are related as in (11), then_ \[X^{f}=Y^{\mathcal{F}}\] _along \(\gamma\). As a corollary, both brackets (18) and the one in theorem 3.2 coincide when defined on the coordinates \(\mathbf{a}=(a^{k})\)._ ## 5. 
A Poisson pencil The space \(\operatorname{DO}(N,m)\) is a Poisson submanifold with the quadratic bracket (18) defined on the space or difference operators. In this section we will identify its compatible bracket. Consider the 1-parameter family of maps \[\phi_{\lambda}:\operatorname{DO}(N,m)\to\operatorname{DO}(N,m)\] with \(\lambda\in\mathbb{R}\), defined as \(\phi_{\lambda}(\sum_{i=0}^{m}a^{i}\mathcal{T}^{i})=\sum_{i=1}^{m}a^{i}\mathcal{T} ^{i}+\lambda^{-1}a^{0}\). Furthermore, consider the push-forward of the quadratic bracket (18) \(\{,\}_{\lambda}=(\phi_{\lambda})_{*}(\{,\})\) defined as \[\{\mathcal{F},\mathcal{G}\}_{\lambda}(\mathcal{D})=\{\mathcal{F}\circ\phi_{ \lambda},\mathcal{G}\circ\phi_{\lambda}\}(\phi_{\lambda}^{-1}(\mathcal{D})). \tag{14}\] Clearly \(\{,\}_{\lambda}\) is Poisson for any \(\lambda\neq 0\). **Theorem 5.1**.: _The bracket (14) is a Poisson bracket for any \(\lambda\in\mathbb{R}\). In fact, it is a Poisson pencil, that is, linear in \(\lambda\)._ Proof.: The proof is a straightforward calculation. For simplicity we will denote the operator \(\delta_{D}\mathcal{F}\) by \(V\) and \(\delta_{D}\mathcal{G}\) by \(W\). We will denote by \(V_{+}\) (resp. \(V_{-}\)) its positive (resp. negative) part as operator, and \(V_{0}\) its zero term. Therefore \[\phi_{\lambda}^{-1}(D)=\lambda D_{0}+D\] and \[\delta(\mathcal{F}\circ\phi_{\lambda})=\phi_{\lambda}^{*}V=\lambda^{-1}V_{0} +V_{-}.\] (Notice that \(V_{+}=0\)). We will next substitute these values in (14) and use (18). This will require calculating individual terms, which we will do next. \[r^{+}\big{(}\delta(\mathcal{F}\circ\phi_{\lambda}\phi_{\lambda}^{-1}(D))=r^{+ }((\lambda^{-1}V_{0}+V_{-})(\lambda D_{0}+\mathcal{D}))=\lambda^{-1}V_{0}D_{+ }+\frac{1}{2}V_{0}D_{0}+r^{+}(V_{-}D_{+}),\] and from here \[\phi_{\lambda}^{*}\sum_{n}\operatorname{Tr}\left(D_{n}r^{+}( \delta_{D}\mathcal{F}D_{n}),\delta_{D}\mathcal{G}\right)\\ =\sum_{n}\operatorname{Tr}\left(\lambda^{-1}V_{0}D_{+}+\frac{1}{2 }V_{0}D_{0}+r^{+}(V_{-}D_{+}),,(W_{-}+\lambda^{-1}W_{0})(D_{+}\lambda D_{0}) \right). \tag{15}\] The terms in \[\phi_{\lambda}^{*}\sum_{n}\operatorname{Tr}\left(r^{+}(D_{n}\delta_{D} \mathcal{F})D_{n},\delta_{D}\mathcal{G}\right) \tag{16}\] will be analogous but with the factors in different order. When both terms are brought together in (14), the terms in (15) will carry a negative sign. The coefficient of \(\lambda^{-2}\) in (15) is given by \(\operatorname{Tr}(V_{0}D_{+}W_{0}D_{+})\), and the corresponding coefficient in (16) can be equally calculated to be \(\operatorname{Tr}(D_{+}V_{0}D_{+}W_{0})=\operatorname{Tr}(V_{0}D_{+}W_{0}D_{+})\). Therefore, they will cancel in (14). The coefficient of \(\lambda^{-1}\) in (15) is \[\operatorname{Tr}(V_{0}D_{+}W_{-}D_{+}+V_{0}D_{+}W_{0}D_{0}+\frac {1}{2}V_{0}D_{0}W_{0}D_{+}+r^{+}(V_{-}D_{+})W_{0}D_{+})\\ =\operatorname{Tr}(V_{0}D_{+}W_{-}D_{+})\] and the corresponding one for (16) can be equally calculated to be \(\operatorname{Tr}(D_{+}V_{0}D_{+}W_{-})\). Thus, the \(\lambda^{-1}\) term in (14) equals \[\sum_{n}-\operatorname{Tr}(V_{0}D_{+}W_{-}D_{+})+\operatorname{Tr}(D_{+}V_{0} D_{+}W_{-})=0\] Therefore, the expression in (14) is indeed a pencil and the coefficients of \(\lambda^{1}\) and \(\lambda^{0}\) define compatible Poisson brackets. One can find a companion bracket straightforwardly. 
The coefficient of \(\lambda\) in (15) is given by \[\operatorname{Tr}\left((\frac{1}{2}V_{0}D_{0}+r^{+}(V_{-}D_{+}))W_{-}D_{0} \right)=\operatorname{Tr}(r^{+}(V_{-}D_{+}))W_{-}D_{0}).\] And when placed in (14) together with its counterpart in (16), we have the \(\lambda\) coefficient of (14) to be \[\{\mathcal{F},\mathcal{G}\}_{2}(D)=\sum_{n}\operatorname{Tr}\left(r^{+}(D_{+} V_{-})D_{0}W_{-}-r^{+}(V_{-}D_{+})W_{-}D_{0}\right).\] This is the companion bracket to our original quadratic bracket. While this bracket is also quadratic, upon reduction to \(\operatorname{SL}(m)\) we do have a linear bracket since \(D_{0}=(-1)^{m-1}\). In that case, the bracket becomes \[\{\mathcal{F},\mathcal{G}\}_{2}(D)=(-1)^{m-1}\sum_{n}\operatorname {Tr}\left([(D_{+}V_{-})_{+}+\frac{1}{2}(D_{+}V_{-})_{0}]W_{-}-[(V_{-}D_{+})_{+ }W_{-}-\frac{1}{2}(V_{-}D_{+})_{0}W_{-}\right)\\ =(-1)^{m-1}\sum_{n}\operatorname{Tr}([D_{+},V_{-}]_{+}W). \tag{17}\] ## 6. Pre-symplectic forms on polygonal vector fields and the equivalence of Poisson pencils The authors of [3] defined a pair of Poisson brackets and lifted them to pre-symplectic forms on projectively invariant vector fields on polygons in \(\mathbb{RP}^{m}\). In this section we will describe the corresponding pre-symplectic forms for the centro-affine case (\(a^{0}\) generic), and we will show that the Poisson pencil just found on our previous section is equal to the one found in [3] for the projective case. Consider the two Poisson tensors generating the pencil \[\{\mathcal{F},\mathcal{G}\}_{1}(D)=\sum_{n}\operatorname{Tr}\left(r(D_{n} \delta_{D}\mathcal{F})D_{n}-D_{n}r(\delta_{D}\mathcal{F}D_{n}),\delta_{D} \mathcal{G}\right) \tag{18}\] where \(r(L)=\frac{1}{2}(L_{+}-L_{-})\), and \[\{\mathcal{F},\mathcal{G}\}_{2}(D)=\sum_{n}\operatorname{Tr}\left([D_{0}(( \delta_{D}\mathcal{F})_{-}(D_{n})_{+})_{+}-((D_{n})_{+}(\delta_{D}\mathcal{F })_{-})_{+}D_{0}]\,\delta_{D}\mathcal{G}\right) \tag{19}\] Assume \(X^{f}\) is defined as in (12), and define \(\rho_{n}=(\gamma_{n},\gamma_{n+1},\ldots,\gamma_{n+m-1})\) and \(\mathbf{d}_{n}=\det\rho_{n}\). In the projective case \(\mathbf{d}_{n}=1\) for all \(n\) and \(\gamma\) is a lift for the projective polygon determined uniquely by that property ([10]). Define also the discrete form \(\theta=(\theta_{n})\) along polygons given by \[\theta_{n}(X)=\det(X_{n},\gamma_{n+1},\ldots,\gamma_{n+m-1})\] whenever \(X\) is a vector field along \(\gamma\) in \(\mathbb{R}^{m}\). Let \(\mathcal{F}:DO(N,m)\to\mathbb{R}\) and let \(f:\mathbb{R}^{(m+1)N}\to\mathbb{R}\) be defined as \[\mathcal{F}(\left(\sum_{r=0}^{m}a^{r}\mathcal{T}^{r}\right))=f((a^{r})).\] From the definition of variational derivative, we can see that \[\delta_{D}\mathcal{F}=\sum_{r=0}^{m}\mathcal{T}^{-r}\delta_{a^{r}}f\] where \(\delta_{a^{r}}f\) is the standard variational derivative of \(f\) in the \(a^{r}\) direction. Recall that the reduction of both brackets to the \(\mathrm{SL}(m)\) case is explicitly achieved using the left and right multiplication so that if \[L=\sum_{k=0}^{m}a^{k}\mathcal{T}^{k}\] we reduce by finding \(a,b\) such that \(a^{-1}b^{-1}Lb=D\). The same process is used if we are to work in the \(\mathrm{GL}(m)\) case, in which case we only need to find \(b\) so that \(a^{m}=-1\), while \(a^{0}\) is still unconstrained. If we equate the \(m\) terms, \(b^{-1}a^{m}b_{m}=-1\), or \(b^{-1}b_{m}=-\frac{1}{a^{m}}\), which has a unique solution if \((N,m)=1\). Notice that if \(a^{m}=-1\), then \(b=1\). 
In any case, the reduced variational derivative in \(\mathrm{GL}(m)\) (\(a^{m}=-1\)) will look like \[\delta_{D}\mathcal{F}=\sum_{r=0}^{m-1}\left[\mathcal{T}^{-r}\delta_{a^{r}}f+ \mathcal{T}^{-m}\beta^{r}\delta_{a^{r}}f\right]\] for some \(\beta^{r}\) that can easily be found explicitly, but which will cancel in our calculations and hence we do not need to know. **Theorem 6.1**.: _Assume \(N\) and \(m\) are co-prime. Then, the Poisson bracket (19) satisfies_ \[\{\mathcal{F},\mathcal{G}\}_{2}(D_{n})=(-1)^{m-1}\omega_{2}(X^{g},X^{f})\] _where \(\omega_{2}=\sum_{n}d(\frac{1}{\mathbf{d}_{n}}\theta_{n})\), that is_ \[\omega_{2}(X,Y) = \sum_{n}\frac{1}{\mathbf{d}_{n}}\bigg{[}Y\theta_{n}(X)-X\theta_{n }(Y)-\theta_{n}([Y,X])+\frac{1}{\mathbf{d}_{n}}(\theta_{n}(Y)X(\mathbf{d}_{n} )-\theta_{n}(X)Y(\mathbf{d}_{n}))\bigg{]},\] _and where \(X^{f}\) is as in (12)._ Proof.: Straightforward calculations show that \[D_{0}((\delta_{D}\mathcal{F})_{-}(D_{n})_{+})_{+}-((D_{n})_{+}( \delta_{D}\mathcal{F})_{-})_{+}D_{0}\] \[= \sum_{r=1}^{m-1}\left[\mathcal{T}^{m-r}a^{0}\delta_{a_{n}^{r}}f-a ^{0}\mathcal{T}^{-r}\delta_{a_{n}^{r}}f\mathcal{T}^{m}\right]+\sum_{r=1}^{m-2} \sum_{s=r+1}^{m-1}\left[a^{0}\mathcal{T}^{-r}a_{n}^{s}\delta_{a_{n}^{r}}f \mathcal{T}^{s}-a_{n}^{s}\mathcal{T}^{s-r}\delta_{a_{n}^{r}}fa^{0}\right].\] Using this expression, we get \[\{\mathcal{F},\mathcal{G}\}_{2}(D_{n}) = \sum_{n}\sum_{r=1}^{m-1}\delta_{a_{m}^{m-r}}g\left[\mathcal{T}^{ m-r}a^{0}\delta_{a_{n}^{r}}f-a^{0}\mathcal{T}^{-r}\delta_{a_{n}^{r}}f\mathcal{T}^{m }\right]\mathcal{T}^{r-m} \tag{21}\] \[+ \sum_{r=1}^{m-2}\sum_{s=r+1}^{m-1}\delta_{a_{n}^{s-r}}g\left[a^{0 }\mathcal{T}^{-r}a_{n}^{s}\delta_{a_{n}^{r}}f\mathcal{T}^{s}-a_{n}^{s}\mathcal{ T}^{s-r}a^{0}\delta_{a_{n}^{r}}f\right]\mathcal{T}^{r-s}. \tag{20}\] Let \(Q_{n}^{X}\in\mathfrak{gl}(m)\) be defined as in (13), that is, defined by the relation \[X(\rho)=\rho Q^{X}.\] If \(X=X^{f}\) is as in (12), and \(A\) is defined by \(\rho_{n+1}=\rho_{n}A\), then \(X(A)=A\mathcal{T}Q-QA\), and \[Q^{X}=(\mathbf{q},A\mathcal{T}\mathbf{q},\ldots,A\mathcal{T}A\mathcal{T}A \ldots\mathcal{T}A\mathcal{T}\mathbf{q})\] where \(\mathbf{q}\) is an invariant vector defined by \(X(\gamma)=\rho\mathbf{q}\) and \(\mathcal{T}\) appears \(m-1\) times in the last column of the matrix above. As pointed out before, the matrix \(Q^{X}\) was explicitly related to the left and right gradients in the Lie group (called \(\nabla\mathcal{F}\) and \(\nabla^{\prime}\mathcal{F}\) in (8)) in [7] and we will refer the reader to that paper for details of this relation, we will simply quote them next. We know from [7] that if \(X=X^{f}\), then \[Q_{n}^{X}e_{1}=\frac{1}{2}\big{(}\nabla_{n-1}\mathcal{F}+\nabla_{n}^{\prime} \mathcal{F}\big{)}e_{1} \tag{22}\] (equation (35) in [7]), where \[\nabla_{n}^{\prime}F=\begin{pmatrix}-a_{n}^{0}\delta_{a_{n}^{0}}f&-a_{n}^{0}( \delta_{\mathfrak{a}_{n}}f)^{T}\\ *&*\end{pmatrix},\quad\nabla_{n}F=\begin{pmatrix}*&*\\ -(\delta_{\mathfrak{a}_{n}}f)^{T}&-a_{n}^{0}\delta_{a_{n}^{0}}f-\mathfrak{a}_ {n}\cdot\delta_{\mathfrak{a}_{n}}f\end{pmatrix} \tag{23}\] (Lemma 4.5 in [7]). Recall that since (8) has been reduced using the Lie group discrete gauge action of \(H^{N}\), we have that \[\nabla_{n+1}^{\prime}F-\nabla_{n}F\in\mathfrak{h}^{0}\] for any \(n\), where \(\mathfrak{h}\) is the Lie algebra of \(H\), and \(\mathfrak{h}^{0}\) is its annihilator. Using this information we can find both \(Q_{1,r+1}^{X}\) and \(Q_{r+1,1}^{X}\). 
Indeed \[Q^{X}e_{r+1}=K\mathcal{T}Qe_{r}\mathcal{T}^{-1}=K\mathcal{T}K\ldots\mathcal{ T}K\mathcal{T}Qe_{1}\mathcal{T}^{-r}=\frac{1}{2}K\mathcal{T}K\ldots\mathcal{T}K( \nabla\mathcal{F}+\mathcal{T}\nabla^{\prime}\mathcal{F}\mathcal{T}^{-1})e_{1 }\mathcal{T}^{-r}\] with \(\mathcal{T}\) appearing \(r\) times. We notice that when calculating \(Q_{1,r+1}^{X}\) no entries from the first row of \(\nabla^{\prime}\mathcal{F}\) are involved. Therefore, since \[\mathcal{T}^{-1}\nabla_{n}\mathcal{F}\mathcal{T}-\nabla_{n}^{\prime}\mathcal{ F}\in\mathfrak{h}^{0} \tag{24}\] we can substitute \(\nabla\mathcal{F}\) by \(\mathcal{T}\nabla^{\prime}\mathcal{F}\mathcal{T}^{-1}\) above to obtain \[e_{1}^{T}Q^{X}e_{r+1}=K\mathcal{T}K\ldots\mathcal{T}K(\nabla\mathcal{F})e_{1} \mathcal{T}^{-r}.\] Finally, \(K\nabla\mathcal{F}=\nabla^{\prime}\mathcal{F}K\), and so \[e_{1}^{T}Q^{X}e_{r+1} = e_{1}^{T}K\mathcal{T}K\ldots\mathcal{T}K\mathcal{T}(\nabla^{ \prime}\mathcal{F})Ke_{1}\mathcal{T}^{-r}=e_{1}^{T}K\mathcal{T}K\ldots \mathcal{T}K\mathcal{T}(\nabla^{\prime}\mathcal{F})e_{2}\mathcal{T}^{-r}\] \[= e_{1}^{T}K\mathcal{T}K\ldots\mathcal{T}K\nabla\mathcal{F}e_{2} \mathcal{T}^{-r+1}=e_{1}^{T}K\mathcal{T}K\ldots\mathcal{T}\nabla^{\prime} \mathcal{F}Ke_{2}\mathcal{T}^{-r+1}=\cdots=e_{1}^{T}K\mathcal{T}\nabla^{\prime }\mathcal{F}e_{r}\mathcal{T}^{-1} \tag{25}\] \[Q_{1,r+1}=e_{1}^{T}\nabla^{\prime}\mathcal{F}e_{r+1}=-a^{0}\delta_{a^{r}}f\] for any \(r=1,2,\ldots,m-1\). Also from (22), (24) and (23) we see that if \(r=2,\ldots,m\) \[Q_{r,1}^{X} = \frac{e_{r}^{T}}{2}(\mathcal{T}^{-1}\nabla\mathcal{F}\mathcal{T}+ \nabla^{\prime}\mathcal{F})e_{1}=e_{r}^{T}\mathcal{T}^{-1}\nabla\mathcal{F}e_{1 }\mathcal{T}=e_{r+1}^{T}\mathcal{T}^{-1}K\nabla\mathcal{F}e_{1}\mathcal{T}- \mathcal{T}^{-1}a^{r}e_{m}^{T}\nabla\mathcal{F}e_{1}\mathcal{T}\] \[= e_{r+1}^{T}\mathcal{T}^{-1}K\nabla\mathcal{F}e_{1}\mathcal{T}+ \mathcal{T}^{-1}a^{r}\delta_{a^{1}}f\mathcal{T}=e_{r+1}^{T}\mathcal{T}^{-1} \nabla^{\prime}\mathcal{F}Ke_{1}\mathcal{T}+\mathcal{T}^{-1}a^{r}\delta_{a^{1} }f\mathcal{T}\] \[= e_{r+1}^{T}\mathcal{T}^{-1}\nabla^{\prime}\mathcal{F}e_{2} \mathcal{T}+\mathcal{T}^{-1}a^{r}\delta_{a^{1}}f\mathcal{T}=e_{r+1}^{T} \mathcal{T}^{-2}\nabla\mathcal{F}e_{2}\mathcal{T}^{2}+\mathcal{T}^{-1}a^{r} \delta_{a^{1}}f\mathcal{T}\] Iterating this process we obtain that \[Q_{r,1} = \sum_{s=1}^{m-r}\mathcal{T}^{-s}a^{s+r-1}\delta_{a^{s}}f\mathcal{ T}^{s}+e_{m}^{T}\mathcal{T}^{r-m-1}\nabla\mathcal{F}e_{m-r+1}\mathcal{T}^{m-r+1} \tag{27}\] \[= \sum_{s=1}^{m-r}\mathcal{T}^{-s}a^{s+r-1}\delta_{a^{s}}f\mathcal{T} ^{-s}-\mathcal{T}^{r-m-1}\delta_{a^{m-r+1}}f\mathcal{T}^{m-r+1}. \tag{26}\] We are now ready to put everything together. 
Using all the information above we can conclude that \[\sum_{n}\frac{1}{\mathbf{d}_{n}}d(\theta_{n})(Y,X)=\sum_{n}\frac{1}{ \mathbf{d}_{n}}\left[Y\theta_{n}(X)-X\theta_{n}(Y)-\theta_{n}([Y,X])\right]\] \[= \sum_{n}\frac{1}{\mathbf{d}_{n}}\sum_{k=1}^{m-1}\left[\det(X_{n}, \gamma_{n+1},\ldots,Y_{n+k},\ldots,\gamma_{n+m-1})-det(Y_{n},\gamma_{n+1}, \ldots,X_{n+k},\ldots,\gamma_{n+m-1})\right]\] \[= \sum_{n}\left[(Q_{n}^{X})_{1,1}\mathrm{Tr}(Q_{n}^{Y})-(Q_{n}^{Y} )_{1,1}\mathrm{Tr}(Q_{n}^{X})+\sum_{k=1}^{m}\left[(Q_{n}^{Y})_{k,1}(Q_{n}^{X}) _{1,k}-(Q_{n}^{X})_{k,1}(Q_{n}^{Y})_{1,k}\right]\right]\] \[= \sum_{n}\left[\frac{1}{\mathbf{d}_{n}^{2}}\theta_{n}(X)Y(\mathbf{ d}_{n})-\frac{1}{\mathbf{d}_{n}^{2}}\theta_{n}(Y)X(\mathbf{d}_{n})+\sum_{k=1}^{m} \left[(Q_{n}^{Y})_{k,1}(Q_{n}^{X})_{1,k}-(Q_{n}^{X})_{k,1}(Q_{n}^{Y})_{1,k} \right]\right],\] and substituting (25) and (26) above, we get, after some minor rewriting, \[\omega_{2}(X^{f},X^{g}) = \sum_{n}\left(\sum_{r=1}^{m-1}a_{n}^{0}\delta_{a_{n-r}^{r}}g \delta_{a_{n}^{m-r}}f-\sum_{r=1}^{m-1}a_{n+m-r}^{0}\delta_{a_{n}^{m-r}}f\delta _{a_{n+m-r}^{r}}g\right)\] \[- \sum_{n}\left(\sum_{r=1}^{m-2}\sum_{s=1}^{m-r-1}a_{n}^{0}a_{n+s}^ {r+s}\delta_{a_{r}^{r}}f\delta_{a_{n+s}^{s}}g-\sum_{r=1}^{m-2}\sum_{s=r+1}^{m-1 }a_{n+m-s}^{0}a_{n}^{r+m-s}\delta_{a_{n+m-s}^{r}}g\delta_{a_{n}^{m-s}}f\right).\] We can compare this expression times \((-1)^{m-1}\) to (20) and, after some straightforward modifications, conclude that they are equal. **Theorem 6.2**.: _The Poisson bracket (18) coincides with the negative of the bracket previously defined as a reduction of (8). Furthermore, let \(D_{n}=\sum_{k=0}^{m-1}a_{n}^{k}\mathcal{T}^{k}-\mathcal{T}^{m}\) so that \(D_{n}(\gamma)=0\) for any lift of its associated projective polygon \(\gamma\). Then_ \[\{\mathcal{F},\mathcal{G}\}_{1}(D_{n})=(-1)^{m-1}\omega_{1}(X^{g},X^{f})\] _where_ \[\omega_{1}(X,Y)=\frac{1}{2}\sum_{n}\frac{1}{\mathbf{d}_{n+1}}\left[X\theta_{n }(D(Y))-Y\theta_{n}(D(X))-\theta_{n}(XD(Y)-YD(X))\right)]\] \[-\frac{1}{\mathbf{d}_{n}\mathbf{d}_{n+1}}\left[\theta_{n}(D(Y))X(\mathbf{d}_{n })-\theta_{n}(D(X))Y(\mathbf{d}_{n})\right].\] Proof.: The fact that the reduction of (8) and (18) are equal was proved in [7]. 
Given that \(D_{n}(\gamma)=0\) for all \(n\), and differentiating in the direction of a vector field \(Y\), we obtain \[(D_{n})_{t_{Y}}(\gamma)+D_{n}Y=0\quad\rightarrow\quad D_{n}(Y)=-\sum_{s=0}^{m- 1}(a_{n}^{s})_{t_{Y}}\gamma_{n+s}.\] From here, if the \(n+k\) field is placed in the \(\gamma_{n+k}\) position in \(\mathbf{d}_{n}\), we have \[X\theta_{n}(D(Y))-Y\theta_{n}(DX))-\theta_{n}(XD(Y)-YD(X))\] \[= \sum_{k=1}^{m-1}\left[\det(D_{n}Y,\gamma_{n+1},\ldots,X_{n+k}, \ldots,\gamma_{n+m-1})-\det(D_{n}X,\gamma_{n+1},\ldots,Y_{n+k},\ldots,\gamma_{n +m-1})\right]\] \[= \sum_{k=1}^{m-1}(a_{n}^{k})_{t_{Y}}\det(X_{n+k},\gamma_{n+1}, \ldots,\gamma_{n+k},\ldots,\gamma_{n+m-1})-\sum_{k=1}^{m-1}(a_{n}^{k})_{t_{X}} \det(Y_{n+k},\gamma_{n+1},\ldots,\gamma_{n+k},\ldots,\gamma_{n+m-1})\] \[- (a_{n}^{0})_{t_{Y}}\sum_{k=1}^{m-1}\det(\gamma_{n},\gamma_{n+1}, \ldots,X_{n+k},\ldots,\gamma_{n+m-1})+(a_{n}^{0})_{t_{X}}\sum_{k=1}^{m-1}\det (\gamma_{n},\gamma_{n+1},\ldots,Y_{n+k},\ldots,\gamma_{n+m-1})\] \[= \sum_{k=0}^{m-1}(a_{n}^{k})_{t_{Y}}\det(X_{n+k},\gamma_{n+1}, \ldots,\gamma_{n+k},\ldots,\gamma_{n+m-1})-\sum_{k=0}^{m-1}(a_{n}^{k})_{t_{X}} \det(Y_{n+k},\gamma_{n+1},\ldots,\gamma_{n+k},\ldots,\gamma_{n+m-1})\] \[- (a_{n}^{0})_{t_{Y}}X(\mathbf{d}_{n})+(a_{n}^{0})_{t_{X}}Y( \mathbf{d}_{n}).\] As in the previous proof, and using (25) \[\det(X_{n+k}^{f},\gamma_{n+1},\ldots,\gamma_{n+k},\ldots,\gamma_{n+m-1})= \mathbf{d}_{n}(Q_{n}^{X^{f}})_{1,k+1}=-\mathbf{d}_{n}a_{n}^{0}\delta_{a_{n}^{ k}}f.\] Also, given that \(D_{n}(\gamma)=0\), we conclude that \(a_{n}^{0}=(-1)^{m-1}\frac{\mathbf{d}_{n+1}}{\mathbf{d}_{n}}\). Therefore \[\frac{1}{\mathbf{d}_{n+1}}det(X_{n+k}^{f},\gamma_{n+1},\ldots,\gamma_{n+k}, \ldots,\gamma_{n+m-1})=(-1)^{m}\delta_{a_{n}^{k}}f\] and \[\sum_{n}\frac{1}{\mathbf{d}_{n+1}}\sum_{k=0}^{m-1}(a_{n}^{k})_{t_ {X^{g}}}\det(X_{n+k}^{f},\gamma_{n+1},\ldots,\gamma_{n+k},\ldots,\gamma_{n+m-1})\] \[= (-1)^{m}\sum_{n}(a_{n}^{k})_{t_{X^{g}}}\delta_{a_{n}^{k}}f=(-1)^{ m-1}\{f,g\}_{1}(\mathbf{a}).\] Finally \[(a_{n}^{0})_{t_{X}}=\frac{1}{\mathbf{d}_{n}}\det(L_{n}(X),\gamma_{n+1}, \ldots,\gamma_{n+m-1})=\frac{1}{\mathbf{d}_{n}}\theta_{n}(D(X)).\] The theorem follows. **Corollary 6.3**.: _Both \(\omega_{1}\) and \(\omega_{2}\) are closed forms when defined on the space of invariant vector fields, and hence pre-symplectic. Furthermore, \(X^{f}=Y^{f}\) is the \(f\)-Hamiltonian vector field with respect to \(\omega_{1}\)._ Proof.: Using that \(X^{f}\) induces the \(f\)-Hamiltonian evolution with respect to \(\{,\}_{1}\) on the invariants \(\mathbf{a}\), we get that \(X^{f}\) is the Hamiltonian vector field for \(\omega_{1}\) when restricted to invariant fields. That is, \(\omega_{1}(X^{f},Y)=Y(f)\) for every \(f\) invariant. We then get that \([X^{f},X^{g}]=X^{\{f,g\}_{1}}\) and from here \(\omega_{1}\) will be a closed form on the space of invariant vector fields since the property is equivalent to \(\{,\}_{1}\) being Poisson. We obtain directly that \(\omega_{2}\) is also closed since it is an exact form. ## 7. Commuting family of Hamiltonians In this section we will assume that we are working on the \(\operatorname{SL}(m)\) case. Notice that while the previous results were proven for the \(\operatorname{GL}(m)\) case, the \(\operatorname{SL}(m)\) case is directly obtained through reduction. 
We will prove that, as it happened in the continuous case, there exists a hierarchy of completely integrable systems with \(\{,\}_{1}\)-Hamiltonians given by \[\mathcal{F}_{s}(D)=\sum_{n}\operatorname{Tr}(D_{n}^{s/m}) \tag{28}\] for any \(s=1,\dots,m-1\), where the fractional powers are naturally defined. We will do so by showing that the Hamiltonians above are in involution with respect to both Poisson brackets. In addition to this integrable system there is an additional system, the Boussinesq Lattice, which is also biHamiltonian with respect to our pencil (see [10]), with \(\{,\}_{2}\)-Hamiltonian given by \[\mathcal{H}(D)=\sum_{n}\ln a_{n}^{1}.\] The authors of [10] showed that \(\mathcal{H}\) is in the kernel of the Poisson bracket \(\{,\}_{1}-\{,\}_{2}\). **Proposition 7.1**.: _Let \(\mathcal{F}_{s}\) be defined as in (28). Then the variational derivative of \(\mathcal{F}_{s}\) after the reduction to \(\operatorname{SL}(m)\) is given by the difference operator_ \[Z^{s}=\frac{s}{m}D^{\frac{s-m}{m}}+(-1)^{m}\frac{s}{m}\mathrm{Tr}D^{s/m} \tag{29}\] _for any \(s\)._ Proof.: First of all we will investigate the effect of the reduction by left and right multiplication on \(L\) on the fractional power. Denote by \(\hat{L}(\epsilon)=a^{-1}b^{-1}(L+\epsilon V)b\) the reduced operator, and define \(a^{\prime}=\frac{d}{d\epsilon}|_{\epsilon=0}a\) with \(a^{0}=(-1)^{m-1},a^{m}=-1\). Since \(a^{0}(\epsilon)=\mathrm{Tr}(L+\epsilon V)\) and \(a^{m}=\mathrm{Tr}(\mathcal{T}^{-m}(L+\epsilon V))\), we have that \[a^{\prime}=(-1)^{m-1}(a^{0})^{\prime}=(-1)^{m-1}\mathrm{Tr}V.\] Define \[Z(1,V)=((\hat{L}(\epsilon))^{1/m})^{\prime}|_{D}.\] Using \((\hat{L}^{1/m})^{m}=\hat{L}=a^{-1}b^{-1}(L+\epsilon V)b\), we conclude that, if we differentiate and evaluate at \(D\), we obtain \[\sum_{s=0}^{m-1}D^{s/m}Z(1,V)D^{\frac{m-1-s}{m}}=V-a^{\prime}D-b^{\prime}L+ lb^{\prime}\] and from here \[\sum_{i=0}^{m-1}D^{i/m}Z(1,V)D^{-i/m}=(V-a^{\prime}D-b^{\prime}D+Db^{\prime}) D^{\frac{1-m}{m}}.\] Applying the trace \[m\mathrm{Tr}(Z(1,V))=\mathrm{Tr}(VD^{\frac{1-m}{m}})+(-1)^{m} \mathrm{Tr}V\mathrm{Tr}D^{1/m}-\mathrm{Tr}(b^{\prime}D^{1/m}-Db^{\prime}D^{ \frac{1-m}{m}})\] \[=\mathrm{Tr}(V\left(D^{\frac{1-m}{m}}+(-1)^{m}\mathrm{Tr}D^{1/m} \right))\] and therefore \(Z^{1}=\frac{1}{m}\left(D^{\frac{1-m}{m}}+(-1)^{m}\mathrm{Tr}D^{1/m}\right)\). Likewise, if \[Z(s,V)=((\hat{L}(\epsilon))^{s/m})^{\prime}|_{D}\] we have that \[Z(s,V)=\sum_{i=0}^{s-1}D^{i/m}Z(1,V)D^{\frac{s-i-1}{m}}=\sum_{i=0}^{s-1}D^{i/m} \Big{[}Z(1,V)D^{\frac{s-1}{m}}\Big{]}D^{-i/m}.\] We also know that \[\sum_{i=0}^{m-1}D^{i/m}Z(1,V)D^{\frac{s-1}{m}}D^{-i/m}\] \[=(V-a^{\prime}D-b^{\prime}D+Db^{\prime})D^{\frac{1+m}{m}}D^{\frac{s-1}{m}}=(V- a^{\prime}D-b^{\prime}D+Db^{\prime})D^{\frac{s-m}{m}}\] and from here \[m\mathrm{Tr}(Z(1,V)D^{\frac{s-1}{m}})=\mathrm{Tr}(VD^{\frac{s-m}{m}}-a^{\prime }D^{s/m})\] Putting everything together \[\mathrm{Tr}(Z(s,V))=s\mathrm{Tr}(Z(1,V)D^{\frac{s-1}{m}})=\frac{s}{m}\mathrm{ Tr}(VD^{\frac{s-m}{m}}-(-1)^{m-1}\mathrm{Tr}VD^{s/m})\] \[=\frac{s}{m}\mathrm{Tr}(V(D^{\frac{s-m}{m}}+(-1)^{m}\mathrm{Tr}D^{s/m})\] and so \(Z^{s}=\frac{s}{m}(D^{\frac{s-m}{m}}+(-1)^{m}\mathrm{Tr}D^{s/m})\), as stated. **Theorem 7.2**.: _The family \(\{\mathcal{F}_{s}\}_{s=1}^{\infty}\) commute with respect to both (18) and (19), and so they generate an integrable system hierarchy. 
Furthermore, the kernel of \(\omega_{2}\) is at least \(m-1\) dimensional._ Proof.: The proof that they commute with respect to (19) is straightforward substituting \(Z^{s}=\delta_{D}\mathcal{F}\) and \(Z^{p}=\delta_{D}\mathcal{G}\) in (19) and observing that they both vanish when the \(\mathcal{T}^{0}\) term in \(D_{n}\) is constant and independent from \(n\). Indeed \([Z^{s}_{-},D_{+}]_{+}=[Z^{s}_{-},D]_{+}=[D^{\frac{s-m}{m}},D]_{+}=0\), and so \(\mathcal{F}_{s}\) is in the kernel of (19) for all \(s\), proving that the dimension of the kernel of \(\omega_{2}\) is at least \(m-1\). We also have that \[\{\mathcal{F}_{p},\mathcal{F}_{s}\}_{1}(D)=\frac{p}{m}\langle r(DZ^{p})D-Dr(Z ^{p}D),Z^{s}\rangle\] \[=\frac{p}{2m}(D_{+}^{p/m}D+(-1)^{m}D_{+}\mathrm{Tr}(D^{p/m})D-D_{-}^{p/m}D-DD_ {+}^{p/m}+(-1)^{m-1}D\mathrm{Tr}(D^{p/m})D_{+}+DD_{-}^{p/m},Z^{s})\] \[=\frac{p}{2m}([D_{+}^{p/m},D]+[\mathrm{Tr}(D^{p/m},D]+[D,D_{-}^{p/m}],Z^{s})= \frac{p}{m}([D_{+}^{p/m},D]+[\mathrm{Tr}(D^{p/m}),D],Z^{s})\] where we have used that \([D^{p/m},D]=0\). Substituting \(Z^{s}\) and noticing that the zero order term in \([D_{+}^{p/m},D]+[\mathrm{Tr}(D^{p/m}),D]\) vanishes when \(a^{0}\) is constant, we get \[\{\mathcal{F}_{p},\mathcal{F}_{s}\}_{1}(D)=\frac{ps}{m^{2}}\langle[D_{+}^{p/m },D]+[\mathrm{Tr}(D^{p/m}),D],D^{\frac{s-m}{m}}\rangle.\] We also have \[\mathrm{Tr}([\mathrm{Tr}(D^{p/m}),D]D^{\frac{s-m}{m}})=\mathrm{Tr}(D^{p/m})[D,D^{\frac{s-m}{m}}])=0,\] \[\mathrm{Tr}([D_{+}^{p/m},D]D^{\frac{s-m}{m}})=\mathrm{Tr}(D_{+}^{p/m}[D,D^{ \frac{s-m}{m}}])=0,\] and therefore, \(\{\mathcal{F}_{p},\mathcal{F}_{s}\}(D)=0\) for any \(p,s\). The existence of this hierarchy was conjectured in [3] as linked to the two elements in the kernel of \(\omega_{2}\) described in that paper (the statement in that paper is not correct, as the dimension is higher than 2. A brief correction is forthcoming). Indeed, the two vector fields in that paper, \(X^{1}\) and \(X^{2}\) coincide with \(X^{\mathcal{F}_{1}}\) and \(X^{-\mathcal{F}_{2}}\), as shown next. **Theorem 7.3**.: _Let \(X^{f}\) be the polygonal invariant vector field inducing the \(f\)-Hamiltonian on \(D\), where \(D(\gamma)=0\). Then \(X^{\mathcal{F}_{s}}\) is defined by the nonnegative part of \(D^{s/m}\)_ \[X^{\mathcal{F}_{s}}=\frac{s}{m}(D^{s/m}-D_{-}^{s/m})(\gamma)=\frac{s}{m}(D_{+} ^{s/m}+\operatorname{Tr}(D^{s/m}))(\gamma).\] Proof.: Give the Hamiltonian \(\mathcal{F}_{s}\) as in (28) and its variation (29), its Hamiltonian evolution with respect to (18) is given by \[D_{t}=\frac{1}{2}\left([(DZ^{s})_{+}-(DZ^{s})_{-}]D-D[(Z^{s}D)_{+}-(Z^{s}D)_{ -}]\right).\] Upon substituting (29) we obtain \[D_{t}=\frac{s}{2m}\left(D_{+}^{s/m}D-DD_{+}^{s/m}+DD_{-}^{s/m}-D_{-}^{s/m}D+( -1)^{m}(D_{+}D_{0}^{s/m}D-DD_{0}^{s/m}D_{+})\right).\] Next we observe that \([D,D^{s/m}]=0\) and so \([D,D_{-}^{s/m}]=-[D,D_{+}^{s/m}+D_{0}^{s/m}]\). We also note that \[D_{+}D_{0}^{s/m}D-DD_{0}^{s/m}D_{+} = DD_{0}^{s/m}D-DD_{0}^{s/m}D-D_{0}D_{0}^{s/m}D+DD_{0}^{s/m}D_{0}\] \[= (-1)^{m-1}(DD_{0}^{s/m}-D_{0}^{s/m}D).\] Substituting these above we obtain \[D_{t}=\frac{s}{m}\left(D_{+}^{s/m}D+D_{0}^{s/m})D-D(D_{+}^{s/m}+D_{0}^{s/m}) \right).\] Finally, \(D(\gamma)=0\) implies that \[D(\gamma_{t})=-D_{t}(\gamma)=\frac{s}{m}D(D_{+}^{s/m}+D_{0}^{s/m})(\gamma).\] From here, the theorem follows as the kernel of \(D\) does not include any invariant vector field.
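To make the Hamiltonians \(\mathcal{F}_{s}(D)=\sum_{n}\operatorname{Tr}(D^{s/m})\) more tangible, the following sketch computes a fractional power of a difference operator numerically. It is deliberately a toy version of the construction above: the coefficients are taken constant (so the coefficient algebra is commutative) and \(m=3\) is odd, in which case \(D=-\mathcal{T}^{3}(1-u)\) with \(u=a^{2}\mathcal{T}^{-1}+a^{1}\mathcal{T}^{-2}+a^{0}\mathcal{T}^{-3}\), and \(D^{1/3}=-\mathcal{T}(1-u)^{1/3}\) can be obtained from a truncated binomial series in \(\mathcal{T}^{-1}\). The genuinely \(N\)-periodic, noncommutative case instead requires solving for the root order by order; the numerical coefficient values below are arbitrary.

```python
# Toy sketch (constant coefficients, m = 3): fractional powers of D and the per-site
# traces Tr(D^{s/m}).  Operators are dicts {power of T: scalar coefficient}.

def mul(A, B, floor=-12):
    """Product of two Laurent 'operators' in T, truncated below T^floor."""
    C = {}
    for i, a in A.items():
        for j, b in B.items():
            if i + j >= floor:
                C[i + j] = C.get(i + j, 0.0) + a * b
    return C

def add(A, B):
    C = dict(A)
    for j, b in B.items():
        C[j] = C.get(j, 0.0) + b
    return C

def binom(alpha, k):
    out = 1.0
    for i in range(k):
        out *= (alpha - i) / (i + 1)
    return out

m = 3
a2, a1, a0 = 0.7, -0.4, 1.0                    # a^0 = (-1)^(m-1) = 1 after the reduction
D = {3: -1.0, 2: a2, 1: a1, 0: a0}

u = {-1: a2, -2: a1, -3: a0}                   # D = -T^3 (1 - u)
S, term = {0: 1.0}, {0: 1.0}
for k in range(1, 10):                         # (1 - u)^{1/m} = sum_k binom(1/m, k) (-u)^k
    term = mul(term, {p: -c for p, c in u.items()})
    S = add(S, {p: binom(1.0 / m, k) * c for p, c in term.items()})
R = mul({1: -1.0}, S)                          # R = D^{1/3} = -T (1 - u)^{1/3}

R3 = mul(mul(R, R), R)
print({p: round(R3.get(p, 0.0), 6) for p in range(3, -4, -1)})   # recovers D; negative powers ~ 0
print("per-site Tr(D^{1/3}):", R.get(0, 0.0))
print("per-site Tr(D^{2/3}):", mul(R, R).get(0, 0.0))
```

With constant coefficients every site contributes the same amount, so \(\mathcal{F}_{s}(D)\) is just \(N\) times the printed value; in the periodic case the \(\mathcal{T}^{0}\) coefficient varies with \(n\) and is summed over one period.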
2309.13942
Speed Co-Augmentation for Unsupervised Audio-Visual Pre-training
This work aims to improve unsupervised audio-visual pre-training. Inspired by the efficacy of data augmentation in visual contrastive learning, we propose a novel speed co-augmentation method that randomly changes the playback speeds of both audio and video data. Despite its simplicity, the speed co-augmentation method possesses two compelling attributes: (1) it increases the diversity of audio-visual pairs and doubles the size of negative pairs, resulting in a significant enhancement in the learned representations, and (2) it changes the strict correlation between audio-visual pairs but introduces a partial relationship between the augmented pairs, which is modeled by our proposed SoftInfoNCE loss to further boost the performance. Experimental results show that the proposed method significantly improves the learned representations when compared to vanilla audio-visual contrastive learning.
Jiangliu Wang, Jianbo Jiao, Yibing Song, Stephen James, Zhan Tong, Chongjian Ge, Pieter Abbeel, Yun-hui Liu
2023-09-25T08:22:30Z
http://arxiv.org/abs/2309.13942v1
# Speed Co-Augmentation for Unsupervised Audio-Visual Pre-training ###### Abstract This work aims to improve unsupervised audio-visual pre-training. Inspired by the efficacy of data augmentation in visual contrastive learning, we propose a novel speed co-augmentation method that randomly changes the playback speeds of both audio and video data. Despite its simplicity, the speed co-augmentation method possesses two compelling attributes: (1) it increases the diversity of audio-visual pairs and doubles the size of negative pairs, resulting in a significant enhancement in the learned representations, and (2) it changes the strict correlation between audio-visual pairs but introduces a partial relationship between the augmented pairs, which is modeled by our proposed SoftInfoNCE loss1 to further boost the performance. Experimental results show that the proposed method significantly improves the learned representations when compared to vanilla audio-visual contrastive learning. Footnote 1: Our study [6] also validates the effectiveness of the proposed “SoftInfoNCE loss” in single-modality contrastive learning. ## 1 Introduction Audio-visual contrastive learning [14, 17] for unsupervised pre-training has received growing attention due to the observation that video content is usually accompanied by audio signals. The alignment of signals across audio and video forms a natural correspondence to benefit contrastive learning. Under this framework, quite a few existing approaches [13, 17] focus on achieving better discrimination of positive and negative pairs to improve audio-visual representation learning. While promising results have been achieved, most works [12, 13] apply data augmentations to each modality individually, which may potentially limit the diversity of the generated data views and restrict the potential of augmentation for contrastive learning. In this work, we propose a novel technique termed "speed co-augmentation" for unsupervised audio-visual pre-training, which involves modifying the playback speeds of both audio and visual data simultaneously. The speed co-augmentation method enhances the diversity of audio-visual pairs and doubles the number of negative pairs during training, which has been shown to be a crucial aspect of contrastive learning [5]. Our experimental results demonstrate that this simple co-augmentation method yields a significant performance improvement of 10.0% over the baseline audio-visual contrastive learning approach on the HMDB51 [10] dataset. Meanwhile, it was observed that after the speed co-augmentation, the audio and video pairs derived from the same clip are no longer strictly positively related, as is com Figure 1: An intuitive example of _semantic shift_ after applying speed co-augmentation on audio-visual pairs. Top: when speeding up video data, the semantic meaning of the content doesn’t change drastically. Bottom: when speeding up audio data, the semantic meaning of the content changed drastically. monly assumed. As an intuitive special example (Fig. 1), a sped-up _cow_ still visually looks like a _cow_, but a sped-up _cow_ may auditorily sound like a _cat_. To generalize, we posit that there exists a partial relationship between the augmented audio-visual pairs, which is influenced by the degree of speed augmentation applied. To capture this relationship, we introduce a cross-affinity module that automatically learns the audio-visual correlations across different views. 
The resulting learned correlations quantitatively measure the audio-visual consistency and are employed for computations of SoftInfoNCE loss, leading to a further performance boost. Combining the proposed speed augmentation and the cross-affinity module, we present a Speed-augmented visual-audio Contrastive Learning framework, which we call _SvaCLR_. ## 2 Method Our target is to train video and audio encoders via unsupervised contrastive learning. Given an aligned pair \((v,a)\), we apply speed-up augmentations on both audio \(a\) and video \(v\) data to synthesize two additional views (_i.e_., \(\widetilde{v}\) and \(\widetilde{a}\)). These audio and video samples are then fed into the audio and video encoders \(f(\cdot)\) and \(g(\cdot)\) to extract representations \(y\). We then project the video and audio representations separately via projectors \(h_{v}(\cdot)\) and \(h_{a}(\cdot)\). The projected embeddings \(z\) are then utilized to compute the contrastive InfoNCE loss [15]. In parallel, we introduce a cross-affinity module to model the audio-visual embedding correlations. The modeled correlations are used to reweigh the InfoNCE loss, _i.e_., the proposed SoftInfoNCE loss, when learning audio-visual representations. In the following, we first introduce the speed-up augmentation with the vanilla InfoNCE loss and then introduce the cross-affinity module to re-weigh the InfoNCE loss (_i.e_., SoftInfoNCE loss). ### Speed co-augmentation For speed co-augmentation, we use a speed library to diversify training data pairs. We use \(\mathcal{T}\) to represent the speed co-augmentation set in which the maximum speed is denoted by \(S\). Each time, two speed augmentation factors for the audio and video data are selected randomly from \(\mathcal{T}\) and are applied to each data, respectively. In practice, the proposed speed co-augmentation is implemented by applying different sampling rates of the audio and video samples. Before computing the contrastive InfoNCE loss [15], we project the video and audio representations separately via projectors. We use one video projector \(h_{v}(\cdot)\) to connect the video encoder and use one audio projector \(h_{a}(\cdot)\) to connect the audio encoder. The projected representations are then utilized to compute the contrastive InfoNCE loss as follows: \[L(i,j)=\frac{\exp(z_{i}\cdot z_{j}\ /\ \eta)}{\exp(z_{i}\cdot z_{j}\ /\ \eta)+ \sum\limits_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\exp(z_{i}\cdot z_{j}\ /\ \eta)} \tag{1}\] where \(z_{i}=h_{a}(y_{i})\) is the audio projection, \(z_{j}=h_{v}(y_{j})\) is the video projection, and \(\eta\) is a constant temperature value. The dot product measures the similarity between the projected audio and video representations. For the input audio \(a_{i}\), the summation term is computed by utilizing all the video clips \(v_{j}\), as long as \(a_{i}\) and \(v_{j}\) are from different samples (_i.e_., unpaired). ### Cross-affinity module We propose a cross-affinity module to measure the correlations between the augmented video and audio representations. Fig. 2 illustrates the proposed module. Given the audio embedding \(y_{i}\) and the video embedding \(y_{j}\), the cross-modality attention \(\lambda(a_{i}^{\tau_{1}},v_{j}^{\tau_{2}})\) can be computed as follows: \[\lambda(a_{i}^{\tau_{1}},v_{j}^{\tau_{2}})=\mathrm{softmax}\left[l(y_{i})\times l (y_{j}^{\intercal})\right], \tag{2}\] where \(l(\cdot)\) is a mapping with learnable parameters. 
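To fix ideas, here is a PyTorch-style sketch (not the authors' released code) of the two ingredients described in this section: speed co-augmentation realized by resampling each modality with an independently drawn playback speed, and the cross-affinity weights of Eq. (2), with an identity mapping used for \(l(\cdot)\), reweighing the InfoNCE terms of Eq. (1). The speed set, the temperature, the batch-level \(N\times N\) affinity (rather than the per-pair two-by-two matrix of Fig. 2), and the stop-gradient on the weights are assumptions made only for this sketch.

```python
# Hedged sketch of speed co-augmentation and affinity-weighted InfoNCE; encoder
# architectures and most hyper-parameters are placeholders, not the paper's values.
import torch
import torch.nn.functional as F

SPEEDS = [1.0, 2.0, 4.0]   # assumed speed library; the paper denotes its maximum by S

def speed_augment(video, audio):
    """video: (T, C, H, W) float frames; audio: (L,) float waveform.
    Each modality gets its own random playback speed via resampling."""
    sv = SPEEDS[torch.randint(len(SPEEDS), (1,)).item()]
    sa = SPEEDS[torch.randint(len(SPEEDS), (1,)).item()]
    idx = torch.arange(0, video.shape[0], sv).long().clamp(max=video.shape[0] - 1)
    video_aug = video[idx]                                  # sped-up frame sampling
    new_len = max(int(audio.shape[0] / sa), 1)
    audio_aug = F.interpolate(audio[None, None, :], size=new_len,
                              mode="linear", align_corners=False)[0, 0]
    return video_aug, audio_aug

def soft_info_nce(z_audio, z_video, y_audio, y_video, eta=0.1):
    """z_*: projected embeddings (N, d); y_*: encoder embeddings fed to the
    cross-affinity module (identity l, one of the mappings examined)."""
    z_audio = F.normalize(z_audio, dim=1)
    z_video = F.normalize(z_video, dim=1)
    sim = z_audio @ z_video.t() / eta                        # audio-video similarities
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)   # log of Eq. (1)
    lam = torch.softmax(y_audio @ y_video.t(), dim=1)            # Eq. (2) with identity l
    return -(lam.detach() * log_prob).sum(dim=1).mean()          # affinity-weighted loss
```

In the paper's formulation the affinity is evaluated per co-augmented audio-visual view, giving the two-by-two matrix of Fig. 2; the batched version above is only meant to convey how the learned correlations reweigh the contrastive terms.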
The projected video and audio embeddings are correlated via the matrix multiplication operation. In practice, three different mapping functions are examined, including identity, linear, and nonlinear mappings, among which we find that the identity mapping achieves the best results. We speculate this is because a heavier mapping could deteriorate the ability of the encoders to learn general representations. We compute the cross-modality attention in Eq. 2 for one co-augmented audio-visual view. The cross-modality affinity can be formulated as a two-by-two matrix (as shown in Fig. 2 right). Each element in this matrix represents the correlation between the speed-augmented audio and video views. By using these elements, we reweigh the contributions of each co-augmented audio-visual view when computing the contrastive loss. Figure 2: The proposed cross-affinity module for SoftInfoNCE loss computation. The cross-modality attention module takes the video and audio representations of co-augmented audio-visual data as input. The output is a cross-modality affinity matrix shown on the right. Each element in this matrix represents the correlations between audio and video for each input signal. ### Training with SoftInfoNCE Following [12, 17], we use a 9-layered 2D ResNet [7] as the audio encoder and R(2+1)D-18 [19] as the video encoder. The projector is a two-layered multilayer perceptron (MLP). The training process is end-to-end, without using a two-stage setting as in previous works [13, 14]. Given a batch of audio-visual pairs \(\mathcal{A}\) and \(\mathcal{V}\), where both \(\mathcal{A}\) and \(\mathcal{V}\) contain \(N\) samples, we denote the speed-up augmentation set as \(\mathcal{T}\), from which we can sample augmentations \(\tau_{1},\tau_{2}\sim p(\mathcal{T})\). The encoders and projectors are trained with the following SoftInfoNCE loss: \[\mathcal{L}(f,g,\mathcal{A},\mathcal{V})=\mathbb{E}_{\tau_{1},\tau_{2}\sim p(\mathcal{T})}\left[\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}\lambda(a_{i}^{\tau_{1}},v_{j}^{\tau_{2}})\cdot L(g(a_{i}^{\tau_{1}}),f(v_{j}^{\tau_{2}}))\right] \tag{3}\] where \(L(\cdot,\cdot)\) is the contrastive InfoNCE loss function as illustrated in Eq. 1, \(a_{i}\in\mathcal{A}\), and \(v_{j}\in\mathcal{V}\). The cross-modality attention \(\lambda(\cdot,\cdot)\) takes the audio and video signals as input and measures their correlations. The output correlation value consequently reweighs the contrastive loss during the training process. ## 3 Experiments **(1)** We improve performances on the UCF101 and HMDB51 datasets by large margins of 9.2% and 14.3%. This demonstrates that our proposed co-augmentation method enlarges the diversity of the training views and substantially benefits contrastive learning. **(2)** Our approach demonstrates great scalability in terms of dataset size. When pre-trained on a large dataset, K400, our approach exceeds the state-of-the-art audio-visual representation learning approach GDT [17] by a large margin, especially on the HMDB51 dataset, where we outperform GDT by 3.8%. Note that GDT applies hierarchical data augmentations while we only use a single speed augmentation. **(3)** Our approach also demonstrates scalability in terms of resolution. We can further improve the performances by using a larger input size. **Audio-Video Retrieval.** To further evaluate the cross-modality ability of the proposed approach, we propose to use an audio-video retrieval task on K-Sounds [2]. We compare to audio-visual contrastive learning with the vanilla InfoNCE loss and the current state-of-the-art GDT [17] in Table 3.
We show that our approach achieves the best performance on both the audio-to-video and the video-to-audio retrieval tasks. It is interesting to note that pre-training on K-Sounds achieves better top-1 retrieval performance, but its ability to generalize to more audio-visual pairs is limited: it performs worse than pre-training on the larger K400 dataset when retrieving the top-5, top-10, and top-20 nearest neighbors. ## 4 Conclusions We proposed a speed co-augmentation method for unsupervised audio-visual pre-training. We observed that speed co-augmentation leads to a partial relationship between audio-visual pairs. To account for this, we proposed a cross-affinity module, which adaptively models the cross-modality partial relationship and further improves performance. Extensive experimental results show that our approach significantly improves the learned representations.
2309.07937
Voxtlm: unified decoder-only models for consolidating speech recognition/synthesis and speech/text continuation tasks
We propose a decoder-only language model, VoxtLM, that can perform four tasks: speech recognition, speech synthesis, text generation, and speech continuation. VoxtLM integrates text vocabulary with discrete speech tokens from self-supervised speech features and uses special tokens to enable multitask learning. Compared to a single-task model, VoxtLM exhibits a significant improvement in speech synthesis, with improvements in both speech intelligibility from 28.9 to 5.6 and objective quality from 2.68 to 3.90. VoxtLM also improves speech generation and speech recognition performance over the single-task counterpart. Further, VoxtLM is trained with publicly available data and training recipes and model checkpoints are open-sourced to make fully reproducible work.
Soumi Maiti, Yifan Peng, Shukjae Choi, Jee-weon Jung, Xuankai Chang, Shinji Watanabe
2023-09-14T03:13:18Z
http://arxiv.org/abs/2309.07937v3
VoxtLM: Unified decoder-only models for consolidating speech recognition/synthesis and speech/text continuation tasks ###### Abstract We propose a decoder-only language model, _VoxtLM_, that can perform four tasks: speech recognition, speech synthesis, text generation, and speech continuation. VoxtLM integrates a text vocabulary with discrete speech tokens from self-supervised speech features and uses special tokens to enable multitask learning. Compared to a single-task model, VoxtLM exhibits a significant improvement in speech synthesis, with improvements in both speech intelligibility from 28.9 to 5.6 and objective quality from 2.68 to 3.90. VoxtLM also improves speech generation and speech recognition performance over the single-task counterpart. VoxtLM is trained with publicly available data, and the training recipes and model checkpoints will be open-sourced to make the work fully reproducible. Soumi Maiti\({}^{1}\), Yifan Peng\({}^{1}\), Shukjae Choi\({}^{2}\), Jee-weon Jung\({}^{1}\), Xuankai Chang\({}^{1}\), Shinji Watanabe\({}^{1}\)\({}^{1}\)Carnegie Mellon University, USA \({}^{2}\)42dot Inc., Republic of Korea Multitask, speech synthesis, speech recognition, spoken language model ## 1 Introduction In recent years, text language models (textLMs) have emerged as powerful generative models in natural language processing (NLP) [1, 2, 3]. These textLMs can accommodate multiple tasks within a single model, leading to improvements in performance across a variety of tasks. On the other hand, with advances in discrete speech representations, speech language models (speechLMs) [4, 5] have also been proposed. However, prior speechLMs focus on individual tasks, such as speech continuation or text-to-speech (TTS) [6, 7]. Our hypothesis is that by unifying diverse speech tasks into a generative language model (LM), we can potentially address multiple speech tasks using a single model with improved generalization thanks to multitask learning. Traditionally, speech applications such as automatic speech recognition (ASR) and text-to-speech (TTS) use encoder-decoder architectures [8, 9, 10]. These architectures consist of an encoder for input processing and a decoder for generating the output. For example, speech-to-text involves a speech encoder and a text decoder, whereas text-to-speech employs a text encoder and a speech decoder. Integrating task-specific and modality-specific encoder-decoder components complicates the incorporation of multiple tasks [11, 12]. In contrast, we can simplify multitask integration with a joint speech-text decoder-only model (depicted in Fig. 1). In this work, we investigate two main questions. Firstly, can we cast diverse speech tasks as language modeling? ASR and TTS are used as example speech tasks. Secondly, can we combine speech tasks in a joint speech-text language modeling framework? To this end, we introduce a novel LM framework _VoxtLM_ (**Voice-text Language Model**). VoxtLM combines multiple speech tasks within a single autoregressive decoder model. Specifically, we combine four tasks: speech recognition (speech-to-text), speech synthesis (text-to-speech), text generation (text-to-text), and speech generation (speech-to-speech). We create a _Voxt_ (voice + text) _vocabulary_ by merging self-supervised discrete speech tokens with the text vocabulary and incorporate sub-word modeling to efficiently process long sequences of speech. We show that VoxtLM can model both ASR and TTS as conditioned language modeling.
Moreover, combining the four tasks leads to improvements in speech generation, ASR, and TTS. The most significant improvement is observed in the TTS task, with gains in both intelligibility (28.9 to 5.6) and neural-predicted quality (2.68 to 3.90). Additionally, we demonstrate that initialization with a pretrained textLM and scaling up the model parameters help ASR. To ensure reproducibility, we use publicly available datasets and will open-source our training and inference procedures along with model checkpoints using the open-source toolkit ESPnet.1 TTS samples are also available.2 Footnote 1: [https://github.com/ESPnet/ESPnet](https://github.com/ESPnet/ESPnet) Footnote 2: [https://soumimasiti.github.io/icassp24_voxtlm/](https://soumimasiti.github.io/icassp24_voxtlm/) ## 2 Related Work **Discrete speech representations.** Speech signals can be represented as two types of discrete tokens: semantic tokens and acoustic tokens. Semantic tokens are quantized from self-supervised learning features (e.g., HuBERT [13], w2v-BERT [14]) through clustering, which mostly captures the linguistic content. Acoustic tokens are generated by audio codec models [15, 16]. They capture rich acoustic information which is suitable for high-quality speech synthesis, but they consist of multiple code streams and are thus difficult to model. In this work, we follow GSLM [4] to use semantic tokens derived from HuBERT. **Joint modeling of speech and text.** Several studies [11, 12, 17] propose to learn shared speech-text representations in a self-supervised manner. However, they employ separate encoders and decoders for different modalities. They also require additional losses, such as an alignment loss, to encourage cross-modal transfer between speech and text. Figure 1: ASR and TTS use an encoder-decoder architecture while VoxtLM is decoder-only. In VoxtLM, all parameters are shared between the speech and text modalities, compared to separate encoders/decoders for speech and text. Recent concurrent studies employ a single model for multiple speech and text conversion tasks [18, 19, 20], which are similar to our approach.3 SpeechGPT [20] uses a three-stage adaptation to combine audio generation with textLMs. PolyVoice [18] applies speechLM to speech-to-speech translation (S2ST) with three decoder-only LMs. VioLA [19] extends VALL-E [7] for ASR and S2ST. Among them, VioLA is the most related method to this work. However, VioLA does not incorporate speech or text continuation tasks and requires additional sequence modeling for speech representations, which makes it more complicated than our approach. Moreover, inspired by [22], we utilize the textually pre-trained OPT [21] for better initialization and leverage different speech tokens. In addition, in comparison to other works, our work is fully reproducible. Footnote 3: These works have only been published in a pre-print form and have not undergone the peer-review process. ## 3 Method Let \(Y=(y_{i}\in\mathcal{V}_{\text{txt}}|i=1,\cdots,t_{\text{txt}})\) be a text utterance from a vocabulary \(\mathcal{V}_{\text{txt}}\) with length \(t_{\text{txt}}\). The probability of \(Y\) can be expressed as \(p(Y)=\Pi_{i=1}^{t_{\text{txt}}}p(y_{i}|y_{1},\cdots,y_{i-1})\). Now, when dealing with a continuous speech signal, we can convert it into discrete speech tokens (dst), represented as \(D=(d_{i}\in\mathcal{V}_{\text{dst}}|i=1,\cdots,t_{\text{dst}})\), using a tokenizer. In this context, \(\mathcal{V}_{\text{dst}}\) is the vocabulary of discrete speech tokens.
These discrete speech tokens can be treated as spoken language within \(\mathcal{V}_{\text{dst}}\) and modeled in a manner similar to text. We combine text and speech in a new _Voxt vocabulary_ defined by \(\mathcal{V}_{\text{voxt}}=\mathcal{V}_{\text{txt}}\cup\mathcal{V}_{\text{dst}}\). Therefore, we can model the probability of both speech and text tokens as \(Z\), where \(Z=(z_{i}\in\mathcal{V}|i=1,\cdots,t)\). This probability is expressed as: \[p(Z)=\Pi_{i=1}^{t}p(z_{i}|z_{1},\cdots,z_{i-1}). \tag{1}\] Here, \(Z\) can represent discrete speech tokens \(D\) (\(\mathcal{V}=\mathcal{V}_{\text{dst}}\)) or text tokens \(Y\) (\(\mathcal{V}=\mathcal{V}_{\text{txt}}\)), or various combinations of \(Y\) and \(D\). ### VoxtLM Fig. 2 illustrates the model's overall architecture. The input of VoxtLM can be both speech and text within the \(\mathcal{V}_{\text{voxt}}\) vocabulary. To process speech, we use two additional modules to convert between the continuous and discrete speech domains. The speech tokenizer maps \(X\) to \(D\), while the speech token decoder maps the generated \(\hat{D}\) back to \(\hat{X}\). Similar to [4], our speech tokenizer uses \(k\)-means clustering to derive discrete features from the pretrained HuBERT [13]. It is worth noting that selecting a small \(k\) value may capture linguistic information effectively, but might fall short in representing other acoustic aspects particularly crucial for speech synthesis. We experiment with different \(k\) to assess the impact. Furthermore, within the \(\mathcal{V}_{\text{voxt}}\) vocabulary, we apply subword modeling [23, 24, 25] to replace frequent token patterns with subword units. Such a subword modeling technique is used to include more contextual information in text [1] or to reduce the long sequence length of speech [26]. #### 3.1.1 Data format We use special tokens to guide the model in performing various tasks. Four such tokens are used: \(\langle\)start-text\(\rangle\) and \(\langle\)start-speech\(\rangle\) indicate the beginning of text or speech conditioning in the language model, while \(\langle\)generate-speech\(\rangle\) and \(\langle\)generate-text\(\rangle\) instruct the model whether to generate speech or text. Table 1 shows examples of the _Voxt data format_ for various tasks during training. Ideally, we can extend to more tasks with additional task-specific tokens. #### 3.1.2 Training VoxtLM consists of an embedding layer and a series of transformer [27] decoder layers. The embedding layer maps the input \(Z\) (in Eq. 1) into an \(F\)-dimensional feature space, \(E=(e_{i}\in\mathbb{R}^{F}|i=1,\cdots,t)\), using an embedding table of size \(|\mathcal{V}_{\text{voxt}}|\times F\). We use \(L\) transformer decoder layers with \(H\) attention heads. The model's output includes a linear layer followed by a softmax, generating a probability distribution over the tokens in \(\mathcal{V}_{\text{voxt}}\). VoxtLM is trained as an autoregressive language model. In training, teacher forcing is used for the preceding tokens. Given \(Z\), at each timestep \(i\) the predicted distribution is \(\hat{p}_{i}=\text{VoxtLM}(z_{1},\cdots,z_{i-1})\). Given the true probability distribution \(p_{i}\), the loss is calculated using the cross-entropy \(L_{\text{CE}}(p_{i},\hat{p}_{i})=-\sum_{c=1}^{|\mathcal{V}_{\text{voxt}}|}p_{i}(c)\log\hat{p}_{i}(c)\).
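As a rough illustration of the Voxt data format in Table 1 and the autoregressive training objective above, the following is a minimal sketch; the token ids, the toy vocabulary size, and the helper names are hypothetical and are not taken from the released recipes.

```python
import torch
import torch.nn.functional as F

# Hypothetical ids for the four special tokens of Sec. 3.1.1 inside V_voxt.
SPECIAL = {"<start-speech>": 0, "<start-text>": 1,
           "<generate-speech>": 2, "<generate-text>": 3}

def build_asr_example(speech_tokens, text_tokens):
    """ASR row of Table 1: <start-speech>, D, <generate-text>, Y."""
    return ([SPECIAL["<start-speech>"]] + list(speech_tokens)
            + [SPECIAL["<generate-text>"]] + list(text_tokens))

def lm_cross_entropy(logits, tokens):
    """Autoregressive cross-entropy: predict token i from tokens < i."""
    # logits: (T, |V_voxt|) computed with teacher forcing; the target for
    # each position is simply the next token in the sequence.
    return F.cross_entropy(logits[:-1], tokens[1:])

# usage sketch: toy discrete speech tokens D and text sub-word ids Y
seq = torch.tensor(build_asr_example([17, 23, 23, 41], [305, 412]))
vocab_size = 1000                            # |V_voxt|, illustrative only
logits = torch.randn(len(seq), vocab_size)   # stand-in for VoxtLM outputs
loss = lm_cross_entropy(logits, seq)
```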
**Initialization with pretrained textLM.** Previous work [22] shows that, in speechLMs, initializing with a pretrained textLM achieves better performance and faster convergence. Motivated by this approach, we use the pretrained textLM OPT [21] to initialize the VoxtLM weights and learn the embedding table from scratch. The same model configuration is used as the pretrained model except for \(|\mathcal{V}_{\text{voxt}}|\). OPT is used due to its training on publicly available data and the availability of smaller pretrained models. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline & speechLM & textLM & ASR & TTS \\ \hline \(\mathcal{D}_{\text{Bal}}\) & 300K & 300K & 281K & 404K \\ \(\mathcal{D}_{\text{3M}}\) & 3M & 3M & 281K & 404K \\ \(\mathcal{D}_{\text{Set}}\) & 12M & 40M & 281K & 404K \\ \(\mathcal{D}_{\text{Sat}}\) & 12M & 40M & 11M & 404K \\ \hline \hline \end{tabular} \end{table} Table 2: Number of utterances used in training of different VoxtLM setups. Bal: balanced data for the four tasks; 3M: uses the same number (3M) of text-only and speech-only utterances, a balanced setup for total text and total speech data. \begin{table} \begin{tabular}{l|l|l} \hline \hline Task & Training & Inference (condition \(\to\) prediction) \\ \hline TextLM & \(\langle\)generate-text\(\rangle\), \(Y\) & \(\langle\)generate-text\(\rangle\), \(Y^{\text{test}}\) \(\to\) \(\hat{Y}\) \\ SpeechLM & \(\langle\)generate-speech\(\rangle\), \(D\) & \(\langle\)generate-speech\(\rangle\), \(D^{\text{test}}\) \(\to\) \(\hat{D}\) \\ ASR & \(\langle\)start-speech\(\rangle\), \(D\), \(\langle\)generate-text\(\rangle\), \(Y\) & \(\langle\)start-speech\(\rangle\), \(D^{\text{test}}\), \(\langle\)generate-text\(\rangle\) \(\to\) \(\hat{Y}\) \\ TTS & \(\langle\)start-text\(\rangle\), \(Y\), \(\langle\)generate-speech\(\rangle\), \(D\) & \(\langle\)start-text\(\rangle\), \(Y^{\text{test}}\), \(\langle\)generate-speech\(\rangle\) \(\to\) \(\hat{D}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Examples of the Voxt data format with special tokens for each task during training and inference. #### 3.1.3 Inference Given the trained VoxtLM model, we use beam search in the inference phase. The prediction from VoxtLM is expressed as: \[\text{prediction}\gets p(\cdot|\text{condition}). \tag{2}\] For TTS, the provided condition is the test text utterance \(Y^{\text{test}}\) and speech tokens \(\hat{D}\) are predicted. In the case of ASR, the condition is the test speech tokens \(D^{\text{test}}\) and the prediction is the recognized text \(\hat{Y}\). For speech continuation, the condition involves prefix speech tokens \(D^{\text{test}}\) and the prediction is the continued speech tokens \(\hat{D}\). For text continuation, the condition is the text \(Y^{\text{test}}\) and the prediction is the continued text \(\hat{Y}\). The inference conditions are shown in Table 1. **Speech token decoder.** The speech token decoder takes both \(\hat{D}\) and a speaker embedding \(s_{\text{spk}}\in\mathbb{R}^{N}\) of dimensionality \(N\) as inputs and produces \(\hat{X}\). We use HiFiGAN [28] as the architecture and the x-vector [29] as the speaker embedding vector. ### Evaluation Metrics We use the following evaluation metrics. * For speech and text generation, we use sWUGGY and sBLIMP on the dev dataset as proposed in [30], and perplexity (PPL).4 Only text is used for text generation evaluation and we do not use the corresponding audio.
Footnote 4: PPL is only compared between models with the same vocabulary size. * In the case of ASR, we use the WER. * Regarding TTS, we measure intelligibility with CER and quality using the neural-predicted MOS quality score MOSNet [31, 32]. ## 4 Experiments **Dataset.** We use a combination of speech-only, text-only, and paired speech-text datasets from public corpora. * _Speech-only data:_ we use LibriLight (LL) [33] with 60K hours of audiobook speech from 7K speakers (12M utterances). * _Text-only data:_ we use the Librispeech (LS) [34] external textLM dataset (40M text utterances). * _Speech-text paired data:_ * For ASR, we mainly use Librispeech [34] with 960 hours of data (281K utterances). For an additional supervised data experiment, we use English Multilingual Librispeech (MLS) [35] with 44K hours of data from 5490 speakers (11M utterances). * For TTS, we use LibriTTS (LT) and VCTK (VC). LT [36] contains 580 hours of audiobook data from 2456 speakers. VC [37] contains 44 hours of studio recorded data from 109 speakers (404K utterances). We standardized the data by downsampling speech to a 16kHz rate, converting text to lowercase, and removing punctuation. We use separate test/dev sets for each task. For textLM and speechLM, we use the test set from LS and the dev sets from sWUGGY and sBLIMP, the _text_ part for textLM and the _speech_ counterpart for speechLM. For ASR, we use the _speech-text_ test sets from LS test-clean and test-other and report test-clean/test-other results separately. In TTS, for computational efficiency, we create a test set of 100 utterances from two speakers from the LT test-clean set. The test speakers are chosen via random sampling (specifically, speaker ids 1089 and 1284). **Experimental setup.** To train the sub-word model, we use paired text-speech from the ASR and TTS datasets. We experiment with three \(k\) values (introduced in Sec. 3.1), 50, 200, and 1000, denoted as _VoxtLM-k_. We also vary the BPE sizes, setting them at 2K, 5K, and 10K for \(k\) values 50, 200 and 1000, respectively. We use three configurations, small (\(L\)=12, \(F\)=768, \(H\)=12), medium (\(L\)=24, \(F\)=1024, \(H\)=16), and large (\(L\)=24, \(F\)=2048, \(H\)=32), with \(L\), \(H\) and \(F\) detailed in Sec. 3.1.2. We use 4 A100 GPUs for training small/medium and 8 A100 GPUs for large, with the Adam optimizer [38] and a warmup learning rate schedule. The training data size varies considerably between different tasks. For example, the paired data for ASR and TTS are \(100\times\) smaller than the text-only data and \(40\times\) smaller than the speech-only data. We can assume that achieving optimal performance across all tasks requires balanced data for each of them. It is also worth noting that text-only data is more readily available compared to speech-only and paired data. Nonetheless, to assess the effect of different dataset sizes for the tasks, we consider balanced and unbalanced data sets for training, as summarized in Table 2. ### Results **Single vs multitask.** We compare a multitask model and _four_ single-task models using VoxtLM-k50. The single-task LMs are trained separately on all data for each task (ASR, TTS, speechLM, and textLM) and are reported in the first row of Table 3; each column corresponds to a separate single-task model. Compared to the single-task models, VoxtLM shows competitive results for all four tasks, although the best model differs. For textLM, \(\mathcal{D}_{\text{Set}}\) exhibits a higher sWUGGY but a lower sBLIMP score.
In speechLM, \(\mathcal{D}_{\text{3M}}\) has the best scores in both sWUGGY and sBLIMP, followed by \(\mathcal{D}_{\text{Set}}\). In TTS, all multitask models show improvement compared to the single-task model. ASR reports improvement in \(\mathcal{D}_{\text{Bal}}\). We note that ASR is most affected in the unbalanced case, likely due to the lower ASR data ratio to textLM/speechLM (\(100\times\)/\(40\times\) less). The least degradation in ASR is seen with \(\mathcal{D}_{\text{3M}}\), where the ASR data ratio to textLM/speechLM is relatively better (\(10\times\) less). \begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \hline & \#params & TextLM PPL(\(\downarrow\)) & SpeechLM PPL(\(\downarrow\)) & ASR WER(\(\downarrow\)) & TTS CER(\(\downarrow\)) & MOSNet(\(\uparrow\)) \\ \hline w/o PT & 125M & 11.1 & 62.1 & 21.0 / 37.4 & **8.8** & 3.66 \\ w/ PT & 125M & **10.4** & **58.8** & **13.1 / 28.8** & 9.4 & **3.92** \\ \hline \hline \end{tabular} \end{table} Table 4: Experimental results comparing with and without initialization with a pretrained (PT) textLM for VoxtLM-k50 with \(\mathcal{D}_{\text{Set}}\). \begin{table} \begin{tabular}{l|c|c c c|c c c|c|c c} \hline \hline Source & \#params & \multicolumn{3}{c|}{**TextLM**} & \multicolumn{3}{c|}{**SpeechLM**} & **ASR** & \multicolumn{2}{c}{**TTS**} \\ & & PPL(\(\downarrow\)) & sWUGGY(\(\uparrow\)) & sBLIMP(\(\uparrow\)) & PPL(\(\downarrow\)) & sWUGGY(\(\uparrow\)) & sBLIMP(\(\uparrow\)) & WER(\(\downarrow\)) & CER(\(\downarrow\)) & MOSNet(\(\uparrow\)) \\ \hline \(\mathcal{D}_{\text{Set}}^{*}\) & 125M & 18.3 & 77.1 & **80.3** & 73.8 & 62.9 & 53.9 & 8.8 / 21.4 & 28.9 & 2.68 \\ \hline \(\mathcal{D}_{\text{Bal}}\) & 125M & 15.4 & 77.7 & 66.7 & 68.5 & 60.7 & 52.7 & **8.6 / 20.9** & **5.6** & 3.76 \\ \(\mathcal{D}_{\text{3M}}\) & 125M & 13.5 & 77.9 & 68.0 & 58.1 & **63.6** & **55.2** & 11.0 / 24.4 & 7.0 & **3.90** \\ \(\mathcal{D}_{\text{Set}}\) & 125M & 11.1 & **80.3** & 74.2 & 62.1 & 62.8 & 54.1 & 21.0 / 37.4 & 8.8 & 3.86 \\ \hline \hline \end{tabular} \end{table} Table 3: Experimental results comparing multitask VoxtLM against _four_ single-task VoxtLM models for textLM, speechLM, ASR and TTS. We use token size (\(k\)) 50 for all models. Single-task models are trained with all available data; for multitask VoxtLM we report the different training data cases of Table 2. For ASR we report test-clean/test-other results. \(\mathcal{D}_{\text{Set}}^{*}\): four single-task models, whereas the other rows depict the multitask model. **Initialization with pretrained textLM.** We compare VoxtLM-\(k\)50 trained on \(\mathcal{D}_{\text{Set}}\) with and without initialization from OPT and report the results in Table 4. Initialization improves the performance of three tasks: textLM, speechLM, and ASR. For TTS, a slight degradation in CER is observed, whereas the objective quality improves. Notably, better initialization aids ASR performance in the unbalanced scenario (reducing the test-clean WER from 21.0 to 13.1). **Effect of token vocabulary size.** We compare \(k\)=50, 200, and 1000, as outlined in Table 5. Comparisons are made with \(\mathcal{D}_{\text{Bal}}\) and \(\mathcal{D}_{\text{Set}}\). For ASR and TTS, the \(k\)=50 performance is notably the worst. For speechLM with \(\mathcal{D}_{\text{Set}}\), the best sWUGGY and sBLIMP scores are observed with the \(k\)=200 model.
TextLM, as expected, shows no significant pattern with varying \(k\). **Scalability.** Next, we explore whether model size can help with data balancing by comparing the medium and large models with \(k\)=200, presented in Table 6. For textLM, speechLM, and ASR, all metrics show improvement with the larger model. TTS shows a very small degradation in intelligibility (0.4) and quality (0.03). To mitigate the smaller ratio of paired data, we incorporate more supervised data for ASR in \(\mathcal{D}_{\text{Set}}\). We compare this with both \(k\)=200 and \(k\)=1000 and observe improvement in the ASR task. **Comparison with baselines.** Furthermore, we conduct comparisons with well-established models in TTS, ASR, and speechLM. It is important to note that the comparison models are not fully comparable due to differences in training data, strategies, and architecture. For speechLM, we compare with GSLM [4] and AudioLM [5]. For TTS, we compare with VITS [39], using a VITS model pretrained on LibriTTS. For ASR, we compare two models using the state-of-the-art architecture E-Branchformer [40]. One model is spectrogram-based (ASR-Fbank). The second one is based on discrete speech tokens (dst-ASR-Hubert), trained following the procedure in [26] with the same speech tokenizer as VoxtLM-\(k\)1000. For speechLM (Table 7), the sBLIMP score of VoxtLM is higher than that reported for GSLM-\(k\)200. However, AudioLM has a higher score in both sWUGGY and sBLIMP. This suggests potential for further improvement in performance with hierarchical tokens and multistage training. For ASR, we observe a lower WER compared to dst-ASR-Hubert. Compared to ASR-Fbank, the WER is higher (2.7 against 2.2 on test-clean). In the case of TTS (Table 8), compared to VITS, VoxtLM reports better intelligibility (CER 7.7 to 2.6) and quality (MOSNet 4.20 to 4.30). Though VoxtLM was trained with a larger dataset than VITS, it is noteworthy that for TTS models, having diverse training data with more noise and more speakers (as is the case with our datasets except VCTK) tends to degrade rather than improve quality. To summarize, we show that both ASR and TTS can be modeled as language modeling tasks. Using special tokens, we can combine ASR and TTS within a joint speech-text language modeling framework. Although the four tasks are quite different, combining them leads to improvement. ## 5 Conclusion The integration of speech and text tasks within a unified language modeling framework presents a promising avenue for advancing speech processing. We present a special token-based approach to combine four speech and text tasks: speech recognition, speech synthesis, text generation, and speech generation. Our results demonstrate that by integrating different speech tasks into one generative model, we can improve the performance of the tasks. In particular, TTS shows impressive performance compared to the state-of-the-art VITS. We will expand this work to include more speech tasks in the future.
\begin{table} \begin{tabular}{l|c|c|c c|c c|c c|c c|c} \hline \hline Name & Source & \# params & \multicolumn{3}{c|}{**TextLM**} & \multicolumn{3}{c|}{**SpeechLM**} & \multicolumn{1}{c|}{**ASR**} & \multicolumn{1}{c}{**TTS**} \\ & & PPL(\(\downarrow\)) & sWUGGY(\(\uparrow\)) & sBLIM(\(\uparrow\)) & PPL(\(\downarrow\)) & sWUGGY(\(\uparrow\)) & sBLIMP(\(\uparrow\)) & WER(\(\downarrow\)) & CER(\(\downarrow\)) & MOSNet(\(\uparrow\)) \\ \hline VoxtLM-\(k\)50 & \(\mathcal{D}_{\text{Bal}}\) & 125M & 15.4 & **77.7** & 66.7 & 68.5 & 60.7 & **52.7** & 8.6 / 20.9 & 5.6 & 3.76 \\ VoxtLM-\(k\)200 & \(\mathcal{D}_{\text{Bal}}\) & 125M & 21.6 & 77.3 & **67.9** & 58.6 & **61.6** & 52.1 & 6.1 / 15.4 & 3.2 & **4.36** \\ VoxtLM-\(k\)1000 & \(\mathcal{D}_{\text{Bal}}\) & 125M & 26.3 & 76.4 & 67.6 & 38.7 & 60.7 & 52.5 & **5.4 / 14.5** & **2.6** & 4.30 \\ \hline VoxtLM-\(k\)50\({}^{\dagger}\) & \(\mathcal{D}_{\text{Sat}}\) & 350M & 10.3 & **81.0** & 75.1 & 68.2 & 62.7 & 53.8 & 13.5 / 27.2 & 6.6 & 3.91 \\ VoxtLM-\(k\)200\({}^{\dagger}\) & \(\mathcal{D}_{\text{Sat}}\) & 350M & 12.7 & 80.2 & **78.8** & 45.7 & **65.5** & **55.3** & **6.5 / 17.6** & **3.5** & **4.36** \\ \hline \hline \end{tabular} \end{table} Table 6: Experimental results comparing larger model size and more supervised data for VoxtLM. \({}^{\dagger}\) denotes initialization with OPT. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline Name & Source & \# params & \multicolumn{3}{c|}{**SpeechLM**} & \multicolumn{1}{c}{**ASR**} \\ & & sWUGGY(\(\uparrow\)) & sBLIM(\(\uparrow\)) & WER(\(\downarrow\)) \\ \hline GSLM-\(k\)50 [4] & LL & 172M & - & 55.9 & - / - \\ GSLM-\(k\)200 [4] & LL & 172M & - & 53.0 & - / - \\ AudioLM [5] & LL & 900M & 71.5 & 64.7 & - / - \\ ASR-Fbank & LS & 149M & - & - & 2.2 / 4.6 \\ dst-ASR-Hubert & LS & 39M & - & - & 4.2 / 10.8 \\ \hline VoxtLM-\(k\)200\({}^{\dagger}\) & \(\mathcal{D}_{\text{Set}}\) & 1.3B & 66.1 & 56.7 & 4.6 / 12.1 \\ VoxtLM-\(k\)200\({}^{\dagger}\) & \(\mathcal{D}_{\text{Sat}}\) & 1.3B & 65.6 & 57.1 & 2.7 / 6.5 \\ \hline \hline \end{tabular} \end{table} Table 7: SpeechLM and ASR results: Comparison with the state-of-the-art models with VoxtLM. \({}^{\dagger}\) denotes initialization with OPT. 
\begin{table} \begin{tabular}{l|c|c|c|c c|c c|c c|c c} \hline \hline Name & Source & \# params & \multicolumn{3}{c|}{**TextLM**} & \multicolumn{3}{c|}{**SpeechLM**} & \multicolumn{1}{c}{**ASR**} & \multicolumn{1}{c}{**TTS**} \\ & & PPL(\(\downarrow\)) & sWUGGY(\(\uparrow\)) & sBLIM(\(\uparrow\)) & PPL(\(\downarrow\)) & sWUGGY(\(\uparrow\)) & sBLIMP(\(\uparrow\)) & WER(\(\downarrow\)) & CER(\(\downarrow\)) & MOSNet(\(\uparrow\)) \\ \hline VITS [39] & LT & 97M & 7.7 & 4.20 & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline VoxtLM-\(k\)200\({}^{\dagger}\) & \(\mathcal{D}_{\text{Set}}\) & 350M & 3.5 & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ VoxtLM-\(k\)1000 & \(\mathcal{D}_{\text{Bal}}\) & 125M & 26.3 & 76.4 & 67.6 & 38.7 & 60.7 & 52.5 & **5.4 / 14.5** & **2.6** & 4.30 \\ \hline VoxtLM-\(k\)50\({}^{\dagger}\) & \(\mathcal{D}_{\text{Sat}}\) & 350M & 10.3 & **81.0** & 75.1 & 68.2 & 62.7 & 53.8 & 13.5 / 27.2 & 6.6 & 3.91 \\ VoxtLM-\(k\)200\({}^{\dagger}\) & \(\mathcal{D}_{\text{Sat}}\) & 350M & 12.7 & 80.2 & **78.8** & 45.7 & **65.5** & **55.3** & **6.5 / 17.6** & **3.5** & **4.36** \\ \hline \hline \end{tabular} \end{table} Table 5: Experimental results comparing speech token size \(k\) for VoxtLM. We compare the two conditions: \(\mathcal{D}_{\text{Bal}}\) and \(\mathcal{D}_
2307.16428
Decay estimates for Beam equations with potentials in dimension three
This paper is devoted to studying time decay estimates of the solution of the Beam equation (a higher order type wave equation) with a potential $$u_{t t}+\big(\Delta^2+V\big)u=0, \,\ u(0, x)=f(x),\ u_{t}(0, x)=g(x)$$ in dimension three, where $V$ is a real-valued and decaying potential on $\R^3$. Assuming that zero is a regular point of $H:= \Delta^2+V$, we first prove the following optimal time decay estimates of the solution operators \begin{equation*} \big\|\cos (t\sqrt{H})P_{ac}(H)\big\|_{L^{1} \rightarrow L^{\infty}} \lesssim|t|^{-\frac{3}{2}}\ \ \hbox{and} \ \ \Big\|\frac{\sin(t\sqrt{H})}{\sqrt{H}} P_{ac}(H)\Big\|_{L^{1} \rightarrow L^{\infty}} \lesssim|t|^{-\frac{1}{2}}. \end{equation*} Moreover, if zero is a resonance of $H$, then the time decay of the solution operators above is also considered. It is noticed that the first kind resonance does not affect the decay rates of the propagator operators $\cos(t\sqrt{H})$ and $\frac{\sin(t\sqrt{H})}{\sqrt{H}}$, but their decay will be dramatically changed for the second and third resonance types.
Miao Chen, Ping Li, Avy Soffer, Xiaohua Yao
2023-07-31T06:24:50Z
http://arxiv.org/abs/2307.16428v3
# Decay estimates for beam equations with potential in dimension three ###### Abstract. This paper is devoted to studying time decay estimates of the solution of the Beam equation (a higher order type wave equation) with a potential \[u_{tt}+\big{(}\Delta^{2}+V\big{)}u=0,\ \ u(0,x)=f(x),\ u_{t}(0,x)=g(x)\] in dimension three, where \(V\) is a real-valued and decaying potential on \(\mathbb{R}^{3}\). Assuming that zero is a regular point of \(H:=\Delta^{2}+V\), we first prove the following optimal time decay estimates of the solution operators \[\big{\|}\cos(t\sqrt{H})P_{ac}(H)\big{\|}_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{3}{2}}\ \ \text{and}\ \ \big{\|}\frac{\sin(t\sqrt{H})}{\sqrt{H}}P_{ac}(H)\big{\|}_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{1}{2}}.\] Moreover, if zero is a resonance of \(H\), then the time decay of the solution operators above is also considered. It is noticed that the first kind resonance does not affect the decay rates of the propagator operators \(\cos(t\sqrt{H})\) and \(\frac{\sin(t\sqrt{H})}{\sqrt{H}}\), but their decay will be dramatically changed for the second and third resonance types. Key words and phrases:Higher order wave equations (Beam); Asymptotic expansions; Decay estimates; Fourth order Schrodinger operator 2000 Mathematics Subject Classification: 58J50, 42B15, 35P15, 42B20, 47F05 ## 1. Introduction and main results ### Introduction In this paper we consider the following Beam equation (a higher order type wave equation) with a real-valued decaying potential in dimension three: \[\begin{cases}u_{tt}+\big{(}\Delta^{2}+V(x)\big{)}u=0,\ \ \ (t,x)\in\mathbb{R}\times\mathbb{R}^{3},\\ u(0,x)=f(x),\ \ u_{t}(0,x)=g(x).\end{cases} \tag{1.1}\] Let \(H:=\Delta^{2}+V\) and \(|V(x)|\lesssim\langle x\rangle^{-\beta}\) for some \(\beta>0\). Then \(H\) is self-adjoint on \(L^{2}\), and the solution to equation (1.1) is given by the formula \[u(t,x)=\cos(t\sqrt{H})f(x)+\frac{\sin(t\sqrt{H})}{\sqrt{H}}g(x). \tag{1.2}\] The expressions above do not depend on the branch chosen for \(\sqrt{H}\), so they are well defined even if \(H\) is not positive. We will study the decay estimates of the propagator operators \(\cos(t\sqrt{H})\) and \(\frac{\sin(t\sqrt{H})}{\sqrt{H}}\). In the free case, i.e. \(V=0\) and \(\sqrt{H}=-\Delta\), if the initial data \((f,g)\in L^{1}(\mathbb{R}^{3})\times L^{1}(\mathbb{R}^{3})\), then the solution operators of the equation (1.1) satisfy the following optimal time decay estimates: \[\big{\|}\cos(t\Delta)f\big{\|}_{L^{\infty}(\mathbb{R}^{3})}\lesssim|t|^{-\frac{3}{2}}\|f\|_{L^{1}(\mathbb{R}^{3})}, \tag{1.3}\] and \[\Big{\|}\frac{\sin(t\Delta)}{\Delta}g\Big{\|}_{L^{\infty}(\mathbb{R}^{3})}\lesssim|t|^{-\frac{1}{2}}\|g\|_{L^{1}(\mathbb{R}^{3})}, \tag{1.4}\] see Theorem 1.3 below, also see Theorem 7 in [8]. When \(V\neq 0\), the decay estimates of the solution operators of the equation (1.1) are affected by the spectrum of \(H\), which in turn depends on the conditions imposed on the potential \(V\). It is well known that the spectrum of \(H\) consists of negative eigenvalues \(\{\lambda_{1}\leq\lambda_{2}\leq\cdots<0\}\) and the continuous spectrum \([0,\infty)\), provided that the potential \(V\) possesses a certain decay rate, see e.g. [1, 44]. If \(H\) has no positive eigenvalues embedded in the continuous spectrum, then \((0,\infty)\) is purely absolutely continuous spectrum.
In this case, assume that \(\lambda_{j}(j\geq 1)\) be the negative eigenvalues of the spectrum of \(H\) (the counting multiplicity of \(\lambda_{j}\)) and \(H\phi_{j}=\lambda_{j}\phi_{j}(j\geq 1)\) for \(\phi_{j}\in L^{2}(\mathbb{R}^{3})\), denotes by \(P_{ac}(H)\) the projection onto the absolutely continuous spectrum space of \(H\). Then the solutions to the equation (1.1) can be written as \[u(t,x):=u_{d}(t,x)+u_{c}(t,x),\] where \[u_{d}(t,x)=\sum_{j}\cosh(t\sqrt{-\lambda_{j}})(f,\phi_{j})\phi_{j}(x)+\frac{ \sinh(t\sqrt{-\lambda_{j}})}{\sqrt{-\lambda_{j}}}(g,\phi_{j})\phi_{j}(x), \tag{1.5}\] \[u_{c}(t,x)=\cos(t\sqrt{H})P_{ac}(H)f(x)+\frac{\sin(t\sqrt{H})}{\sqrt{H}}P_{ac} (H)g(x). \tag{1.6}\] Obviously, the negative eigenvalues of the spectrum of \(H\) cause exponential growth of \(u_{d}\) defined in (1.5) as \(t\) becomes large. Hence, we need to project away from the part of eigenvalues, and focus on decay estimates for the continuous part \(u_{c}(t,x)\). In particular, we notice that the absence of positive eigenvalue of \(H\) has been an indispensable assumption for dispersive estimates, see Subsection 1.3 below for more comments on the positive eigenvalues of \(H\). In this paper, we are devoted to establishing the time decay bounds of the solution (1.6) with decaying assumptions on the potential \(V\) in dimension three. In order to establish the dispersive bounds of the solution operators, we first need to deduce the asymptotic expansions of the resolvent \(R_{V}(\lambda^{4})\) for \(\lambda\) near zero in the presence of resonances or eigenvalue, then using the Stone's formula, Littlewood-Paley method and oscillatory integral theory to establish the desired time decay bounds. ### Main results We use the notation \(a\pm:=a\pm\epsilon\) for some small but fixed \(\epsilon>0\). For \(a,b\in\mathbb{R}^{+}\), \(a\lesssim b\) means that there exists some constant \(c>0\) such that \(a\leq cb\). We first state one of the main results for the case that zero is a regular point of \(H\) ( i.e. \(H\) has neither zero eigenvalue nor zero resonance). **Theorem 1.1**.: _Assume that \(|V(x)|\lesssim(1+|x|)^{-\beta}(x\in\mathbb{R}^{3})\) with \(\beta>7\). Assume that \(H=\Delta^{2}+V(x)\) has no positive embedded eigenvalues. Let \(P_{ac}(H)\) denotes the projection onto the absolutely continuous spectrum space of \(H\). If zero is a regular point of \(H\), then_ \[\Big{\|}\cos(t\sqrt{H})P_{ac}(H)\Big{\|}_{L^{1}\to L^{\infty}}\lesssim\ |t|^{-\frac{3}{2}}, \tag{1.7}\] \[\left\|\frac{\sin(t\sqrt{H})}{\sqrt{H}}P_{ac}(H)\right\|_{L^{1}\to L^{\infty}} \lesssim\ |t|^{-\frac{1}{2}}. \tag{1.8}\] **Remark 1.2**.: _Some comments on Theorem 1.1 are given as follows:_ 1. _Clearly, when_ \(V=0\)_, in view of (_1.3_) and (_1.4_) (also see Proposition_ 2.2 _below), hence the estimates (_1.7_) and (_1.8_) are actually optimal when zero is a regular point._ 2. _By the spectrum theorem of self-adjoint operator, it immediately follows that_ \[\left\|\cos(t\sqrt{H})P_{ac}(H)f\right\|_{L^{2}}+\left\|\frac{\sin(t\sqrt{H})}{ t\sqrt{H}}P_{ac}(H)f\right\|_{L^{2}}\lesssim\|f\|_{L^{2}}.\] _Hence using (_1.7_), (_1.8_) and Riesz-Thorin interpolation theorem, we obtain that for_ \(1\leq p\leq 2\) _and_ \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\)_,_ \[\left\|\cos(t\sqrt{H})P_{ac}(H)f\right\|_{L^{p^{\prime}}}+\left\|\frac{\sin(t \sqrt{H})}{t\sqrt{H}}P_{ac}(H)f\right\|_{L^{p^{\prime}}}\lesssim|t|^{-3(\frac{ 1}{2}-\frac{1}{p})}\|f\|_{L^{p}}.\] (1.9) 3. 
_Recently, Goldberg and Green_ _[_22_]_ _have showed that the following wave operators_ \[W_{\pm}=W_{\pm}(H,\Delta^{2}):=s-\lim_{t\to\pm\infty}e^{itH}e^{-it\Delta^{2}}\] (1.10) _are bounded on_ \(L^{p}(\mathbb{R}^{3})\) _for_ \(1<p<\infty\) _if zero is regular point of_ \(H=\Delta^{2}+V\) _with_ \(|V(x)|\lesssim\langle x\rangle^{-12-}\)_. Note that_ \(W_{\pm}\) _satisfy the following intertwining identity:_ \[f(H)P_{ac}(H)=W_{\pm}f(\Delta^{2})W_{\pm}^{*},\] (1.11) _where_ \(f\) _is any Borel measurable function on_ \(\mathbb{R}\)_. By the_ \(L^{p}\)_-boundedness of_ \(W_{\pm}\) _and_ \(W_{\pm}^{*}\)_, one can reduce the_ \(L^{p}\)_-_\(L^{p^{\prime}}\) _estimates of_ \(f(H)P_{ac}(H)\) _to the same estimates of_ \(f(\Delta^{2})\) _by_ \[\left\|f(H)P_{ac}(H)\right\|_{L^{p}\to L^{p^{\prime}}}\leq\left\|W_{\pm} \right\|_{L^{p^{\prime}}\to L^{p^{\prime}}}\left\|f(\Delta^{2})\right\|_{L^{p} \to L^{p^{\prime}}}\left\|W_{\pm}^{*}\right\|_{L^{p}\to L^{p}}.\] (1.12) _Hence we obtain the same_ \(L^{p}\)_-_\(L^{p^{\prime}}\) _estimates (_1.9_) for any_ \(1<p\leq 2\)_. However, due to the absence of the_ \(L^{1}\) _and_ \(L^{\infty}\) _boundednesses of wave operators_ \(W_{\pm}\) _above in_ _[_22_]__, also see_ _[_43_]_ _for the counterexamples of the endpoint cases. Therefore the time decay estimates in Theorem_ 1.1 _can not be obtained by wave operator methods._ It is known that the time decay rate may decrease for many different types of dispersive estimates if zero resonance or eigenvalue arises, see for example [31, 16] for three-dimensional and [12] for two-dimensional Schrodinger operators. For fourth order Schrodinger operators, for example, see [15] in three dimension, [25] in dimension four and [38] in dimension two. For the classical wave equation with potential (1.18), it was shown that the first kind of zero resonance does not destroy the time decay rate, but the second kind of zero resonance decrease the decay rate, see e.g. [24] in dimension two and [14] in dimension four. Now we will present the decay estimates in the presence of zero resonance or eigenvalue. At first, we give the definitions of zero resonances. Let \(\langle x\rangle=(1+|x|^{2})^{1/2}\), we define the classical weighted space \(L^{2}_{-\sigma}(\mathbb{R}^{3})=\left\{f\ |\ \langle x\rangle^{-\sigma}f(x)\in L^{2}( \mathbb{R}^{3})\right\}\) for \(\sigma\in\mathbb{R}\). We say that zero is _the first kind resonance of \(H\)_ if there exists some nonzero \(\phi\in L^{2}_{-\sigma}(\mathbb{R}^{3})\) for some \(\sigma>\frac{3}{2}\) but no any nonzero \(\phi\in L^{2}_{-\sigma}(\mathbb{R}^{3})\) for any \(\sigma>\frac{1}{2}\) such that \(H\phi=0\) in the distributional sense; zero is _the second kind resonance of \(H\)_ if there exists some nonzero \(\phi\in L^{2}_{-\sigma}(\mathbb{R}^{3})\) for some \(\sigma>\frac{1}{2}\) but no any nonzero \(\phi\in L^{2}(\mathbb{R}^{3})\) such that \(H\phi=0\); zero is _the third kind resonance of \(H\) (i.e. eigenvalue)_ if there exists some nonzero \(\phi\in L^{2}(\mathbb{R}^{3})\) such that \(H\phi=0\). We remark that such resonance solutions of \(H\phi=0\) also can be characterized in the form of \(L^{p}\) spaces, see Theorem 5.3 below. **Theorem 1.3**.: _Assume that \(|V(x)|\lesssim(1+|x|)^{-\beta}(x\in\mathbb{R}^{3})\) with some \(\beta>0\). Assume that \(H=\Delta^{2}+V(x)\) has no positive embedded eigenvalues. Let \(P_{ac}(H)\) denote the projection onto the absolutely continuous spectrum space of \(H\). Then the following statements hold:_ 1. 
_If zero is the first kind resonance of_ \(H\) _and_ \(\beta>11\)_, then_ \[\left\|\cos(t\sqrt{H})P_{ac}(H)\right\|_{L^{1}\to L^{\infty}}\lesssim|t|^{- \frac{3}{2}},\] (1.13) \[\left\|\frac{\sin(t\sqrt{H})}{\sqrt{H}}P_{ac}(H)\right\|_{L^{1}\to L^{ \infty}}\lesssim|t|^{-\frac{1}{2}}.\] (1.14) 2. _If zero is the second kind resonance of_ \(H\) _and_ \(\beta>19\)_, or the third kind resonance of the_ \(H\) _and_ \(\beta>23\)_, then_ \[\left\|\cos(t\sqrt{H})P_{ac}(H)\right\|_{L^{1}\to L^{\infty}}\lesssim|t|^{- \frac{1}{2}}.\] (1.15) _Moreover, there exist two finite rank operators_ \(F_{t}\) _and_ \(G_{t}\) _satisfying_ \[\|F_{t}\|_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{1}{2}}\ \ \text{and}\ \ \|G_{t}\|_{L^{1}\to L^{\infty}}\lesssim|t|^{\frac{1}{2}},\] _such that_ \[\left\|\cos(t\sqrt{H})P_{ac}(H)-F_{t}\right\|_{L^{1}\to L^{\infty}}\lesssim|t| ^{-\frac{3}{2}},\] (1.16) \[\left\|\frac{\sin(t\sqrt{H})}{\sqrt{H}}P_{ac}(H)-G_{t}\right\|_{L^{1}\to L^{ \infty}}\lesssim|t|^{-\frac{1}{2}}.\] (1.17) **Remark 1.4**.: _Some remarks on Theorem 1.3 are given as follows:_ 1. _As shown in Theorem_ 1.3 _above, if zero is the first kind resonance, then the decay estimates of solution operators_ \(\cos(t\sqrt{H})\) _and_ \(\frac{\sin(t\sqrt{H})}{\sqrt{H}}\) _are identical with regular case, excerpt for requiring faster decaying rate of potentials_ \(V\)_._ 2. _The second and third kind resonance destroy the time decay rates of solution operators. In particular, the_ \(L^{1}\)_-_\(L^{\infty}\) _estimates of solution operator_ \(\frac{\sin(t\sqrt{H})}{\sqrt{H}}\) _becomes much worse and even leads to a positive growth_ \(O(|t|^{\frac{1}{2}})\) _as time_ \(t\) _goes to infinite if zero is the second or third resonance._ 3. _In addition, we note that the time decay rates of solution operators_ \(\cos(t\sqrt{H})\) _and_ \(\frac{\sin(t\sqrt{H})}{\sqrt{H}}\) _are optimal in high energy case when zero is a resonance of_ \(H\)_, see Theorem_ 4.1 _below._ ### Further remarks and backgrounds Here we make further comments on spectral assumptions, and record some known results on the second order wave equation with real-valued decaying potential. #### 1.3.1. Absence of embedded positive eigenvalues It was well-known by Kato's theorem in [33] that Schrodinger operator \(-\Delta+V\) has no positive eigenvalues if a bounded potential \(V(x)=o(|x|^{-1})\) as \(|x|\) goes to infinite, also see [46, 17, 29, 37] for more related results and references. However, such a criterion does not work again for fourth-order Schrodinger operator \(H=\Delta^{2}+V\). Indeed, for any dimension \(n\geq 1\), one can easily construct some \(V\in C_{0}^{\infty}(\mathbb{R}^{n})\) so that \(H\) has some embedding positive eigenvalues, see Section 7.1 in [18]. These results clearly indicate that the absence of positive eigenvalues for the fourth-order Schrodinger operator would be more subtle and unstable than second order cases with a bounded potential perturbation \(V\). It should be noticed that Feng et al. in [18] have proved that \(H=\Delta^{2}+V\) does not contain positive eigenvalues assuming that potential \(V\) is bounded and satisfies the repulsive condition (i.e. \((x\cdot\nabla)V\leq 0\)). Moreover, we also remark that for a general self-adjoint operator \(\mathcal{H}\) on \(L^{2}(\mathbf{R}^{n})\), even if \(\mathcal{H}\) has a simple embedded eigenvalue, Costin and Soffer [9] have proved that \(\mathcal{H}+\epsilon W\) can kick off the eigenvalue located in a small interval under generic small perturbation of potential. #### 1.3.2. 
The decay estimates of the fourth order Schrodinger operators Recently, there exist several works devoting to the time decay estimates of \(e^{-itH}\) generated by the fourth order Schrodinger operator \(H=\Delta^{2}+V\) with a decaying potential \(V\). Feng et al. [19] first proved that Kato-Jensen decay estimate of \(e^{-itH}\) is bounded by \((1+|t|)^{-n/4}\) for \(n\geq 5\), and \(L^{1}-L^{\infty}\) decay estimate is \(O(|t|^{-1/2})\) for \(n=3\) in the regular case. Somewhat later, Erdogan et al. [15] for \(n=3\) and Green et al. [25] for \(n=4\), proved that the \(L^{1}-L^{\infty}\) estimates of \(e^{-itH}\) is \(O(|t|^{-n/4})\) for \(n=3,4\) if zero is a regular point, and the time decay rate will be changed as zero energy resonance occurs. More recently, Soffer et al. [47] proved the \(L^{1}-L^{\infty}\) estimates of \(e^{-itH}\) is \(O(|t|^{-1/4})\) for dimension \(n=1\) whatever zero is a regular point or resonance. It should be emphasized that the different types of zero resonance do not change the optimal time decay rate of \(e^{-itH}\) in dimension one just at the cost of faster decay rate of potential. In [38] the authors studied the \(L^{1}-L^{\infty}\) estimates of \(e^{-itH}\) in dimension two. Furthermore, we also remark that there exist some interesting works [22, 13] on the \(L^{p}\) bounds of wave operators for \(n=3\) and \(n>2m\) in the regular case. Also see [42] for the \(L^{p}\)-boundedness of wave operators of fourth order Schrodinger operators in dimension one. #### 1.3.3. Background on wave equation with potential Below we make some comments about classical wave equation with potential: \[u_{tt}+(-\Delta+V)u=0,\ \ u(0,x)=f(x),\ \ u_{t}(0,x)=g(x),\,x\in\mathbb{R}^{n}. \tag{1.18}\] In the free case, when \(V=0\), it is well known that the \(W^{k+1,1}-L^{\infty}\) estimate of solution operator \(\cos(t\sqrt{H})\) is \(t^{-\frac{1}{2}}\), and the \(W^{k,1}-L^{\infty}\) estimate of solution operator \(\frac{\sin(t\sqrt{H})}{\sqrt{H}}\) is \(t^{-\frac{1}{2}}\), for \(k>\frac{1}{2}\) in dimension two, see e.g. [39, 2, 3]. In general, one must use Hardy or Besov spaces or BMO to obtain the sharp \(k=\frac{n-1}{2}\) smoothness bound in even dimensions. One can attain the bound for Sobolev spaces in dimension three where one can use the divergence theorem, see [49]. Beals et al. [3] first studied the \(W^{k,1}-L^{\infty}\) dispersive estimates of solution operators to equation (1.18) in dimension \(n\geq 3\). There is not much work on the \(W^{k,1}\to L^{\infty}\) dispersive estimates or'regularized' \(L^{1}\to L^{\infty}\) type estimates for \(n=2\), where negative powers of \(-\Delta+V\) are employed. Moulin [40] studied the high frequency estimates of this type. In dimension two, Kopylova [41] studied local estimates based on polynomially weighted \((-\Delta+V)^{s}\) spaces when zero energy is regular; one obtains the decay rate of \(t^{-1}(\log t)^{-2}\) for large \(t\). Dispersive bounds of solution operators for the wave equation in dimension three have been studied, see e.g. [36, 4]. For Strichartz estimates for wave equation, particularly in dimensions \(n\geq 3\), see e.g. [27, 5, 6, 26, 11, 2]. Dispersive estimates for the wave equation (1.18) with loss of derivatives in dimension three see for example [2, 3, 11, 26]. Some progress was made in other dimensions, see e.g. [35] in dimension two in weighted \(L^{2}\) sense, and [28] for dimensions \(4\leq n\leq 7\). These results all require the assumption that zero is regular. 
In [14] the authors established a low energy \(L^{1}-L^{\infty}\) dispersive bounds for solutions to the wave equation with potential in four spatial dimensions, and the loss of derivatives on the initial data in the dispersive estimates for the wave equation is a high energy phenomenon. Green [24] studied the \(L^{1}-L^{\infty}\) dispersive estimates of solution operator of wave equation with potential in dimension two, and time decay rate of solution operators were improved in weighted space if zero is a regular point of the spectrum of \(H\). ### The outline of the proof Here we briefly explain some ideas of the proofs of the theorems above. For simplicity, we only consider the regular case. In order to derive the decay estimates in Theorem 1.1 and Theorem 1.3, we will use the following Stone's formulas \[\cos(t\sqrt{H})P_{ac}(H)f(x)=\frac{2}{\pi i}\int_{0}^{\infty}\lambda^{3}\cos( t\lambda^{2})[R_{V}^{+}(\lambda^{4})-R_{V}^{-}(\lambda^{4})]f(x)d\lambda, \tag{1.19}\] \[\frac{\sin(t\sqrt{H})}{\sqrt{H}}P_{ac}(H)g(x)=\frac{2}{\pi i}\int_{0}^{\infty }\lambda\sin(t\lambda^{2})[R_{V}^{+}(\lambda^{4})-R_{V}^{-}(\lambda^{4})]g(x) d\lambda, \tag{1.20}\] where \(R_{V}^{\pm}(\lambda^{4})=(H-\lambda^{4}\mp i0)^{-1}\) are the boundary values of the perturbed resolvents. We need to study the expansions of the resolvent operators \(R_{V}^{\pm}(\lambda^{4})\) as \(\lambda\) is near zero by using perturbations of the following free resolvent \(R_{0}(z)\) (see e.g. [19]): \[R_{0}(z):=\big{(}(-\Delta)^{2}-z\big{)}^{-1}=\frac{1}{2z^{\frac{1}{2}}}\big{(} R(-\Delta;z^{\frac{1}{2}})-R(-\Delta;-z^{\frac{1}{2}})\big{)},\ z\in\mathbb{C}\setminus[0,\infty). \tag{1.21}\] Here the resolvent \(R(-\Delta;z^{\frac{1}{2}}):=(-\Delta-z^{\frac{1}{2}})^{-1}\) with \(\Im z^{\frac{1}{2}}>0\). For \(\lambda\in\mathbb{R}^{+}\), we define the limiting resolvent operators by \[R_{0}^{\pm}(\lambda):=R_{0}^{\pm}(\lambda\pm i0)=\lim_{\epsilon\to 0}\big{(} \Delta^{2}-(\lambda\pm i\epsilon)\big{)}^{-1}, \tag{1.22}\] \[R_{V}^{\pm}(\lambda):=R_{V}^{\pm}(\lambda\pm i0)=\lim_{\epsilon\to 0}\big{(}H-( \lambda\pm i\epsilon)\big{)}^{-1}. \tag{1.23}\] By using the equality (1.21) for \(R_{0}(z)\) with \(z=w^{4}\) for \(w\) in the first quadrant of the complex plane, and taking limits as \(w\to\lambda\) and \(w\to i\lambda\), we have \[R_{0}^{\pm}(\lambda^{4})=\frac{1}{2\lambda^{2}}\big{(}R^{\pm}(-\Delta;\lambda ^{2})-R(-\Delta;-\lambda^{2})\big{)},\ \lambda>0. \tag{1.24}\] It is well-known that by the limiting absorption principle (see e.g. [1]), \(R^{\pm}(-\Delta;\lambda^{2})\) are well-defined as the bounded operators of \(B(L_{s}^{2},L_{-s}^{2})\) for any \(s>1/2\), therefore \(R_{0}^{\pm}(\lambda^{4})\) are also well-defined between the weighted spaces. This property is extended to \(R^{\pm}_{V}(\lambda^{4})\) for \(\lambda>0\) for certain decay bounded potentials, see [19]. Note that the kernel of the free resolvent of Laplacian in dimension three (see e.g. [23]): \[R^{\pm}(-\Delta;\lambda^{2})(x,y)=\frac{e^{\pm i\lambda|x-y|}}{4\pi|x-y|},\ x,y \in\mathbb{R}^{3}, \tag{1.25}\] so by the identity (1.24) we have \[R^{\pm}_{0}(\lambda^{4})(x,y)=\frac{1}{2\lambda^{2}}\Big{(}\frac{e^{\pm i \lambda|x-y|}}{4\pi|x-y|}-\frac{e^{-\lambda|x-y|}}{4\pi|x-y|}\Big{)}. \tag{1.26}\] Notice that \[\cos(t\sqrt{H})=\frac{e^{it\sqrt{H}}+e^{-it\sqrt{H}}}{2},\ \ \frac{\sin(t\sqrt{H})}{ \sqrt{H}}=\frac{e^{it\sqrt{H}}-e^{-it\sqrt{H}}}{2i\sqrt{H}}. 
\tag{1.27}\] In order to estimate (1.19) and (1.20), it suffices to estimate \(H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)\) for \(\alpha=-1,0\) by using \[H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)f= \frac{2}{\pi i}\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2 \alpha}[R^{+}_{V}(\lambda^{4})-R^{-}_{V}(\lambda^{4})]fd\lambda. \tag{1.28}\] Next, we decompose the integral (1.28) into the low energy \(\{0\leq\lambda\ll 1\}\) and the high energy \(\{\lambda\gg 1\}\) two parts. For the high energy part, we will use the following resolvent identity: \[R^{\pm}_{V}(\lambda^{4})=R^{\pm}_{0}(\lambda^{4})-R^{\pm}_{0}(\lambda^{4})VR^{ \pm}_{0}(\lambda^{4})+R^{\pm}_{0}(\lambda^{4})VR^{\pm}_{V}(\lambda^{4})VR^{\pm }_{0}(\lambda^{4}). \tag{1.29}\] Hence we will need to study the contribution of every term in (1.29) to the integral (1.28). For the low energy part, we need to establish the expansions of the resolvent operators \(R^{\pm}_{V}(\lambda^{4})\) for \(\lambda\) near zero. Set \(U(x)=\mbox{sign}\big{(}V(x)\big{)}\) and \(v(x)=|V(x)|^{1/2}\). Let \(M^{\pm}(\lambda)=U+vR^{\pm}_{0}(\lambda^{4})v\). Then we have the following symmetric resolvent identity \[R^{\pm}_{V}(\lambda^{4})=R^{\pm}_{0}(\lambda^{4})-R^{\pm}_{0}(\lambda^{4})v(M ^{\pm}(\lambda))^{-1}vR^{\pm}_{0}(\lambda^{4}).\] Now we need to establish the expansions for \((M^{\pm}(\lambda))^{-1}\). In the regular case, for example, the expansions of \((M^{\pm}(\lambda))^{-1}\) is of the following form( see Theorem 3.1 below ) \[\big{(}M^{\pm}(\lambda)\big{)}^{-1}=QA^{0}_{0,1}Q+\Gamma_{1}(\lambda),\ \ \lambda<<1.\] where \(A^{0}_{0,1},Q\in B(L^{2},L^{2})\) satisfying \(Qv=0\), \(\|\Gamma_{1}(\lambda)\|_{L^{2}\to L^{2}}=O(\lambda^{2})\). Thus, we need to study the following integral for low energy \[\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha}\big{[}R^{\pm}_{0}( \lambda^{4})v(QA^{0}_{0,1}Q)vR^{\pm}_{0}(\lambda^{4})\big{]}(x,y)d\lambda. \tag{1.30}\] In the integral (1.30), in order to make use of cancellations \(Qv=0\), we will use Lemma 3.9, which is also frequently used to study the low energy dispersive estimates for resonance cases. Moreover, we also make use of Lemma 2.1 to estimate the oscillatory integral (1.30). The paper is organized as follows. In Section 2, we establish the dispersive bounds in the free case. In Section 3, we first recall the resolvent expansions when \(\lambda\) is near zero, then by Stone's formula, Littlewood-Paley method and oscillatory integral we establish the low energy decay bounds of Theorem 1.1 and Theorem 1.3. In Section 4, we prove Theorem 1.1 and Theorem 1.3 in high energy. Finally, for the convenience of reader, we give the asymptotic expansion of \((M^{\pm}(\lambda))^{-1}\) as \(\lambda\) is near zero as an appendix. ## 2. The decay estimates for the free case In this section, we are devoted to establishing the decay bounds of the free case by Littlewood-Paley method and oscillatory integral theory. Choosing a fixed even function \(\varphi\in C_{c}^{\infty}(\mathbb{R})\) such that \(\varphi(s)=1\) for \(|s|\leq\frac{1}{2}\) and \(\varphi(s)=0\) for \(|s|\geq 1\). Let \(\varphi_{N}(s)=\varphi(2^{-N}s)-\varphi(2^{-N+1}s),\ N\in\mathbb{Z}\). Then \(\varphi_{N}(s)=\varphi_{0}(2^{-N}s)\), \(\text{supp}\varphi_{0}\subset[\frac{1}{4},1]\) and \[\sum_{N=-\infty}^{\infty}\varphi_{0}(2^{-N}s)=1,\ s\in\mathbb{R}\setminus\{0\}. 
\tag{2.1}\] To estimate the integrals in (1.19) and (1.20) when \(V=0\), by (1.27) it is enough to establish the \(L^{1}-L^{\infty}\) bounds of \((-\Delta)^{\frac{\alpha}{2}}e^{it\Delta}\) for \(\alpha=-1,0\). By using Stone's formula (1.28) and (2.1), one has \[\begin{split}(-\Delta)^{\frac{\alpha}{2}}e^{it\Delta}f=& \frac{2}{\pi i}\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2 \alpha}[R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})]fd\lambda\\ =&\frac{2}{\pi i}\sum_{N=-\infty}^{\infty}\int_{0}^ {\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha}\varphi_{0}(2^{-N}\lambda)[R_{0} ^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})]fd\lambda.\end{split} \tag{2.2}\] Therefore, in order to obtain the \(L^{1}-L^{\infty}\) decay estimate of \((-\Delta)^{\frac{\alpha}{2}}e^{it\Delta}\), it suffices to estimate the following integral kernel for each \(N\): \[\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha}\varphi_{0}(2^{-N} \lambda)R_{0}^{\pm}(\lambda^{4})(x,y)d\lambda.\] At first, we give a lemma which plays an important role in estimating our integrals mentioned in this paper. Since its proof is similar to Lemma 3.3 in [38], we omit the details. **Lemma 2.1**.: _Let \(A\) be some subset of \(\mathbb{Z}\). Suppose that \(\Phi(s,z)\) is a function on \(\mathbb{R}\times\mathbb{R}^{m}\) which is smooth in the first variable \(s\), and satisfies for any \((s,z)\in[1/4,1]\times\mathbb{R}^{m}\),_ \[|\partial_{s}^{k}\Phi(2^{N}s,z)|\lesssim 1,\,k=0,1,N\in A\subset\mathbb{Z}.\] _Suppose that \(\varphi_{0}(s)\) be a smoothing function of \(\mathbb{R}\) defined in (2.1), \(\Psi(z)\) is a nonnegative function on \(\mathbb{R}^{m}\). Let \(N_{0}=\left[\frac{1}{3}\log_{2}\frac{\Psi(z)}{|t|}\right]\), for each \(z\in\mathbb{R}^{m}\), \(l\in\mathbb{R}\) and \(t\neq 0\), we have_ \[\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}e^{\pm i2^{N}s\Psi(z)}s^{l}\varphi_ {0}(s)\Phi(2^{N}s,z)ds\Big{|}\lesssim\begin{cases}(1+|t|2^{2N})^{-\frac{1}{2}}, &\text{if }|N-N_{0}|\leq 2,\\ (1+|t|2^{2N})^{-1},&\text{if }|N-N_{0}|>2.\end{cases}\] Throughout this paper, \(\Theta_{N_{0},N}(t)\) always denotes the following function: \[\Theta_{N_{0},N}(t):=\begin{cases}(1+|t|2^{2N})^{-\frac{3}{2}},&\text{if }|N-N_{0}|\leq 2,\\ (1+|t|2^{2N})^{-2},&\text{if }|N-N_{0}|>2.\end{cases} \tag{2.3}\] where \(N_{0}=\left[\frac{1}{3}\log_{2}\frac{\Psi(z)}{|t|}\right]\) and \(\Psi(z)\) is a non-negative real value function on \(\mathbb{R}^{m}\). **Proposition 2.2**.: _Let \(\Theta_{N_{0},N}(t)\) be the function defined in (2.3) with \(\Psi(z)=|x-y|\) and \(z=(x,y)\in\mathbb{R}^{6}\). Then for each \(x\neq y\) and \(-\frac{3}{2}<\alpha\leq 0\),_ \[\Big{|}\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha}\varphi_{0}(2^{-N} \lambda)R_{0}^{\pm}(\lambda^{4})(x,y)d\lambda\Big{|}\lesssim 2^{(3+2\alpha)N} \Theta_{N_{0},N}(t). \tag{2.4}\] _Moreover,_ \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{ 3+2\alpha}R_{0}^{\pm}(\lambda^{4})(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{3+2 \alpha}{2}}, \tag{2.5}\] _which gives_ \[\big{\|}(-\Delta)^{\frac{\alpha}{2}}e^{it\Delta}f\big{\|}_{L^{\infty}( \mathbb{R}^{3})}\lesssim|t|^{-\frac{3+2\alpha}{2}}\big{\|}f\big{\|}_{L^{1}( \mathbb{R}^{3})}. \tag{2.6}\] _As a consequence, we immediately obtain that_ \[\big{\|}\cos(t\Delta)f\big{\|}_{L^{\infty}(\mathbb{R}^{3})}\lesssim|t|^{-\frac {3}{2}}\ \|f\|_{L^{1}(\mathbb{R}^{3})}, \tag{2.7}\] _and_ \[\big{\|}\frac{\sin(t\Delta)}{\Delta}g\big{\|}_{L^{\infty}(\mathbb{R}^{3})} \lesssim|t|^{-\frac{1}{2}}\ \|g\|_{L^{1}(\mathbb{R}^{3})}. 
\tag{2.8}\] **Remark 2.3**.: _When \(\alpha=0\) in (2.6), it is well-known that_ \[e^{it\Delta}f(x)=\frac{1}{(4\pi it)^{\frac{3}{2}}}\int_{\mathbb{R}^{3}}e^{- \frac{i|x-y|^{2}}{4t}}f(y)dy,\ \ f\in L^{1}\cap L^{2}. \tag{2.9}\] _As a consequence, we immediately obtain the decay estimate (2.7) from Young's inequality and Gaussian integral (2.9) above._ Proof.: For each \(N\in\mathbb{Z}\), we write \[K_{0,N}^{\pm}(t,x,y):=\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha} \varphi_{0}(2^{-N}\lambda)R_{0}^{\pm}(\lambda^{4})(x,y)d\lambda.\] Let \(F^{\pm}(p)=\frac{e^{\pm ip}-e^{-p}}{p}\), \(p\geq 0\), by the identity (1.26), we have \[R_{0}^{\pm}(\lambda^{4})(x,y)=\frac{1}{8\pi\lambda}F^{\pm}(\lambda|x-y|). \tag{2.10}\] Let \(\lambda=2^{N}s\), then \[K_{0,N}^{\pm}(t,x,y)= \frac{2^{(3+2\alpha)N}}{8\pi}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}s ^{2+2\alpha}\varphi_{0}(s)F^{\pm}(2^{N}s|x-y|)ds.\] Note that \(s\in\mathrm{supp}\varphi_{0}\subset[1/4,1]\), by using integration by parts, we have \[\begin{split}|K_{0,N}^{\pm}(t,x,y)|\lesssim&\frac{ 2^{(3+2\alpha)N}}{1+|t|2^{2N}}\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}} \partial_{s}\Big{(}s^{1+2\alpha}\varphi_{0}(s)F^{\pm}(2^{N}s|x-y|)\Big{)}ds \Big{|}\\ \lesssim&\frac{2^{(3+2\alpha)N}}{1+|t|2^{2N}}\bigg{(} \Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}\partial_{s}\big{(}s^{1+2\alpha} \varphi_{0}(s)\big{)}F^{\pm}(2^{N}s|x-y|)ds\Big{|}\\ &+\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{1+2\alpha}\varphi_ {0}(s)\partial_{s}\big{(}F^{\pm}(2^{N}s|x-y|)\big{)}ds\Big{|}\bigg{)}\\ :=&\frac{2^{(3+2\alpha)N}}{1+|t|2^{2N}}\Big{(}\big{|} \mathcal{E}_{01,N}^{\pm}(t,x,y)\big{|}+\big{|}\mathcal{E}_{02,N}^{\pm}(t,x,y) \big{|}\Big{)}.\end{split} \tag{2.11}\] We first estimate \(\mathcal{E}^{\pm}_{02,N}(t,x,y)\). Let \(r=|x-y|\), since \[\partial_{s}F^{\pm}(2^{N}sr)=s^{-1}2^{N}sr(F^{\pm})^{\prime}(2^{N}sr):=e^{\pm i2 ^{N}sr}s^{-1}F^{\pm}_{1}(2^{N}sr),\] where \[F^{\pm}_{1}(p)=pe^{\mp ip}(F^{\pm})^{\prime}(p)=\frac{(\pm ip-1)+(p+1)e^{-p\mp ip }}{p}.\] Hence one has \[\mathcal{E}^{\pm}_{02,N}(t,x,y)=\int_{0}^{\infty}e^{-it2^{2N}s^{2}}e^{\pm i2^{ N}s|x-y|}s^{2\alpha}\varphi_{0}(s)F^{\pm}_{1}(2^{N}s|x-y|)ds.\] Observe that for any \(x,y\), \[|\partial_{s}^{k}F^{\pm}_{1}(2^{N}s|x-y|)|\lesssim 1,\,k=0,1,\] by Lemma 2.1 with \(z=(x,y)\), \(\Psi(z)=|x-y|\) and \(\Phi(2^{N}s,z)=F^{\pm}_{1}(2^{N}s|x-y|)\), we obtain that \(\mathcal{E}^{\pm}_{02,N}\) is bounded by \((1+|t|2^{2N})\Theta_{N_{0},N}(t)\). Similarly, we obtain the same bounds for \(\mathcal{E}^{\pm}_{01,N}\). Furthermore, by (2.11) we get that \(K^{\pm}_{0,N}\) is bounded by \(2^{(3+2\alpha)N}\Theta_{N_{0},N}(t)\). Thus we obtain that the estimate (2.4) holds. Finally, in order to obtain (2.5), it's enough to show that for any \(x\neq y\) and \(-\frac{3}{2}<\alpha\leq 0\), \[\sum_{N=-\infty}^{+\infty}|K^{\pm}_{0,N}(t,x,y)|\lesssim|t|^{-\frac{3+2\alpha} {2}}. \tag{2.12}\] In fact, for \(t\neq 0\), there exists \(N_{0}^{\prime}\in\mathbb{Z}\) such that \(2^{N_{0}^{\prime}}\sim|t|^{-\frac{1}{2}}\). 
If \(-\frac{3}{2}<\alpha<0\), then we have for any \(x\neq y\), \[\begin{split}\sum_{N=-\infty}^{+\infty}|K^{\pm}_{0,N}(t,x,y)|&\lesssim\sum_{N=-\infty}^{+\infty}2^{(3+2\alpha)N}(1+|t|2^{2N})^{-\frac{3}{2}}\\ &\lesssim\sum_{N=-\infty}^{N_{0}^{\prime}}2^{(3+2\alpha)N}+\sum_{N=N_{0}^{\prime}+1}^{+\infty}2^{(3+2\alpha)N}(|t|2^{2N})^{-\frac{3}{2}}\\ &\lesssim|t|^{-\frac{3+2\alpha}{2}}.\end{split} \tag{2.13}\] If \(\alpha=0\), then we have for any \(x\neq y\), \[\begin{split}\sum_{N=-\infty}^{+\infty}|K^{\pm}_{0,N}(t,x,y)|&\lesssim\sum_{|N-N_{0}|\leq 2}2^{3N}(1+|t|2^{2N})^{-\frac{3}{2}}+\sum_{|N-N_{0}|>2}2^{3N}(1+|t|2^{2N})^{-2}\\ &\lesssim|t|^{-\frac{3}{2}}+\sum_{N=-\infty}^{N_{0}^{\prime}}2^{3N}+\sum_{N=N_{0}^{\prime}+1}^{+\infty}2^{3N}(|t|2^{2N})^{-2}\\ &\lesssim|t|^{-\frac{3}{2}}.\end{split} \tag{2.14}\] Hence the estimate (2.12) is proved, which gives (2.5). By (2.2) it immediately follows that for \(-\frac{3}{2}<\alpha\leq 0\), \[\left\|(-\Delta)^{\frac{\alpha}{2}}e^{it\Delta}f\right\|_{L^{\infty}(\mathbb{R}^{3})}\lesssim|t|^{-\frac{3+2\alpha}{2}}\|f\|_{L^{1}(\mathbb{R}^{3})}.\] Furthermore, recalling the identity (1.27), we obtain the desired estimates (2.7) and (2.8).

## 3. Low energy decay estimates

In this section, we are devoted to establishing the low energy decay estimates of the solution operators to the higher order wave equation (1.1). We first need to study the asymptotic expansions of the perturbed resolvent \(R_{V}(\lambda^{4})\) for \(\lambda\) near zero (see [15]); then, by Stone's formula, the Littlewood-Paley method and oscillatory integral theory, we obtain the decay bounds of Theorem 1.1 and Theorem 1.3 for low energy.

### Asymptotic expansions of resolvent near zero

In this subsection, we study the asymptotic expansions of the perturbed resolvent \(R_{V}(\lambda^{4})\) in a neighborhood of the zero threshold. By using the free resolvent kernel \(R_{0}^{\pm}(\lambda^{4})(x,y)\) in (1.26), we have the following expression when \(\lambda|x-y|<1\): \[\begin{split} R_{0}^{\pm}(\lambda^{4})(x,y)=&\frac{a^{\pm}}{\lambda}I(x,y)+G_{0}(x,y)+a_{1}^{\pm}\lambda G_{1}(x,y)+a_{3}^{\pm}\lambda^{3}G_{3}(x,y)\\ &+\lambda^{4}G_{4}(x,y)+\sum_{k=5}^{N}a_{k}^{\pm}\lambda^{k}G_{k}(x,y)+O\big{(}\lambda^{N+1}|x-y|^{N+2}\big{)},\end{split} \tag{3.1}\] where \[\begin{split} G_{0}(x,y)&=-\frac{|x-y|}{8\pi},\ G_{1}(x,y)=|x-y|^{2},\ G_{3}(x,y)=|x-y|^{4},\\ G_{4}(x,y)&=-\frac{|x-y|^{5}}{4\pi\cdot 6!},\ G_{k}(x,y)=|x-y|^{k+1},\ k\geq 5,\end{split} \tag{3.2}\] and the coefficients are \[a^{\pm}=\frac{1\pm i}{8\pi},\ a_{1}^{\pm}=\frac{1\mp i}{8\pi\cdot 3!},\ a_{3}^{\pm}=\frac{1\pm i}{8\pi\cdot 5!},\ a_{k}^{\pm}=\frac{(-1)^{k+1}+(\pm i)^{k+2}}{8\pi\cdot(k+2)!}\ (k\geq 5).\] In fact, the expansion remains valid when \(\lambda|x-y|\geq 1\). In the sequel, we also denote by \(G_{k}\) the operators with the integral kernels \(G_{k}(x,y)\) above. In particular, \(G_{0}=(\Delta^{2})^{-1}\). Let \(U(x)=\operatorname{sign}\bigl{(}V(x)\bigr{)}\) and \(v(x)=|V(x)|^{1/2}\); then \(V=Uv^{2}\) and the following symmetric resolvent identity holds: \[R_{V}^{\pm}(\lambda^{4})=R_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v(M^{\pm}(\lambda))^{-1}vR_{0}^{\pm}(\lambda^{4}), \tag{3.3}\] where \(M^{\pm}(\lambda)=U+vR_{0}^{\pm}(\lambda^{4})v\). Hence, we need to obtain the expansions for \((M^{\pm}(\lambda))^{-1}\). Let \(T=U+vG_{0}v\), and let \(P=\|V\|_{L^{1}}^{-1}v\langle v,\cdot\rangle\) denote the orthogonal projection onto the space spanned by \(v\).
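For the reader's convenience, we sketch the standard formal derivation of (3.3). Since \(V=vUv\) and \(U^{2}=I\), the resolvent identity \(R_{V}^{\pm}(\lambda^{4})=R_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})VR_{V}^{\pm}(\lambda^{4})\) yields \[\big(U+vR_{0}^{\pm}(\lambda^{4})v\big)UvR_{V}^{\pm}(\lambda^{4})=\big(I+vR_{0}^{\pm}(\lambda^{4})vU\big)vR_{V}^{\pm}(\lambda^{4})=vR_{0}^{\pm}(\lambda^{4}),\] so that \(UvR_{V}^{\pm}(\lambda^{4})=(M^{\pm}(\lambda))^{-1}vR_{0}^{\pm}(\lambda^{4})\) whenever \(M^{\pm}(\lambda)\) is invertible on \(L^{2}\), and therefore \[R_{V}^{\pm}(\lambda^{4})=R_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})vUvR_{V}^{\pm}(\lambda^{4})=R_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v(M^{\pm}(\lambda))^{-1}vR_{0}^{\pm}(\lambda^{4}),\] which is exactly (3.3).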
By the expansions (3.1) of free resolvent \(R_{0}^{\pm}(\lambda^{4})\), we have the following expansions of \(M^{\pm}(\lambda)\). **Lemma 3.1**.: _Let \(|V(x)|\lesssim(1+|x|)^{-\beta}\) with some \(\beta>0\). Set \(\tilde{a}^{\pm}=a^{\pm}\|V\|_{L^{1}}\) and \(M^{\pm}(\lambda)=U+vR_{0}^{\pm}(\lambda^{4})v\). Then the following identities of \(M^{\pm}(\lambda)\) hold on \(\mathbf{B}(L^{2},L^{2})\) for \(\lambda>0\):_ * _If_ \(\beta>7\)_, then_ \[M^{\pm}(\lambda)=\frac{\tilde{a}^{\pm}}{\lambda}P+T+\Gamma_{1}(\lambda);\] (3.4) * _If_ \(\beta>11\)_, then_ \[M^{\pm}(\lambda)=\frac{\tilde{a}^{\pm}}{\lambda}P+T+a_{1}^{\pm}\lambda vG_{1}v +\Gamma_{3}(\lambda);\] (3.5) _._ 3. _If_ \(\beta>19\)_, then_ \[\begin{split} M^{\pm}(\lambda)=&\frac{\tilde{a}^{\pm}}{ \lambda}P+T+a_{1}^{\pm}\lambda vG_{1}v+a_{3}^{\pm}\lambda^{3}vG_{3}v\\ &+\lambda^{4}vG_{4}v+a_{5}^{\pm}\lambda^{5}vG_{5}v+a_{6}^{\pm} \lambda^{6}vG_{6}v+\Gamma_{7}(\lambda);\end{split}\] (3.6) 4. _If_ \(\beta>23\)_, then_ \[\begin{split} M^{\pm}(\lambda)=\frac{\tilde{a}^{\pm}}{\lambda}P& +T+a_{1}^{\pm}\lambda vG_{1}v+a_{3}^{\pm}\lambda^{3}vG_{3}v\\ &+\lambda^{4}vG_{4}v+\sum_{k=5}^{8}a_{k}^{\pm}\lambda^{k}vG_{k}v +\Gamma_{9}(\lambda);\end{split}\] (3.7) _where \(\Gamma_{k}(\lambda)(k=1,3,7,9)\) be \(\lambda\)-dependent operators satisfying that_ Now we introduce the type of resonances that may occur at the zero energy as follows: **Definition 3.2**.: _Let \(Q=I-P\) and \(T=U+vG_{0}v\)._ 1. _If_ \(QTQ\) _is invertible on_ \(QL^{2}\)_, then we say that zero is a regular point of_ \(H\)_. In this case, we define_ \(D_{0}=(QTQ)^{-1}\) _as an operator on_ \(QL^{2}\)_._ 2. _Assume that_ \(QTQ\) _is not invertible on_ \(QL^{2}.\) _Let_ \(S_{1}\) _be the Riesz projection onto the kernel of_ \(QTQ\)_. Then_ \(QTQ+S_{1}\) _is invertible on_ \(QL^{2}\)_. In this case, we define_ \(D_{0}=\left(QTQ+S_{1}\right)^{-1}\) _as an operator on_ \(QL^{2}\)_, which doesn't conflict with the previous definition since_ \(S_{1}=0\) _when zero is a regular point. We say that zero is the first kind resonance of_ \(H\) _if_ \[T_{1}:=S_{1}TPTS_{1}-\frac{\|V\|_{L^{1}}}{3\cdot(8\pi)^{2}}S_{1}vG_{1}vS_{1}\] (3.8) _is invertible on_ \(S_{1}L^{2}\)_. We define_ \(D_{1}=T_{1}^{-1}\) _as an operator on_ \(S_{1}L^{2}\)_._ 3. _Assume that_ \(T_{1}\) _is not invertible on_ \(S_{1}L^{2}.\) _Let_ \(S_{2}\) _be the Riesz projection onto the kernel of_ \(T_{1}\)_. Then_ \(T_{1}+S_{2}\) _is invertible on_ \(S_{1}L^{2}.\) _In this case, we define_ \(D_{1}=\left(T_{1}+S_{2}\right)^{-1}\) _as an operator on_ \(S_{1}L^{2}\)_, which doesn't conflict with previous definition since_ \(S_{2}=0\) _when zero is the first kind of resonance. We say that zero is the second kind resonance of_ \(H\) _if_ \[T_{2}:=S_{2}vG_{3}vS_{2}+\frac{10}{3\|V\|_{L^{1}}}S_{2}(vG_{1}v)^{2}S_{2}-\frac {10}{3\|V\|_{L^{1}}}S_{2}vG_{1}vTD_{1}TvG_{1}vS_{2}\] (3.9) _is invertible on_ \(S_{2}L^{2}\)_. We define_ \(D_{2}=T_{2}^{-1}\) _as an operator on_ \(S_{2}L^{2}\)_._ 4. _Finally if_ \(T_{2}\) _is not invertible on_ \(S_{2}L^{2}\)_, we say that zero is the third kind resonance of_ \(H\)_. In this case, the operator_ \(T_{3}:=S_{3}vG_{4}vS_{3}\) _is always invertible on_ \(S_{3}L^{2}\) _(see Lemma_ 5.2 _in Appendix) where_ \(S_{3}\) _be the Riesz projection onto the kernel of_ \(T_{2},\) _let_ \(D_{3}=T_{3}^{-1}\) _as an operator on_ \(S_{3}L^{2}\)_. 
We define_ \(D_{2}=(T_{2}+S_{3})^{-1}\) _as an operator on_ \(S_{2}L^{2}\)_._ From the definition above, we have \(S_{1}L^{2}\supseteq S_{2}L^{2}\supseteq S_{3}L^{2},\) which describe the zero energy resonance types of \(H\) as follows: * zero is a regular point of \(H\) if and only if \(S_{1}L^{2}=\{0\}\); * zero is the first kind resonance of \(H\) if and only if \(S_{1}L^{2}\neq\{0\}\) and \(S_{2}L^{2}=\{0\}\); * zero is the second kind resonance of \(H\) if and only if \(S_{2}L^{2}\neq\{0\}\) and \(S_{3}L^{2}=\{0\}\); * zero is an eigenvalue of \(H\) ( i.e. the third kind resonance ) if and only if \(S_{3}L^{2}\neq\{0\}\). Noting that Theorem 5.3 gives the characterizations of threshold spectral subspaces \(S_{j}L^{2}(j=1,2,3)\) by the distributional solution of \(H\phi=0\) in Appendix, hence we have the following statements: * zero is the first kind resonance of \(H\) if there exists some nonzero \(\phi\in L^{2}_{-\sigma}(\mathbb{R}^{3})\) for \(\sigma>\frac{3}{2}\) but no any nonzero \(\phi\in L^{2}_{-\sigma}(\mathbb{R}^{3})\) with \(\sigma>\frac{1}{2}\) such that \(H\phi=0\) in the distributional sense; * zero is the second kind resonance of \(H\) if there exists some nonzero \(\phi\in L^{2}_{-\sigma}(\mathbb{R}^{3})\) for \(\sigma>\frac{1}{2}\) but no any nonzero \(\phi\in L^{2}\) such that \(H\phi=0\) in the distributional sense; * zero is the third kind resonance (i.e. eigenvalue) of \(H\) if there exists some nonzero \(\phi\in L^{2}(\mathbb{R}^{3})\) such that \(H\phi=0\) in the distributional sense; * zero is a regular point of \(H\) if zero is neither a resonance nor an eigenvalue of \(H\). Furthermore, since \(vG_{0}v\) is a Hilbert-Schmidt operator, and \(T=U+vG_{0}v\) is the compact perturbation of \(U\) (see e.g. [14, 25]). Hence \(S_{1}\) is a finite-rank projection by the Fredholm alternative theorem. Notice that \(S_{3}\leq S_{2}\leq S_{1}\), then all \(S_{j}(j=1,2,3)\) are finite-rank operators. Moreover, by the definitions of \(S_{j}(j=1,2,3)\), we have that \(S_{i}D_{j}=D_{j}S_{i}=S_{i}(i\geq j)\) and \(S_{i}D_{j}=D_{j}S_{i}=D_{j}(i<j)\). **Definition 3.3**.: _We say an operator \(T:\,L^{2}(\mathbb{R}^{3})\to L^{2}(\mathbb{R}^{3})\) with kernel \(T(\cdot,\cdot)\) is absolutely bounded if the operator with the kernel \(|T(\cdot,\cdot)|\) is bounded from \(L^{2}(\mathbb{R}^{3})\) into itself._ We remark that Hilbert-Schmidt and finite-rank operators are absolutely bounded operators. Moreover, we have the following proposition, see Lemma 4.3 in [15]. **Proposition 3.4**.: _Let \(|V(x)|\leq(1+|x|)^{-7-}\). Then \(QD_{0}Q\) is absolutely bounded._ In the following, we will give the specific characterizations of projection spaces \(S_{j}L^{2}(j=1,2,3)\) by the orthogonality of these projection operators \(S_{j}(j=1,2,3)\). **Lemma 3.5**.: _Let \(S_{j}(j=1,2,3)\) be the projection operators given by Definition 3.2. Then_ * \(f\in S_{1}L^{2}\) _if and only if_ \(f\in\text{ker}(QTQ)\)_. 
Moreover,_ \(QTS_{1}=S_{1}TQ=0\)_._ * \(f\in S_{2}L^{2}\) _if and only if_ \[f\in\text{ker}(T_{1}) =\text{ker}(S_{1}TPTS_{1})\cap\text{ker}(S_{1}vG_{1}vS_{1})\] \[=\{f\in S_{1}L^{2}\big{|}PTf=0,\langle x_{i}v,f\rangle=0,\,j=1,2,3\}.\] _In particular,_ \(TS_{2}=S_{2}T=0\)_,_ \(QvG_{1}vS_{2}=S_{2}vG_{1}vQ=0\)_._ * \(f\in S_{3}L^{2}\) _if and only if_ \[f\in\text{ker}(T_{2})=\{f\in S_{2}L^{2}\big{|}\langle x_{i}x_{j}v,f\rangle=0,i, j=1,2,3\}.\] **Remark 3.6**.: _We remark that these spaces \((S_{1}-S_{2})L^{2}\), \((S_{2}-S_{3})L^{2}\) and \(S_{3}L^{2}\) correspond to each zero resonance type, respectively._ Now we will give asymptotic expansions of \(\left(M^{\pm}(\lambda)\right)^{-1}\) as follows: **Theorem 3.7**.: _Let \(S_{j}\)(j=1,2,3) be the operators defined in Definition 3.2. Assume that \(|V(x)|\lesssim(1+|x|)^{-\beta}\) with some \(\beta>0\). Then we have the following expansions of \(\left(M^{\pm}(\lambda)\right)^{-1}\) in \(L^{2}(\mathbb{R}^{3})\) when \(0<\lambda\ll 1\)._ 1. _If zero is a regular point of_ \(H\) _and_ \(\beta>7\)_, then_ \[\left(M^{\pm}(\lambda)\right)^{-1}= QA_{0,1}^{0}Q+\Gamma_{1}(\lambda);\] (3.10) 2. _If zero is the first kind resonance of_ \(H\) _and_ \(\beta>11\)_, then_ \[\left(M^{\pm}(\lambda)\right)^{-1}=\frac{S_{1}A_{-1,1}^{1}S_{1}}{\lambda}+ \left(S_{1}A_{0,1}^{1}+A_{0,2}^{1}S_{1}+QA_{0,3}^{1}Q\right)+\Gamma_{1}( \lambda);\] (3.11) 3. _If zero is the second kind resonance of_ \(H\) _and_ \(\beta>19\)_, then_ \[\left(M^{\pm}(\lambda)\right)^{-1}= \frac{S_{2}A_{-3,1}^{2}S_{2}}{\lambda^{3}}+\frac{S_{2}A_{-2,1}^{ 2}S_{1}+S_{1}A_{-2,2}^{2}S_{2}}{\lambda^{2}}+\frac{S_{2}A_{-1,1}^{2}+A_{-1,2}^ {2}S_{2}}{\lambda}\] (3.12) \[+\frac{S_{1}A_{-1,3}^{2}S_{1}}{\lambda}+\left(S_{1}A_{0,1}^{2}+A_ {0,2}^{2}S_{1}+QA_{0,3}^{2}Q\right)+\Gamma_{1}(\lambda);\] 4. _If zero is the third kind resonance of_ \(H\) _and_ \(\beta>23\)_, then_ \[\left(M^{\pm}(\lambda)\right)^{-1}= \frac{S_{3}D_{3}S_{3}}{\lambda^{4}}+\frac{S_{2}A_{-3,1}^{3}S_{2}} {\lambda^{3}}+\frac{S_{2}A_{-2,1}^{3}S_{1}+S_{1}A_{-2,2}^{3}S_{2}}{\lambda^{2} }+\frac{S_{2}A_{-1,1}^{3}+A_{-1,2}^{3}S_{2}}{\lambda}\] (3.13) \[+\frac{S_{1}A_{-1,3}^{3}S_{1}}{\lambda}+\left(S_{1}A_{0,1}^{3}+A_ {0,2}^{3}S_{1}+QA_{0,3}^{3}Q\right)+\Gamma_{1}(\lambda);\] _where \(A_{i,j}^{k}\) are \(\lambda\)-independent absolutely bounded operators in \(L^{2}(\mathbb{R}^{3})\); \(\Gamma_{1}(\lambda)\) be a \(\lambda\)-dependent operator which may vary from line to line, and it satisfies_ The asymptotic expansions of \(\left(M^{\pm}(\lambda)\right)^{-1}\) in \(L^{2}(\mathbb{R}^{3})\) above can be seen in [15], also see [19] for the regular case. In Theorem 3.7 we use some different notations for our applications. For the convenience of readers, we give details of the proof as an appendix below. ### Low energy decay estimates In this subsection, we are devoted to establishing the low energy decay bounds for Theorem 1.1 and Theorem 1.3. By identities (1.27) it suffices to establish low energy dispersive bounds of \(H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)\) for \(\alpha=-1,0\). Below we use the smooth and even cut-off \(\chi\) given by \(\chi=1\) for \(|\lambda|<\lambda_{0}\ll 1\) and \(\chi=0\) for \(|\lambda|>2\lambda_{0}\), where \(\lambda_{0}\) is some sufficiently small positive constant. In analyzing the high energy later, we utilize the complementary cut-off \(\widetilde{\chi}(\lambda):=1-\chi(\lambda)\). 
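We also record the elementary reduction behind this: on the absolutely continuous subspace one has \[\cos(t\sqrt{H})=\frac{e^{it\sqrt{H}}+e^{-it\sqrt{H}}}{2},\qquad\frac{\sin(t\sqrt{H})}{\sqrt{H}}=\frac{e^{it\sqrt{H}}-e^{-it\sqrt{H}}}{2i}H^{-\frac{1}{2}},\] so that \(L^{1}-L^{\infty}\) bounds for \(H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)\) with \(\alpha=0\) and \(\alpha=-1\), together with the same bounds with \(t\) replaced by \(-t\), yield the stated bounds for \(\cos(t\sqrt{H})P_{ac}(H)\) and \(\frac{\sin(t\sqrt{H})}{\sqrt{H}}P_{ac}(H)\).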
By using Stone's formula, one has \[H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)f= \frac{2}{\pi i}\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2 \alpha}[R_{V}^{+}(\lambda^{4})-R_{V}^{-}(\lambda^{4})]fd\lambda \tag{3.14}\] \[= \frac{2}{\pi i}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}} \lambda^{3+2\alpha}[R_{V}^{+}(\lambda^{4})-R_{V}^{-}(\lambda^{4})]fd\lambda\] \[+\frac{2}{\pi i}\int_{0}^{\infty}\widetilde{\chi}(\lambda)e^{- it\lambda^{2}}\lambda^{3+2\alpha}[R_{V}^{+}(\lambda^{4})-R_{V}^{-}(\lambda^{4})]fd\lambda,\] where \(\chi(\lambda)=\sum\limits_{N=-\infty}^{N^{\prime}}\varphi_{0}(2^{-N}\lambda)\) and \(\widetilde{\chi}(\lambda)=\sum\limits_{N=N^{\prime}+1}^{+\infty}\varphi_{0}(2^{ -N}\lambda)\) for \(N^{\prime}<0\). We remark that the choice of the constant \(N^{\prime}\) depends on a sufficiently small neighborhood of \(\lambda=0\) in which the expansions of all resonance types in Theorem 3.7 hold. Hence in order to establish the low energy decay bounds for Theorem 1.1 and Theorem 1.3, by using (3.14), it suffices to prove the following theorem. **Theorem 3.8**.: _Let \(|V(x)|\lesssim(1+|x|)^{-\beta}\)\((x\in\mathbb{R}^{3})\) with some \(\beta>0\). Then_ * _If zero is a regular point of_ \(H\) _and_ \(\beta>7\)_, then for_ \(-\frac{3}{2}<\alpha\leq 0\)_,_ \[\big{\|}H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)\chi(H)\big{\|}_{L^{1} \to L^{\infty}}\lesssim|t|^{-\frac{3+2\alpha}{2}}.\] (3.15) * _If zero is the first kind resonance of_ \(H\) _and_ \(\beta>11\)_, then for_ \(\ -\frac{3}{2}<\alpha\leq 0\)_,_ \[\big{\|}H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)\chi(H)\big{\|}_{L^{1} \to L^{\infty}}\lesssim|t|^{-\frac{3+2\alpha}{2}}.\] (3.16) * _If zero is the second kind resonance of_ \(H\) _and_ \(\beta>19\)_, or the third kind resonance of_ \(H\) _and_ \(\beta>23\)_, then for_ \(\ -\frac{1}{2}<\alpha\leq 0\)_,_ \[\big{\|}H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)\chi(H)\big{\|}_{L^{1} \to L^{\infty}}\lesssim|t|^{-\frac{1+2\alpha}{2}}.\] (3.17) _Moreover, there are two time-dependent operators_ \(F_{t}\) _and_ \(G_{t}\) _satisfying_ \[\|F_{t}\|_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{1}{2}}\ \ \text{and}\ \ \|G_{t}\|_{L^{1}\to L^{\infty}}\lesssim|t|^{\frac{1}{2}},\] _such that_ \[\big{\|}\cos(t\sqrt{H})P_{ac}(H)\chi(H)-F_{t}\big{\|}_{L^{1}\to L^{ \infty}}\lesssim|t|^{-\frac{3}{2}},\] (3.18) \[\Big{\|}\frac{\sin(t\sqrt{H})}{\sqrt{H}}P_{ac}(H)\chi(H)-G_{t} \Big{\|}_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{1}{2}}.\] (3.19) Before proving Theorem 3.8, we first state the following lemma, which has a crucial role in making use of cancellations of projection operators \(Q,S_{j}(j=1,\cdots,5)\) in the asymptotical expansions of resolvent \(R_{V}(\lambda^{4})\) as \(\lambda\) near zero, and will be used frequently to obtain the low energy dispersive estimates for all cases. **Lemma 3.9**.: _Assume that \(x,y\in\mathbb{R}^{3}\) and \(\lambda>0\). We define \(w=w(x)=\dfrac{x}{|x|}\) for \(x\neq 0\) and \(w(x)=0\) for \(x=0\). Let \(\theta\in[0,1]\) and \(|y|\cos\alpha=\langle y,w(x-\theta y)\rangle\) where \(\alpha\equiv\alpha(x,y,\theta)\) is the angle between the vectors \(y\) and \(x-\theta y\)._ * _If_ \(F(p)\in C^{1}(\mathbb{R})\)_. Then_ \[F(\lambda|x-y|)=F(\lambda|x|)-\lambda|y|\int_{0}^{1}F^{\prime}(\lambda|x- \theta y|)\cos\alpha d\theta.\] * _If_ \(F(p)\in C^{2}(\mathbb{R})\) _and_ \(F^{\prime}(0)=0\)_. 
Then_ \[F(\lambda|x-y|)= F(\lambda|x|)-\lambda\big{\langle}y,w(x)\big{\rangle}F^{ \prime}(\lambda|x|)\] \[+\lambda^{2}|y|^{2}\int_{0}^{1}(1-\theta)\Big{(}\sin^{2}\alpha \dfrac{F^{\prime}(\lambda|x-\theta y|)}{\lambda|x-\theta y|}+\cos^{2}\alpha F^ {\prime\prime}(\lambda|x-\theta y|)\Big{)}d\theta.\] * _If_ \(F(p)\in C^{3}(\mathbb{R})\) _and_ \(F^{\prime}(0)=F^{\prime\prime}(0)=0\)_. Then_ \[F(\lambda|x-y|)= F(\lambda|x|)-\lambda\big{<}y,w(x)\big{>}F^{\prime}(\lambda|x|)+ \frac{\lambda^{2}}{2}\Big{[}\big{(}|y|^{2}-\big{<}y,w(x)\big{>}^{2}\big{)}\frac{ F^{\prime}(\lambda|x|)}{\lambda|x|}\] \[+\big{<}y,w(x)\big{>}^{2}F^{\prime\prime}(\lambda|x|)\Big{]}+\frac {\lambda^{3}|y|^{3}}{2}\int_{0}^{1}(1-\theta)^{2}\Big{[}3\cos\alpha\sin^{2} \alpha\Big{(}\frac{F^{\prime}(\lambda|x-\theta y|)}{\lambda^{2}|x-\theta y|^{ 2}}\] \[-\frac{F^{\prime\prime}(\lambda|x-\theta y|)}{\lambda|x-\theta y| }\Big{)}-\cos^{3}\alpha F^{(3)}(\lambda|x-\theta y|)\Big{]}d\theta.\] Proof.: By the same argument with the proof of Lemma 3.5 in [38], we can obtain this lemma. Here we omit the details. #### 3.2.1. **Regular case** In order to establish the lower energy estimate (3.15), recall that Stone's formula \[\begin{split} H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)\chi( H)f=&\frac{2}{\pi i}\int_{0}^{\infty}e^{-it\lambda^{2}}\chi( \lambda)\lambda^{3+2\alpha}[R_{V}^{+}(\lambda^{4})-R_{V}^{-}(\lambda^{4})]fd \lambda\\ =&\sum_{N=-\infty}^{N^{\prime}}\sum_{\pm}\frac{2}{ \pi i}\int_{0}^{\infty}e^{-it\lambda^{2}}\varphi_{0}(2^{-N}\lambda)\lambda^{3 +2\alpha}R_{V}^{\pm}(\lambda^{4})fd\lambda.\end{split} \tag{3.20}\] If zero is a regular point of \(H\), using (3.3) and (3.10), we have \[R_{V}^{\pm}(\lambda^{4})=R_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v \Big{(}QA_{0,1}^{0}Q\Big{)}vR_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4}) v\Gamma_{1}(\lambda)vR_{0}^{\pm}(\lambda^{4}). \tag{3.21}\] Combining with Proposition 2.2, in order to obtain (3.15), it suffices to prove the following Proposition 3.10 and Proposition 3.11. **Proposition 3.10**.: _Assume that \(|V(x)|\lesssim(1+|x|)^{-7-}\). Let \(\Theta_{N_{0},N}(t)\) be a function defined in (2.3) and \(N<N^{\prime}\). Then for each \(x,y\) and \(\frac{3}{2}<\alpha\leq 0\), we have_ \[\Big{|}\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha}\varphi_{0}(2^{-N }\lambda)\big{[}R_{0}^{\pm}(\lambda^{4})v(QA_{0,1}^{0}Q)vR_{0}^{\pm}(\lambda^{ 4})\big{]}(x,y)d\lambda\Big{|}\lesssim 2^{(3+2\alpha)N}\Theta_{N_{0},N}(t),\] _which gives that_ \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda ^{2}}\lambda^{3+2\alpha}\big{[}R_{0}^{\pm}(\lambda^{4})v(QA_{0,1}^{0}Q)vR_{0}^ {\pm}(\lambda^{4})\big{]}(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{3+2\alpha}{2 }}. \tag{3.22}\] _As a consequence, we have_ \[\Big{\|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda^{3+2\alpha} \big{[}R_{0}^{\pm}(\lambda^{4})v(QA_{0,1}^{0}Q)vR_{0}^{\pm}(\lambda^{4})\big{]} fd\lambda\Big{\|}_{L^{\infty}}\lesssim|t|^{-\frac{3+2\alpha}{2}}\big{\|}f \big{\|}_{L^{1}}.\] Proof.: We write \[K_{1,N}^{0,\pm}(t;x,y):=\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha} \varphi_{0}(2^{-N}\lambda)\big{[}R_{0}^{\pm}(\lambda^{4})vQA_{0,1}^{1}QvR_{0}^ {\pm}(\lambda^{4})\big{]}(x,y)d\lambda.\] Let \(F^{\pm}(p)=\frac{e^{\pm ip}-e^{-p}}{p},p\geq 0\). Then \(R_{0}^{\pm}(\lambda^{4})(x,y)=\frac{1}{8\pi\lambda}F^{\pm}(\lambda|x-y|)\). 
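Before proceeding, we point out the vanishing moment condition supplied by \(Q\): since \(\langle v,v\rangle=\|V\|_{L^{1}}\), we have \(Qv=v-Pv=0\), and hence, for any absolutely bounded operator \(A\) on \(L^{2}(\mathbb{R}^{3})\), \[\int_{\mathbb{R}^{3}}[vQAQv](u_{2},u_{1})\,du_{2}=\int_{\mathbb{R}^{3}}[vQAQv](u_{2},u_{1})\,du_{1}=0.\] This allows us to replace \(F^{\pm}(\lambda|x-u_{2}|)\) and \(F^{\pm}(\lambda|y-u_{1}|)\) by the differences \(F^{\pm}(\lambda|x-u_{2}|)-F^{\pm}(\lambda|x|)\) and \(F^{\pm}(\lambda|y-u_{1}|)-F^{\pm}(\lambda|y|)\), to which Lemma 3.9(i) applies; this is how the factors \(|u_{1}|\), \(|u_{2}|\) and the derivatives \((F^{\pm})^{\prime}\) appear in the computation below.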
Using the orthogonality \(Qv(x)=0\) and Lemma 3.9(i), one has \[[R_{0}^{\pm}(\lambda^{4})vQA_{0,1}^{1}QvR_{0}^{\pm}(\lambda^{4})]( x,y)\] \[= \frac{1}{64\pi^{2}\lambda^{2}}\int_{\mathbb{R}^{6}}F^{\pm}( \lambda|x-u_{2}|)[vQA_{0,1}^{1}Qv](u_{2},u_{1})F^{\pm}(\lambda|y-u_{1}|)du_{1} du_{2}\] \[= \frac{1}{64\pi^{2}}\int_{\mathbb{R}^{6}}\int_{0}^{1}\int_{0}^{1} \cos\alpha_{2}\cos\alpha_{1}(F^{\pm})^{\prime}(\lambda|x-\theta_{2}u_{2}|)(F^ {\pm})^{\prime}(\lambda|y-\theta_{1}u_{1}|)d\theta_{1}d\theta_{2}\] \[\quad\times|u_{1}||u_{2}|[vQA_{0,1}^{1}Qv](u_{2},u_{1})du_{1}du_{ 2},\] where \(\cos\alpha_{1}=\cos\alpha(y,u_{1},\theta_{1})\) and \(\cos\alpha_{2}=\cos\alpha(x,u_{2},\theta_{2})\). Furthermore, we have \[K_{1,N}^{0,\pm}(t;x,y)= \frac{1}{64\pi^{2}}\int_{\mathbb{R}^{6}}\int_{0}^{1}\int_{0}^{1} \Big{(}\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha}\varphi_{0}(2^{-N }\lambda)(F^{\pm})^{\prime}(\lambda|x-\theta_{2}u_{2}|)\] \[(F^{\pm})^{\prime}(\lambda|y-\theta_{1}u_{1}|)d\lambda\Big{)}\cos \alpha_{2}\cos\alpha_{1}d\theta_{1}d\theta_{2}|u_{1}||u_{2}|[vQA_{0,1}^{1}Qv ](u_{2},u_{1})du_{1}du_{2}\] \[:= \frac{1}{64\pi^{2}}\int_{\mathbb{R}^{6}}\int_{0}^{1}\int_{0}^{1} E_{1,N}^{0,\pm}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})\cos\alpha_{2}\cos \alpha_{1}d\theta_{1}d\theta_{2}\] \[\quad\quad\quad\times|u_{1}||u_{2}|[vQA_{0,1}^{1}Qv](u_{2},u_{1} )du_{1}du_{2}. \tag{3.23}\] Now we begin to estimate \(E_{1,N}^{0,\pm}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})\). In fact, let \(s=2^{-N}\lambda\), then \[E_{1,N}^{0,\pm}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})\] \[=2^{(4+2\alpha)N}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{3+2\alpha} \varphi_{0}(s)(F^{\pm})^{\prime}(2^{N}s|x-\theta_{2}u_{2}|)(F^{\pm})^{\prime}( 2^{N}s|y-\theta_{1}u_{1}|)ds.\] Notice that \(s\in\mathrm{supp}\varphi_{0}\subset[\frac{1}{4},1]\), by using integration by parts we have \[|E_{1,N}^{0,\pm}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})|\] \[\lesssim \frac{2^{(4+2\alpha)N}}{1+|t|2^{2N}}\bigg{(}\Big{|}\int_{0}^{ \infty}e^{-it2^{2N}s^{2}}\partial_{s}\big{(}s^{2+2\alpha}\varphi_{0}(s)\big{)} (F^{\pm})^{\prime}(2^{N}s|x-\theta_{2}u_{2}|)(F^{\pm})^{\prime}(2^{N}s|y- \theta_{1}u_{1}|)ds\Big{|}\] \[+\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{2+2\alpha}\varphi_{0} (s)\partial_{s}\big{(}(F^{\pm})^{\prime}(2^{N}s|x-\theta_{2}u_{2}|)(F^{\pm})^{ \prime}(2^{N}s|y-\theta_{1}u_{1}|)\big{)}ds\Big{|}\bigg{)}\] \[:= \frac{2^{(4+2\alpha)N}}{1+|t|2^{2N}}\Big{(}|\mathcal{E}_{1,N}^{0, \pm}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})|+|\mathcal{E}_{2,N}^{0,\pm}(t;x, y,\theta_{1},\theta_{2},u_{1},u_{2})|\Big{)}. \tag{3.24}\] We first compute the second term \(\mathcal{E}_{2,N}^{0,\pm}\). We have \[|\mathcal{E}_{2,N}^{0,\pm}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{ 2})|\] \[\lesssim \Big{(}\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{1+2\alpha} \varphi_{0}(s)\cdot 2^{N}s|x-\theta_{2}u_{2}|(F^{\pm})^{(2)}(2^{N}s|x-\theta_{2}u_{2}|)(F^{ \pm})^{\prime}(2^{N}s|y-\theta_{1}u_{1}|)ds\Big{|}\] \[+\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{1+2\alpha}\varphi_{0} (s)\cdot 2^{N}s|y-\theta_{1}u_{1}|(F^{\pm})^{\prime}(2^{N}s|x-\theta_{2}u_{2}|)(F^{ \pm})^{(2)}(2^{N}s|y-\theta_{1}u_{1}|)ds\Big{|}\] \[:= |\mathcal{E}_{21,N}^{0,\pm}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2} )|+|\mathcal{E}_{22,N}^{0,\pm}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})|.\] For the first term \(\mathcal{E}^{0,\pm}_{21,N}\). 
Let \[F^{\pm}_{1}(p)=e^{\mp ip}(F^{\pm})^{\prime}(p)=\frac{(\pm ip-1)+(p+1)e^{-p\mp ip} }{p^{2}},\] \[F^{\pm}_{2}(p)=pe^{\mp ip}(F^{\pm})^{(2)}(p)=\frac{(2\mp 2ip-p^{2})-(2+2p+p^{2})e ^{-p\mp ip}}{p^{2}}.\] Then \[2^{N}s|x-\theta_{2}u_{2}|(F^{\pm})^{(2)}(2^{N}s|x-\theta_{2}u_{2 }|)(F^{\pm})^{\prime}(2^{N}s|y-\theta_{1}u_{1}|)\] \[=e^{\pm i2^{N}s|x-\theta_{2}u_{2}|}e^{\pm i2^{N}s|y-\theta_{1}u_{ 1}|}F^{\pm}_{2}(2^{N}s|x-\theta_{2}u_{2}|)F^{\pm}_{1}(2^{N}s|y-\theta_{1}u_{1} |).\] Hence, we have \[\mathcal{E}^{0,\pm}_{21,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{ 2})\] \[= \int_{0}^{\infty}e^{-it2^{2N}s^{2}}e^{\pm i2^{N}s(|x-\theta_{2}u _{2}|+|y-\theta_{1}u_{1}|)}s^{1+2\alpha}\varphi_{0}(s)F^{\pm}_{2}(2^{N}s|x- \theta_{2}u_{2}|)F^{\pm}_{1}(2^{N}s|y-\theta_{1}u_{1}|)ds.\] It is easy to check that \[\Big{|}\partial_{s}^{k}\Big{(}F^{\pm}_{2}(2^{N}s|x-\theta_{2}u_{2}|)F^{\pm}_{ 1}(2^{N}s|y-\theta_{1}u_{1}|)\Big{)}\Big{|}\lesssim 1,\,k=0,1.\] By Lemma 2.1 with \(z=(x,y,\theta_{1},\theta_{2},u_{1},u_{2})\) and \(\Psi(z)=|x-\theta_{2}u_{2}|+|y-\theta_{1}u_{1}|\),and \[\Phi(2^{N}s;z)=F^{\pm}_{2}(2^{N}s|x-\theta_{2}u_{2}|)F^{\pm}_{1}(2^{N}s|y- \theta_{1}u_{1}|),\] we obtain that \(\mathcal{E}^{0,\pm}_{21,N}\) is bounded by \((1+|t|2^{2N})\Theta_{N_{0},N}(t)\). Similarly, we obtain that \(\mathcal{E}^{0,\pm}_{22,N}\) is controlled by the same bound. Hence we get that \(\mathcal{E}^{0,\pm}_{2,N}\) is bounded by \((1+|t|2^{2N})\Theta_{N_{0},N}(t)\). Similarly, we obtain that \(\mathcal{E}^{0,\pm}_{1,N}\) is controlled by the same bound. By (3.24) we have \[|E^{0,\pm}_{1,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})|\lesssim 2^{(4+2\alpha )N}\Theta_{N_{0},N}(t). \tag{3.25}\] Since \(|V(x)|\lesssim(1+|x|)^{-7-}\), by using (3.23), (3.25) and Holder's inequality, one has \[|K^{0,\pm}_{1,N}(t;x,y)| \lesssim 2^{(4+2\alpha)N}\Big{(}\|u_{1}v(u_{1})\|_{L^{2}}\|QA^{0}_{ 0,1}Q\|_{L^{2}\to L^{2}}\|u_{2}v(u_{2})\|_{L^{2}}\Big{)}\Theta_{N_{0},N}(t)\] \[\lesssim 2^{(3+2\alpha)N}\Theta_{N_{0},N}(t).\] Finally, by the same argument with the proof of (2.12), we immediately get the desired conclusions. **Proposition 3.11**.: _Assume that \(|V(x)|\lesssim(1+|x|)^{-7-}\). Let \(\Theta_{N_{0},N}(t)\) be a function defined in (2.3) and \(N<N^{\prime}\). Then for each \(x,y\) and \(-\frac{3}{2}<\alpha\leq 0\),_ \[\Big{|}\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha}\varphi_{0}(2^{-N }\lambda)\big{[}R^{\pm}_{0}(\lambda^{4})v\Gamma_{1}(\lambda)vR^{\pm}_{0}( \lambda^{4})\big{]}(x,y)d\lambda\Big{|}\lesssim 2^{(3+2\alpha)N}\Theta_{N_{0},N}(t). \tag{3.26}\] _Moreover,_ \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^ {2}}\lambda^{3+2\alpha}\big{[}R^{\pm}_{0}(\lambda^{4})v\Gamma_{1}(\lambda)vR^ {\pm}_{0}(\lambda^{4})\big{]}(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{3+2 \alpha}{2}}. \tag{3.27}\] Proof.: To get (3.26), it's equivalent to show that \[K^{0,\pm}_{2,N}(t;x,y):=\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha} \varphi_{0}(2^{-N}\lambda)\Big{\langle}[v\Gamma_{1}(\lambda)v]\big{(}R^{\pm}_{ 0}(\lambda^{4})(*,y)\big{)}(\cdot),\ (R^{\pm}_{0})^{*}(\lambda^{4})(x,\cdot)\Big{\rangle}d\lambda\] is bounded by \(2^{(3+2\alpha)N}\mathcal{O}_{N_{0},N}(t)\). Let \(F^{\pm}(p)=\frac{e^{\pm ip}-e^{-p}}{p}(p\geq 0)\), then \(R^{\pm}_{0}(\lambda^{4})(x,y)=\frac{1}{8\pi\lambda}F^{\pm}(\lambda|x-y|)\). 
Hence one has \[\Big{\langle}[v\Gamma_{1}(\lambda)v]\big{(}R^{\pm}_{0}(\lambda^{4 })(*,y)\big{)}(\cdot),\ R^{\mp}_{0}(\lambda^{4})(x,\cdot)\Big{\rangle}\] \[=\frac{1}{64\pi^{2}\lambda^{2}}\Big{\langle}[v\Gamma_{1}(\lambda )v]\big{(}F^{\pm}(\lambda|*-y|)\big{)}(\cdot),\ F^{\mp}(\lambda|x-\cdot|) \Big{\rangle}:=\frac{1}{64\pi^{2}\lambda^{2}}E^{0,\pm}_{2}(\lambda;x,y).\] Let \(\lambda=2^{N}s\), then \[K^{0,\pm}_{2,N}(t;x,y)=\frac{2^{(2+2\alpha)N}}{64\pi^{2}}\int_{0}^{\infty}e^{ -it2^{2N}s^{2}}s^{1+2\alpha}\varphi_{0}(s)E^{0,\pm}_{2}(2^{N}s;x,y)ds.\] Notice that \(s\in\mathrm{supp}\varphi_{0}\subset[\frac{1}{4},1]\), by using integration by parts we have \[|K^{0,\pm}_{2,N}(t;x,y)|\lesssim \frac{2^{(2+2\alpha)N}}{1+|t|2^{2N}}\bigg{(}\Big{|}\int_{0}^{ \infty}e^{-it2^{2N}s^{2}}\partial_{s}\big{(}s^{2\alpha}\varphi_{0}(s)\big{)}E^ {0,\pm}_{2}(2^{N}s;x,y)ds\Big{|} \tag{3.28}\] \[+\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{2}s^{2\alpha}\varphi _{0}(s)\partial_{s}\big{(}E^{0,\pm}_{2}(2^{N}s;x,y)\big{)}ds\Big{|}\bigg{)}\] \[:= \frac{2^{(2+2\alpha)N}}{1+|t|2^{2N}}\Big{(}|\mathcal{E}^{0,\pm}_ {1,N}(t;x,y)|+|\mathcal{E}^{0,\pm}_{2,N}(t;x,y)|\Big{)}.\] We first estimate \(\mathcal{E}^{0,\pm}_{2,N}\). Since \[\partial_{s}\big{(}E^{0,\pm}_{2}(2^{N}s;x,y)\big{)}= \big{\langle}[v\partial_{s}\Gamma_{1}(2^{N}s)v]\big{(}F^{\pm}(2^{ N}s|*-y|)\big{)}(\cdot),\ F^{\mp}(2^{N}s|x-\cdot|)\big{\rangle}\] \[+\big{\langle}[v\Gamma_{1}(2^{N}s)v]\big{(}\partial_{s}F^{\pm}(2 ^{N}s|*-y|)\big{)}(\cdot),\ F^{\mp}(2^{N}s|x-\cdot|)\big{\rangle}\] \[+\big{\langle}[v\Gamma_{1}(2^{N}s)v]\big{(}F^{\pm}(2^{N}s|*-y|) \big{)}(\cdot),\ \partial_{s}F^{\mp}(2^{N}s|x-\cdot|)\big{\rangle}\] \[:= E^{0,\pm}_{21}(2^{N}s;x,y)+E^{0,\pm}_{22}(2^{N}s;x,y)+E^{0,\pm} _{23}(2^{N}s;x,y).\] Then \[\mathcal{E}^{0,\pm}_{2,N}(t;x,y)= \int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{2\alpha}\varphi_{0}(s) \big{(}E^{0,\pm}_{21}+E^{0,\pm}_{22}+E^{0,\pm}_{23}\big{)}(2^{N}s;x,y)\Big{)}ds\] \[:= \mathcal{E}^{0,\pm}_{21,N}(t;x,y)+\mathcal{E}^{0,\pm}_{22,N}(t;x, y)+\mathcal{E}^{0,\pm}_{23,N}(t;x,y).\] For the first term \(\mathcal{E}^{0,\pm}_{21,N}\). Since \[\lambda\|\partial_{\lambda}\Gamma_{1}(\lambda)\|_{L^{2}\to L^{2}}+\lambda^{2} \|\partial_{\lambda}^{2}\Gamma_{1}(\lambda)\|_{L^{2}\to L^{2}}\lesssim\lambda,\] then \[\big{\|}\partial_{s}^{k}\big{(}\partial_{s}\Gamma_{1}(2^{N}s)\big{)}\big{\|}_{L ^{2}\to L^{2}}\lesssim 2^{N},\,k=0,1.\] Similarly, \[\big{|}\partial_{s}^{k}\big{(}E^{0,\pm}_{21}(2^{N}s;x,y)\big{)}\big{|}\lesssim 2 ^{N},\,k=0,1.\] By integration by parts, we obtain that \(\mathcal{E}^{0,\pm}_{21,N}(t;x,y)\) is bounded by \(2^{N}(1+|t|2^{2N})^{-1}\). We turn to compute \(\mathcal{E}^{0,\pm}_{22,N}\). Let \(F_{0}^{\mp}(p)=\frac{1-e^{-p}e^{\pm ip}}{p}\), then \(F^{\mp}(p)=e^{\mp ip}F_{0}^{\mp}(p)\). 
Since \[\partial_{s}F^{\pm}(2^{N}s|*-y|)=2^{N}|*-y|(F^{\pm})^{\prime}(2^{N}s|*-y|):=e^{ \pm i2^{N}s|*-y|}s^{-1}F_{1}^{\pm}(2^{N}s|*-y|),\] where \[F_{1}^{\pm}(p)=pe^{\mp ip}(F^{\pm})^{\prime}(p)=\frac{(\pm ip-1)+(p+1)e^{-p}e^ {\mp ip}}{p},\] thus we have, \[\begin{split} E^{0,\pm}_{22}(2^{N}s;x,y)=& e^{\pm i2^{N}s(|x|+|y|)}s^{-1}\Big{\langle}[v\Gamma_{1}(2^{N}s)v] \big{(}e^{\pm i2^{N}s(|*-y|-|y|)}F_{1}^{\pm}(2^{N}s|*-y|)\big{)}(\cdot),\\ & e^{\mp i2^{N}s(|x-|-|x|)}F_{0}^{\mp}(2^{N}s|x-\cdot|)\Big{\rangle} :=e^{\pm i2^{N}s(|x|+|y|)}s^{-1}\widetilde{E}^{0,\pm}_{22}(2^{N}s;x,y).\end{split}\] Hence, we have \[\begin{split}\mathcal{E}^{0,\pm}_{22,N}(t;x,y)=&\int _{0}^{\infty}e^{-it2^{2N}s^{2}}e^{\pm i2^{N}s(|x|+|y|)}s^{-1+2\alpha}\varphi_ {0}(s)\widetilde{E}^{0,\pm}_{22}(2^{N}s;x,y)ds.\end{split}\] Note that \[\begin{split}&\big{|}\partial_{s}^{k}\big{(}e^{\pm i2^{N}s(|*-y|-|y|) }F_{1}^{\pm}(2^{N}s|*-y|)\big{)}\big{|}\lesssim 2^{kN}\langle*\rangle,\,k=0,1,\\ &\big{|}\partial_{s}^{k}\big{(}e^{\mp i2^{N}s(|x-|-|x|)}F_{0}^{ \mp}(2^{N}s|x-\cdot|)\big{)}\big{|}\lesssim 2^{kN}\langle\cdot\rangle,\,k=0,1. \end{split}\] Since \(|V(x)|\lesssim(1+|x|)^{-7-}\), by Holder's inequality we have \[\big{|}\partial_{s}^{k}\widetilde{E}^{0,\pm}_{22}(2^{N}s;x,y)\big{|}\lesssim \sum_{k=0}^{1}\big{\|}v(\cdot)\langle\cdot\rangle^{1-k}\big{\|}_{L^{2}}^{2} \big{\|}\partial_{s}^{k}\Gamma_{1}(2^{N}s)\big{\|}_{L^{2}\to L^{2}} \lesssim 2^{N}.\] By Lemma 2.1 again with \(z=(x,y)\), \(\Psi(z)=|x|+|y|\) and \(\Phi(2^{N}s;z)=\widetilde{E}^{0,\pm}_{22}(2^{N}s;x,y)\), we obtain that \(\mathcal{E}^{0,\pm}_{22,N}\) is bounded by \(2^{N}(1+|t|2^{2N})\Theta_{N_{0},N}(t)\). Similar to get that \(\mathcal{E}^{0,\pm}_{23,N}\) is controlled by the same bound. Hence we obtain that \(\mathcal{E}^{0,\pm}_{2,N}\) is bounded by \(2^{N}(1+|t|2^{2N})\Theta_{N_{0},N}(t)\). Similarly, we obtain that \(\mathcal{E}^{0,\pm}_{1,N}\) is controlled by the same bounds. By (3.28), we immediately obtain that \(K^{0,\pm}_{2,N}\) is bounded by \(2^{(3+2\alpha)N}\Theta_{N_{0},N}(t)\). Hence we obtain that (3.26) holds. Finally, by the same argument with the proof of (2.12), we immediately get the desired conclusions. #### 3.2.2. **The first kind of resonance** If zero is the first kind of resonance of \(H\), then by using (3.3) and (3.11) one has \[\begin{split} R^{\pm}_{V}(\lambda^{4})=& R^{\pm}_{0}( \lambda^{4})-R^{\pm}_{0}(\lambda^{4})v\Big{(}\lambda^{-1}S_{1}A^{1}_{-1,1}S_{1 }\Big{)}vR^{\pm}_{0}(\lambda^{4})-R^{\pm}_{0}(\lambda^{4})v\\ &\times\Big{(}S_{1}A^{1}_{0,1}+A^{1}_{0,2}S_{1}+QA^{1}_{0,3}Q \Big{)}vR^{\pm}_{0}(\lambda^{4})-R^{\pm}_{0}(\lambda^{4})v\Gamma_{1}(\lambda )vR^{\pm}_{0}(\lambda^{4}).\end{split} \tag{3.29}\] In order to obtain the estimates (3.16), comparing with the analysis of regular case, it is enough to estimate to prove the following proposition. **Proposition 3.12**.: _Assume that \(|V(x|)\lesssim(1+|x|)^{-11-}\). 
Then for any \(-\frac{3}{2}<\alpha\leq 0\),_ \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda^{3+2\alpha}\lambda^{-1}\big{[}R_{0}^{\pm}(\lambda^{4})vS_{1}A_{-1,1}^{1}S_{1}vR_{0}^{\pm}(\lambda^{4})\big{]}(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{3+2\alpha}{2}},\] \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda^{3+2\alpha}\big{[}R_{0}^{\pm}(\lambda^{4})vS_{1}A_{0,1}^{1}vR_{0}^{\pm}(\lambda^{4})\big{]}(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{3+2\alpha}{2}}.\] Proof.: Here we use the orthogonality \(S_{1}v=0\); by the same argument as in the proof of Proposition 3.10, we obtain the desired conclusions.

#### 3.2.3. **The second kind of resonance**

If zero is the second kind of resonance of \(H\), then using (3.3) and (3.12) one has \[R_{V}^{\pm}(\lambda^{4}) =R_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v\Big{(}\lambda^{-3}S_{2}A_{-3,1}^{2}S_{2}\Big{)}vR_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v\Big{(}\lambda^{-2}S_{2}A_{-2,1}^{2}S_{1}\] \[+\lambda^{-2}S_{1}A_{-2,2}^{2}S_{2}\Big{)}vR_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v\Big{(}\lambda^{-1}S_{2}A_{-1,1}^{2}+\lambda^{-1}A_{-1,2}^{2}S_{2}+\lambda^{-1}S_{1}A_{-1,3}^{2}S_{1}\Big{)}\] \[\times vR_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v\Big{(}S_{1}A_{0,1}^{2}+A_{0,2}^{2}S_{1}+QA_{0,3}^{2}Q\Big{)}vR_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v\Gamma_{1}(\lambda)vR_{0}^{\pm}(\lambda^{4}).\] In order to prove Theorem 3.8(iii), in view of the arguments for the regular case and the first kind resonance, it is enough to estimate the integrals involving the following three terms: \[\Omega_{2,1}(\lambda):= R_{0}^{\pm}(\lambda^{4})v\big{(}\lambda^{-3}S_{2}A_{-3,1}^{2}S_{2}\big{)}vR_{0}^{\pm}(\lambda^{4}),\] \[\Omega_{2,2}(\lambda):= R_{0}^{\pm}(\lambda^{4})v\big{(}\lambda^{-2}S_{2}A_{-2,1}^{2}S_{1}\big{)}vR_{0}^{\pm}(\lambda^{4}), \tag{3.30}\] \[\Omega_{2,3}(\lambda):= R_{0}^{\pm}(\lambda^{4})v\big{(}\lambda^{-1}S_{2}A_{-1,1}^{2}\big{)}vR_{0}^{\pm}(\lambda^{4}).\] Since \((F^{\pm})^{\prime}(0)\neq 0\), where \(R_{0}^{\pm}(\lambda^{4})(x,y)=\frac{1}{8\pi\lambda}F^{\pm}(\lambda|x-y|)\), the kernel \(F^{\pm}\) does not satisfy the condition of Lemma 3.9(ii), and hence we cannot make full use of the orthogonality of \(S_{2}\) (i.e. \(S_{2}x_{i}v=0,\ i=1,2,3\)). In order to use the orthogonality \(S_{2}x_{j}v=0\) to improve the time decay of the solution operators \(\cos(t\sqrt{H})\) and \(\frac{\sin(t\sqrt{H})}{\sqrt{H}}\), we will subtract a specific operator so that the conditions of Lemma 3.9(ii) are satisfied; then we can make full use of the orthogonality of \(S_{2}\). Recall that \(G_{0}=-\frac{|x-y|}{8\pi}\), and let \(\widetilde{F}^{\pm}(p)=\frac{e^{\pm ip}-e^{-p}}{p}+p,\ p\in\mathbb{R}\); then \[R_{0}^{\pm}(\lambda^{4})(x,y)-G_{0}=\frac{1}{8\pi\lambda}\widetilde{F}^{\pm}(\lambda|x-y|), \tag{3.31}\] \(\widetilde{F}^{\pm}(p)\in C^{2}(\mathbb{R})\) and \((\widetilde{F}^{\pm})^{\prime}(0)=0\). Hence \(\widetilde{F}^{\pm}(p)\) satisfies the condition of Lemma 3.9(ii). We now begin to estimate the terms \(\Omega_{2,i}(\lambda)\ (i=1,2,3)\) in (3.30). Firstly, we deal with the first term \(\Omega_{2,1}(\lambda)\) in (3.30).
We have \[\Omega_{2,1}(\lambda)= \big{(}R_{0}^{\pm}(\lambda^{4})-G_{0}\big{)}v(\lambda^{-3}S_{2}A_{ -3,1}^{2}S_{2})v\big{(}R_{0}^{\pm}(\lambda^{4})-G_{0}\big{)}+\big{(}R_{0}^{ \pm}(\lambda^{4})-G_{0}\big{)}v(\lambda^{-3}S_{2}A_{-3,1}^{2}S_{2})\] \[\times vG_{0}+G_{0}v(\lambda^{-3}S_{2}A_{-3,1}^{2}S_{2})v\big{(}R_ {0}^{\pm}(\lambda^{4})-G_{0}\big{)}+G_{0}v(\lambda^{-3}S_{2}A_{-3,1}^{2}S_{2})vG _{0}\] \[:= \Gamma_{-3,1}^{2}(\lambda)+\Gamma_{-3,2}^{2}(\lambda)+\Gamma_{-3,3 }^{2}(\lambda)+\Gamma_{-3,4}^{2}(\lambda). \tag{3.32}\] **Proposition 3.13**.: _Assume \(|V(x)|\lesssim(1+|x|)^{-19-}\). Let \(\Gamma^{2}_{-3,j}(\lambda)(j=1,2,3,4)\) be operators defined in (3.32). Then_ \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it \lambda^{2}}\lambda^{3+2\alpha}\Gamma^{2}_{-3,1}(\lambda)(x,y)d\lambda\Big{|} \lesssim|t|^{-\frac{3+2\alpha}{2}},\,-\frac{3}{2}<\alpha\leq 0, \tag{3.33}\] \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e ^{-it\lambda^{2}}\lambda^{3+2\alpha}\Gamma^{2}_{-3,j}(\lambda)(x,y)d\lambda \Big{|}\lesssim|t|^{-\frac{2+2\alpha}{2}},\,-1<\alpha\leq 0,\,j=2,3,\] (3.34) \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e ^{-it\lambda^{2}}\lambda^{3+2\alpha}\Gamma^{2}_{-3,4}(\lambda)(x,y)d\lambda \Big{|}\lesssim|t|^{-\frac{1+2\alpha}{2}},\,-\frac{1}{2}<\alpha\leq 0. \tag{3.35}\] Proof.: We first estimate the first term \(\Gamma^{2}_{-3,1}(\lambda)\). For each \(N\), we write \[\widetilde{K}^{2,\pm}_{1,N}(t;x,y)\] \[=\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha}\varphi_{ 0}(2^{-N}\lambda)\big{[}\big{(}R^{\pm}_{0}(\lambda^{4})-G_{0}\big{)}v(\lambda^ {-3}S_{2}A^{2}_{-3,1}S_{2})v\big{(}R^{\pm}_{0}(\lambda^{4})-G_{0}\big{)}\big{]} (x,y)d\lambda.\] Notice that \[R^{\pm}_{0}(\lambda^{4})(x,y)-G_{0}:=\frac{1}{8\pi\lambda}\widetilde{F}^{\pm}( \lambda|x-y|),\] by Lemma 3.9(ii) and the orthogonality \(S_{2}x_{j}v(x)=S_{2}v(x)=0(j=1,2,3)\), then \[\big{[}\big{(}R^{\pm}_{0}(\lambda^{4})-G_{0}\big{)}v(\lambda^{-3} S_{2}A^{2}_{-3,1}S_{2})v\big{(}R^{\pm}_{0}(\lambda^{4})-G_{0}\big{)}\big{]}(x,y)\] \[= \frac{1}{64\pi^{2}\lambda^{5}}\int_{\mathbb{R}^{6}}\widetilde{F}^ {\pm}(\lambda|x-u_{2}|)v(u_{2})(S_{2}A^{2}_{-3,1}S_{2})(u_{2},u_{1})v(u_{1}) \widetilde{F}^{\pm}(\lambda|y-u_{1}|)du_{1}du_{2}\] \[= \frac{1}{64\pi^{2}\lambda}\int_{\mathbb{R}^{6}}\int_{0}^{1}\int_{ 0}^{1}(1-\theta_{1})(1-\theta_{2})\Big{(}\frac{(\widetilde{F}^{\pm})^{\prime }(\lambda|x-\theta_{2}u_{2}|)}{\lambda|x-\theta_{2}u_{2}|}\sin^{2}\alpha_{2}+( \widetilde{F}^{\pm})^{(2)}(\lambda|x-\theta_{2}u_{2}|)\] \[\times\cos^{2}\alpha_{2}\Big{)}\Big{(}\frac{(\widetilde{F}^{\pm} )^{\prime}(\lambda|y-\theta_{1}u_{1}|)}{\lambda|y-\theta_{1}u_{1}|}\sin^{2} \alpha_{1}+(\widetilde{F}^{\pm})^{(2)}(\lambda|y-\theta_{1}u_{1}|)\cos^{2} \alpha_{1}\Big{)}d\theta_{1}d\theta_{2}\] \[\times|u_{1}|^{2}|u_{2}|^{2}v(u_{2})v(u_{1})(S_{2}A^{2}_{-3,1}S_{2 })(u_{2},u_{1})du_{1}du_{2}.\] Furthermore, we have \[\widetilde{K}^{2,\pm}_{1,N}(t;x,y)\] \[= \frac{1}{64\pi^{2}}\int_{\mathbb{R}^{6}}\int_{0}^{1}\int_{0}^{1} \Big{[}\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{2+2\alpha}\varphi_{0}(2^{-N }\lambda)\Big{(}\frac{(\widetilde{F}^{\pm})^{\prime}(\lambda|x-\theta_{2}u_{2} |)}{\lambda|x-\theta_{2}u_{2}|}\sin^{2}\alpha_{2}\] \[+(\widetilde{F}^{\pm})^{(2)}(\lambda|x-\theta_{2}u_{2}|)\cos^{2} \alpha_{2}\Big{)}\Big{(}\frac{(\widetilde{F}^{\pm})^{\prime}(\lambda|y- \theta_{1}u_{1}|)}{\lambda|y-\theta_{1}u_{1}|}\sin^{2}\alpha_{1}+(\widetilde {F}^{\pm})^{(2)}(\lambda|y-\theta_{1}u_{1}|)\] 
\[\times\cos^{2}\alpha_{1}\Big{)}d\lambda\Big{]}(1-\theta_{1})(1- \theta_{2})|u_{1}|^{2}|u_{2}|^{2}v(u_{2})v(u_{1})(S_{2}A^{2}_{-3,1}S_{2})(u_{2}, u_{1})d\theta_{1}d\theta_{2}du_{1}du_{2}.\] Let \[\widetilde{E}^{2,\pm}_{1,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{ 2})\] \[= \int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{2+2\alpha}\varphi_{0} (2^{-N}\lambda)\Big{(}\frac{(\widetilde{F}^{\pm})^{\prime}(\lambda|x-\theta_{2} u_{2}|)}{\lambda|x-\theta_{2}u_{2}|}\sin^{2}\alpha_{2}+(\widetilde{F}^{\pm})^{(2)}( \lambda|x-\theta_{2}u_{2}|)\cos^{2}\alpha_{2}\Big{)}\] \[\times\Big{(}\frac{(\widetilde{F}^{\pm})^{\prime}(\lambda|y- \theta_{1}u_{1}|)}{\lambda|y-\theta_{1}u_{1}|}\sin^{2}\alpha_{1}+(\widetilde{F} ^{\pm})^{(2)}(\lambda|y-\theta_{1}u_{1}|)\cos^{2}\alpha_{1}\Big{)}d\lambda,\] then we have \[\begin{split}\big{|}\widetilde{K}^{2,\pm}_{1,N}(t;x,y)\big{|}\lesssim& \int_{\mathbb{R}^{6}}\int_{0}^{1}\int_{0}^{1}\big{|}\widetilde{E}^{2,\pm}_{1,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})\big{|}|u_{1}|^{2}|u_{2}|^{2}|v(u_{ 2})v(u_{1})|\\ &\qquad\qquad\qquad\times|(S_{2}A^{2}_{-3,1}S_{2})(u_{2},u_{1})| d\theta_{1}d\theta_{2}du_{1}du_{2}.\end{split} \tag{3.36}\] Let \(s=2^{-N}\lambda\), \(r_{1}=2^{N}|y-\theta_{1}u_{1}|\) and \(r_{2}=2^{N}|x-\theta_{2}u_{2}|\), then \[\begin{split}&\widetilde{E}^{2,\pm}_{1,N}(t;x,y,\theta_{1},\theta_{2 },u_{1},u_{2})\\ =& 2^{(3+2\alpha)N}\int_{0}^{\infty}e^{-it2^{2N}s^{2}} s^{2+2\alpha}\varphi_{0}(s)\Big{(}\frac{(\widetilde{F}^{\pm})^{\prime}(2^{N}s|x- \theta_{2}u_{2}|)}{2^{N}s|x-\theta_{2}u_{2}|}\sin^{2}\alpha_{2}+(\widetilde{F }^{\pm})^{(2)}(2^{N}s|x-\theta_{2}u_{2}|)\\ &\qquad\qquad\times\cos^{2}\alpha_{2}\Big{)}\Big{(}\frac{( \widetilde{F}^{\pm})^{\prime}(2^{N}s|y-\theta_{1}u_{1}|)}{2^{N}s|y-\theta_{1 }u_{1}|}\sin^{2}\alpha_{1}+(\widetilde{F}^{\pm})^{(2)}(2^{N}s|y-\theta_{1}u_{1 }|)\cos^{2}\alpha_{1}\Big{)}ds\\ =& 2^{(3+2\alpha)N}\int_{0}^{\infty}e^{-it2^{2N}s^{2}} s^{2+2\alpha}\varphi_{0}(s)\prod_{j=1}^{2}\Big{(}\frac{(\widetilde{F}^{\pm})^{ \prime}(r_{j}s)}{r_{j}s}\sin^{2}\alpha_{j}+(\widetilde{F}^{\pm})^{(2)}(r_{j}s) \cos^{2}\alpha_{j}\Big{)}ds.\end{split}\] By integration by parts, we obtain that \[\begin{split}&|\widetilde{E}^{2,\pm}_{1,N}(t;x,y,\theta_{1}, \theta_{2},u_{1},u_{2})|\\ \lesssim&\frac{2^{(3+2\alpha)N}}{1+|t|^{22N}}\bigg{(} \Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}\partial_{s}\Big{(}s^{1+2\alpha} \varphi_{0}(s)\Big{)}\prod_{j=1}^{2}\Big{(}\frac{(\widetilde{F}^{\pm})^{ \prime}(r_{j}s)}{r_{j}s}\sin^{2}\alpha_{j}+(\widetilde{F}^{\pm})^{(2)}(r_{j}s) \cos^{2}\alpha_{j}\Big{)}ds\Big{|}\\ &+\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{1+2\alpha}\varphi_{ 0}(s)\partial_{s}\prod_{j=1}^{2}\Big{(}\frac{(\widetilde{F}^{\pm})^{\prime}( r_{j}s)}{r_{j}s}\sin^{2}\alpha_{j}+(\widetilde{F}^{\pm})^{(2)}(r_{j}s)\cos^{2} \alpha_{j}\Big{)}ds\Big{|}\bigg{)}\\ :=&\frac{2^{(3+2\alpha)N}}{1+|t|^{22N}}\Big{(}| \mathcal{E}^{2,\pm}_{1,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})|+|\mathcal{ E}^{2,\pm}_{2,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})|\Big{)}.\end{split} \tag{3.37}\] We first estimate the term \(\mathcal{E}^{2,\pm}_{2,N}\). 
Let \[\partial_{s}\prod_{j=1}^{2}\Big{(}\frac{(\widetilde{F}^{\pm})^{\prime}(r_{j}s)}{r_{j}s}\sin^{2}\alpha_{j}+(\widetilde{F}^{\pm})^{(2)}(r_{j}s)\cos^{2}\alpha_{j}\Big{)}:=e^{\pm ir_{1}s}e^{\pm ir_{2}s}s^{-1}\widetilde{F}^{\pm}_{\alpha_{1},\alpha_{2}}(r_{1}s,r_{2}s),\] then \[\big{|}\mathcal{E}^{2,\pm}_{2,N}\big{|}\lesssim\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}e^{\pm i2^{N}s(|x-\theta_{2}u_{2}|+|y-\theta_{1}u_{1}|)}s^{2\alpha}\varphi_{0}(s)\widetilde{F}^{\pm}_{\alpha_{1},\alpha_{2}}(2^{N}s|x-\theta_{2}u_{2}|,2^{N}s|y-\theta_{1}u_{1}|)ds\Big{|}.\] Note that \[\big{|}\partial_{s}^{k}\widetilde{F}^{\pm}_{\alpha_{1},\alpha_{2}}(2^{N}s|x-\theta_{2}u_{2}|,2^{N}s|y-\theta_{1}u_{1}|)\big{|}\lesssim 1,\,k=0,1,\] by Lemma 2.1 with \(z=(x,y,\theta_{1},\theta_{2},u_{1},u_{2})\), \(\Psi(z)=|x-\theta_{2}u_{2}|+|y-\theta_{1}u_{1}|\), and \[\Phi(2^{N}s,z)=\widetilde{F}^{\pm}_{\alpha_{1},\alpha_{2}}(2^{N}s|x-\theta_{2}u_{2}|,2^{N}s|y-\theta_{1}u_{1}|),\] we obtain that \(\mathcal{E}^{2,\pm}_{2,N}\) is bounded by \((1+|t|2^{2N})\Theta_{N_{0},N}(t)\). Similarly, we get that \(\mathcal{E}^{2,\pm}_{1,N}\) is controlled by the same bound. Hence, by (3.37), \(\widetilde{E}^{2,\pm}_{1,N}\) is bounded by \(2^{(3+2\alpha)N}\Theta_{N_{0},N}(t)\). By (3.36) and Hölder's inequality we obtain that \(\widetilde{K}^{2,\pm}_{1,N}\) is bounded by \(2^{(3+2\alpha)N}\Theta_{N_{0},N}(t)\). Summing over \(N\) as in the proof of (2.12), we immediately obtain (3.33). For the term \(\Gamma^{2}_{-3,2}(\lambda)\), we use the orthogonality \(S_{2}x_{i}v=0,\ i=1,2,3\), for the factor on the left of \(\Gamma^{2}_{-3,2}(\lambda)\) and \(S_{2}v=0\) for the factor on the right; by the same argument as in the proof for the term \(\Gamma^{2}_{-3,1}(\lambda)\), we immediately obtain the desired conclusion. Similarly, we also get the desired integral estimate for the term \(\Gamma^{2}_{-3,3}(\lambda)\). For the term \(\Gamma^{2}_{-3,4}(\lambda)\), we use the orthogonality \(S_{2}v=0\); by the same argument as in the proof of Proposition 3.10, we immediately obtain (3.35). Notice that the integral estimates (3.34) and (3.35) do not hold for \(\alpha=-1\), so we cannot reduce the solution operator \(\frac{\sin(t\sqrt{H})}{\sqrt{H}}\) to the form \(H^{-\frac{1}{2}}e^{-it\sqrt{H}}\) in these terms. In order to obtain the decay estimate of the solution operator \(\frac{\sin(t\sqrt{H})}{\sqrt{H}}\), we need to estimate the following integrals directly: \[K_{t,i}(x,y):=\int_{0}^{\infty}\chi(\lambda)\sin(t\lambda^{2})\lambda\Gamma^{2}_{-3,i}(\lambda)(x,y)d\lambda,\,i=2,3,4. \tag{3.38}\] **Proposition 3.14**.: _Assume that \(|V(x)|\lesssim(1+|x|)^{-19-}\). Let \(K_{t,i}(x,y)\ (i=2,3,4)\) be the integrals defined in (3.38). Then_ \[\sup_{x,y\in\mathbb{R}^{3}}\big{|}K_{t,i}(x,y)\big{|}\lesssim 1,\,i=2,3,\quad\text{and}\quad\sup_{x,y\in\mathbb{R}^{3}}\big{|}K_{t,4}(x,y)\big{|}\lesssim|t|^{\frac{1}{2}}. \tag{3.39}\] Proof.: We only estimate the bound of \(K_{t,4}(x,y)\); the bounds of \(K_{t,i}(x,y)\), \(i=2,3\), follow similarly.
Since \(\Gamma^{2}_{-3,4}(\lambda)=G_{0}v(\lambda^{-3}S_{2}A^{2}_{-3,1}S_{2})vG_{0}\) and \(G_{0}=-\frac{|x-y|}{8\pi}\), we have \[K_{t,4}(x,y)= \int_{0}^{\infty}\chi(\lambda)\sin(t\lambda^{2})\lambda^{-2}[G_{0}vS_{2}A^{2}_{-3,1}S_{2}vG_{0}](x,y)d\lambda\] \[= \frac{1}{64\pi^{2}}\Big{(}\int_{0}^{\infty}\chi(\lambda)\sin(t\lambda^{2})\lambda^{-2}d\lambda\Big{)}\int_{\mathbb{R}^{6}}|x-u_{1}|v(u_{1})(S_{2}A^{2}_{-3,1}S_{2})(u_{1},u_{2})\] \[\times v(u_{2})|y-u_{2}|du_{1}du_{2}.\] Notice that \[\int_{0}^{\infty}\chi(\lambda)\sin(t\lambda^{2})\lambda^{-2}d\lambda=\sqrt{|t|}\int_{0}^{\infty}\chi\Big{(}\sqrt{\frac{u}{t}}\Big{)}\frac{\sin u}{u}\frac{1}{2\sqrt{u}}du,\] hence \[\Big{|}\int_{0}^{\infty}\chi(\lambda)\sin(t\lambda^{2})\lambda^{-2}d\lambda\Big{|}\lesssim|t|^{\frac{1}{2}}.\] For the kernel \([G_{0}vS_{2}A^{2}_{-3,1}S_{2}vG_{0}](x,y)\), by the orthogonality \(S_{2}v=0\) and by Hölder's inequality, we obtain that \[\begin{split}\big{|}[G_{0}vS_{2}A^{2}_{-3,1}S_{2}vG_{0}](x,y)\big{|}&=\frac{1}{64\pi^{2}}\Big{|}\int_{\mathbb{R}^{6}}(|x-u_{1}|-|x|)v(u_{1})(S_{2}A^{2}_{-3,1}S_{2})(u_{1},u_{2})v(u_{2})(|y-u_{2}|-|y|)du_{1}du_{2}\Big{|}\\ &\lesssim\big{\|}|u_{1}|v(u_{1})\big{\|}_{L^{2}}\big{\|}S_{2}A^{2}_{-3,1}S_{2}\big{\|}_{L^{2}\to L^{2}}\big{\|}|u_{2}|v(u_{2})\big{\|}_{L^{2}}\lesssim 1,\end{split}\] uniformly in \(x\) and \(y\). Hence, \[\sup_{x,y\in\mathbb{R}^{3}}\big{|}K_{t,4}(x,y)\big{|}\lesssim|t|^{\frac{1}{2}},\] which gives \[\|K_{t,4}\|_{L^{1}\to L^{\infty}}\lesssim|t|^{\frac{1}{2}}.\] The proof of this proposition is completed. Secondly, we estimate the second term \(\Omega_{2,2}(\lambda)\) in (3.30). We have \[\begin{split}\Omega_{2,2}(\lambda)=&(R_{0}^{\pm}(\lambda^{4})-G_{0})v(\lambda^{-2}S_{2}A_{-2,1}^{2}S_{1})vR_{0}^{\pm}(\lambda^{4})+G_{0}v(\lambda^{-2}S_{2}A_{-2,1}^{2}S_{1})vR_{0}^{\pm}(\lambda^{4})\\ &:=\Gamma_{-2,1}^{2}(\lambda)+\Gamma_{-2,2}^{2}(\lambda).\end{split} \tag{3.40}\] **Proposition 3.15**.: _Assume that \(|V(x)|\lesssim(1+|x|)^{-19-}\). Let \(\Gamma_{-2,j}^{2}(\lambda)\ (j=1,2)\) be the operators defined in (3.40). Then_ \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda^{3+2\alpha}\Gamma_{-2,1}^{2}(\lambda)(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{3+2\alpha}{2}},\,-\frac{3}{2}<\alpha\leq 0, \tag{3.41}\] \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda^{3+2\alpha}\Gamma_{-2,2}^{2}(\lambda)(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{2+2\alpha}{2}},\,-1<\alpha\leq 0. \tag{3.42}\] Proof.: By Lemma 3.9(i)(ii), using the same method as in the proofs of Proposition 3.10 and Proposition 3.13, we obtain (3.41). By Lemma 3.9(i) and the same argument as in the proof of Proposition 3.10, we obtain (3.42). Finally, we deal with the term \(\Omega_{2,3}(\lambda)\) in (3.30). We have \[\begin{split}\Omega_{2,3}(\lambda)=&(R_{0}^{\pm}(\lambda^{4})-G_{0})v(\lambda^{-1}S_{2}A_{-1,1}^{2})vR_{0}^{\pm}(\lambda^{4})+G_{0}v(\lambda^{-1}S_{2}A_{-1,1}^{2})vR_{0}^{\pm}(\lambda^{4})\\ &:=\Gamma_{-1,1}^{2}(\lambda)+\Gamma_{-1,2}^{2}(\lambda).\end{split} \tag{3.43}\] Similarly to the proof of Proposition 3.15, we obtain the following proposition. **Proposition 3.16**.: _Assume that \(|V(x)|\lesssim(1+|x|)^{-19-}\). Let \(\Gamma_{-1,j}^{2}(\lambda)\ (j=1,2)\) be the operators defined in (3.43). Then_ \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda^{3+2\alpha}\Gamma_{-1,1}^{2}(\lambda)(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{3+2\alpha}{2}},\,-\frac{3}{2}<\alpha\leq 0, \tag{3.44}\] \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda^{3+2\alpha}\Gamma_{-1,2}^{2}(\lambda)(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{2+2\alpha}{2}},\,-1<\alpha\leq 0. 
\tag{3.45}\] Notice that the integral estimates (3.42) and (3.45) do not hold for \(\alpha=-1\). Hence, in order to obtain the decay estimate of the solution operator \(\frac{\sin(t\sqrt{H})}{\sqrt{H}}\), we need to compute the following integrals: \[K_{t,5}(x,y):=\int_{0}^{\infty}\chi(\lambda)\sin(t\lambda^{2})\lambda\Gamma_{-2,2}^{2}(\lambda)(x,y)d\lambda, \tag{3.46}\] \[K_{t,6}(x,y):=\int_{0}^{\infty}\chi(\lambda)\sin(t\lambda^{2})\lambda\Gamma_{-1,2}^{2}(\lambda)(x,y)d\lambda. \tag{3.47}\] Similar to the proof of Proposition 3.14, we have the following proposition. **Proposition 3.17**.: _Assume that \(|V(x)|\lesssim(1+|x|)^{-19-}\). Let \(K_{t,5}(x,y)\) and \(K_{t,6}(x,y)\) be the integrals defined in (3.46) and (3.47), respectively. Then_ \[\sup_{x,y\in\mathbb{R}^{3}}\big{|}K_{t,i}(x,y)\big{|}\lesssim 1,\,i=5,6. \tag{3.48}\] **The proof of Theorem 3.8 in the second kind resonance case.** Combining Proposition 3.13-Proposition 3.17 with the proof in the first kind resonance case, we immediately obtain, for \(-\frac{1}{2}<\alpha\leq 0\), \[\big{\|}H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)\chi(H)\big{\|}_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{1+2\alpha}{2}}.\] Let \(F_{t}\) be the operator with the integral kernel \[F_{t}(x,y):=\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda\big{[}\Gamma_{-3,2}^{2}(\lambda)+\Gamma_{-3,3}^{2}(\lambda)+\Gamma_{-3,4}^{2}(\lambda)+\Gamma_{-2,2}^{2}(\lambda)+\Gamma_{-1,2}^{2}(\lambda)\big{]}(x,y)d\lambda,\] where \(\Gamma_{-3,i}^{2}(\lambda)\ (i=2,3,4)\), \(\Gamma_{-2,2}^{2}(\lambda)\) and \(\Gamma_{-1,2}^{2}(\lambda)\) are the operators defined in (3.32), (3.40) and (3.43). Combining with the integral estimates (3.34), (3.35), (3.42) and (3.45), we have \[\|F_{t}\|_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{1}{2}}. \tag{3.49}\] Moreover, by using Proposition 3.13-Proposition 3.17 again, we have \[\big{\|}\cos(t\sqrt{H})P_{ac}(H)\chi(H)-F_{t}\big{\|}_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{3}{2}}.\] Let \(G_{t}\) be the operator with the integral kernel \[G_{t}(x,y):=\sum_{i=2}^{6}K_{t,i}(x,y),\] where \(K_{t,i}(x,y)\ (i=2,\cdots,6)\) are the integrals defined in (3.38), (3.46) and (3.47). By using Proposition 3.14 and Proposition 3.17 again, we obtain that \[\|G_{t}\|_{L^{1}\to L^{\infty}}\lesssim|t|^{\frac{1}{2}}. \tag{3.50}\] Combining with Proposition 3.13-Proposition 3.17 again, we immediately obtain \[\Big{\|}\frac{\sin(t\sqrt{H})}{\sqrt{H}}P_{ac}(H)\chi(H)-G_{t}\Big{\|}_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{1}{2}}.\] Thus the proof of Theorem 3.8 is completed in the second kind resonance case. \(\Box\)

#### 3.2.4. **The third kind of resonance**

If zero is the third kind of resonance of \(H\), then using (3.3) and (3.13) one has \[R_{V}^{\pm}(\lambda^{4})= R_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v\Big{(}\lambda^{-4}S_{3}D_{3}S_{3}\Big{)}vR_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v\Big{(}\lambda^{-3}S_{2}A_{-3,1}^{3}S_{2}\Big{)}vR_{0}^{\pm}(\lambda^{4})\] \[-R_{0}^{\pm}(\lambda^{4})v\Big{(}\lambda^{-2}S_{2}A_{-2,1}^{3}S_{1}+\lambda^{-2}S_{1}A_{-2,2}^{3}S_{2}\Big{)}vR_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v\Big{(}\lambda^{-1}S_{2}A_{-1,1}^{3}+\lambda^{-1}A_{-1,2}^{3}S_{2}\] \[+\lambda^{-1}S_{1}A_{-1,3}^{3}S_{1}\Big{)}vR_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v\Big{(}S_{1}A_{0,1}^{3}+A_{0,2}^{3}S_{1}+QA_{0,3}^{3}Q\Big{)}vR_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})v\Gamma_{1}(\lambda)vR_{0}^{\pm}(\lambda^{4}). \tag{3.51}\] In order to obtain the estimates of Theorem 3.8(iii) in the third kind resonance case, compared with the second kind resonance case we need to analyse what influence the term \(\lambda^{-4}S_{3}D_{3}S_{3}\) has on Stone's formula (3.14).
By a simple calculation, we obtain that \[R_{0}^{+}(\lambda^{4})v\big{(}\lambda^{-4}S_{3}D_{3}S_{3}\big{)}vR_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})v\big{(}\lambda^{-4}S_{3}D_{3}S_{3}\big{)}vR_{0}^{-}(\lambda^{4})\] \[= \big{(}R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})\big{)}v\big{(}\lambda^{-4}S_{3}D_{3}S_{3}\big{)}vR_{0}^{+}(\lambda^{4})+R_{0}^{-}(\lambda^{4})v\big{(}\lambda^{-4}S_{3}D_{3}S_{3}\big{)}v\big{(}R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})\big{)}.\] **Proposition 3.18**.: _Let \(|V(x)|\lesssim(1+|x|)^{-23-}\). Then for any \(x,y\in\mathbb{R}^{3}\) and \(-1<\alpha\leq 0\),_ \[\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda^{3+2\alpha}\big{[}\big{(}R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})\big{)}v\big{(}\lambda^{-4}S_{3}D_{3}S_{3}\big{)}vR_{0}^{+}(\lambda^{4})\big{]}(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{2+2\alpha}{2}},\] \[\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda^{3+2\alpha}\big{[}R_{0}^{-}(\lambda^{4})v\big{(}\lambda^{-4}S_{3}D_{3}S_{3}\big{)}v\big{(}R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})\big{)}\big{]}(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{2+2\alpha}{2}}.\] Proof.: We only estimate the first integral; the second integral estimate is obtained similarly. Let \[K_{1,N}^{3,\pm}(t;x,y):=\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha}\varphi_{0}(2^{-N}\lambda)\big{[}\big{(}R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})\big{)}v\big{(}\lambda^{-4}S_{3}D_{3}S_{3}\big{)}vR_{0}^{+}(\lambda^{4})\big{]}(x,y)d\lambda. \tag{3.52}\] Let \(\bar{R}_{0}(p)=\frac{e^{ip}-e^{-ip}}{p}\) and \(F^{\pm}(p)=\frac{e^{\pm ip}-e^{-p}}{p}.\) Then \(R_{0}^{\pm}(\lambda^{4})=\frac{1}{8\pi\lambda}F^{\pm}(\lambda|x-y|)\), and \[R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})=\frac{1}{8\pi\lambda}\bar{R}_{0}(\lambda|x-y|).\] Note that \(\bar{R}_{0}\in C^{5}(\mathbb{R})\) and \((\bar{R}_{0})^{\prime}(0)=0\), but \((\bar{R}_{0})^{(2)}(0)=-\frac{2i}{3}\neq 0\). Let \[\bar{F}(p)=\bar{R}_{0}(p)+\frac{i}{3}p^{2}=\frac{e^{ip}-e^{-ip}}{p}+\frac{ip^{2}}{3}.\] Hence, we have \(\bar{F}\in C^{5}(\mathbb{R})\) and \((\bar{F})^{(k)}(0)=0,\,k=1,2\).
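Indeed, from the Taylor expansion \[\bar{R}_{0}(p)=\frac{2i\sin p}{p}=2i-\frac{i}{3}p^{2}+\frac{i}{60}p^{4}-\cdots,\] one sees that \(\bar{F}(p)=2i+\frac{i}{60}p^{4}-\cdots\) near \(p=0\), so the first three derivatives of \(\bar{F}\) vanish at the origin; in particular the hypotheses \(\bar{F}^{\prime}(0)=\bar{F}^{\prime\prime}(0)=0\) of Lemma 3.9(iii) are satisfied.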
By the orthogonality \(S_{3}v=S_{3}x_{i}v=S_{3}x_{i}x_{j}v=0(i,j=1,2,3)\), one has \[\big{[}\big{(}R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})\big{)} v\big{(}\lambda^{-4}S_{3}D_{3}S_{3}\big{)}vR_{0}^{+}(\lambda^{4})\big{]}(x,y)\] \[= \frac{1}{64\pi^{2}\lambda^{6}}\int_{\mathbb{R}^{6}}\bar{R}_{0}( \lambda|x-u_{2}|)v(u_{2})(S_{3}D_{3}S_{3})(u_{2},u_{1})v(u_{1})F^{+}(\lambda| y-u_{1}|)du_{1}du_{2}\] \[= \frac{1}{64\pi^{2}\lambda^{6}}\int_{\mathbb{R}^{6}}\big{(}\bar{R} _{0}(\lambda|x-u_{2}|)+\frac{2i}{3}\lambda^{2}|x-u_{2}|^{2}\big{)}v(u_{2})(S_ {3}D_{3}S_{3})(u_{2},u_{1})v(u_{1})\] \[\qquad\times F^{+}(\lambda|y-u_{1}|)du_{1}du_{2}\] \[= \frac{1}{64\pi^{2}\lambda^{6}}\int_{\mathbb{R}^{6}}\bar{F}( \lambda|x-u_{2}|)v(u_{2})(S_{3}D_{3}S_{3})(u_{2},u_{1})v(u_{1})F^{+}(\lambda| y-u_{1}|)du_{1}du_{2}.\] By Lemma 3.9(iii), one has \[\big{[}\big{(}R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})\big{)} v\big{(}\lambda^{-4}S_{3}D_{3}S_{3}\big{)}vR_{0}^{+}(\lambda^{4})\big{]}(x,y)\] \[= -\frac{1}{128\pi^{2}\lambda^{2}}\int_{\mathbb{R}^{6}}\int_{0}^{ 1}\int_{0}^{1}(1-\theta_{2})^{2}\Big{[}\Big{(}\frac{\bar{F}^{\prime}(\lambda|x- \theta_{2}u_{2}|)}{\lambda^{2}|x-\theta_{2}u_{2}|^{2}}-\frac{\bar{F}^{(2)}( \lambda|x-\theta_{2}u_{2}|)}{\lambda|x-\theta_{2}u_{2}|}\Big{)}3\cos\alpha_{2} \sin^{2}\alpha_{2}\] \[\qquad\qquad\qquad-\bar{F}^{(3)}(\lambda|x-\theta_{2}u_{2}|)\cos^ {3}\alpha_{2}\Big{]}(F^{+})^{\prime}(\lambda|y-\theta_{1}u_{1}|)\cos\alpha_{1}d \theta_{1}\theta_{2}\] \[\qquad\qquad\qquad\times|u_{2}|^{3}v(u_{2})v(u_{1})|u_{1}|(S_{3}D _{3}S_{3})(u_{2},u_{1})du_{1}du_{2}.\] Furthermore, we have \[K^{3,\pm}_{1,N}(t;x,y)\] \[=-\frac{1}{128\pi^{2}}\int_{\mathbb{R}^{6}}\int_{0}^{1}\int_{0}^{1} \bigg{[}\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{1+2\alpha}\varphi_{0}(2^{-N} \lambda)\Big{[}\Big{(}\frac{\bar{F}^{\prime}(\lambda|x-\theta_{2}u_{2}|)}{ \lambda^{2}|x-\theta_{2}u_{2}|^{2}}-\frac{\bar{F}^{(2)}(\lambda|x-\theta_{2}u_ {2}|)}{\lambda|x-\theta_{2}u_{2}|}\Big{)}\] \[\times 3\cos\alpha_{2}\sin^{2}\alpha_{2}-\bar{F}^{(3)}(\lambda|x- \theta_{2}u_{2}|)\cos^{3}\alpha_{2}\Big{]}(F^{+})^{\prime}(\lambda|y-\theta_{ 1}u_{1}|)\cos\alpha_{1}d\lambda\bigg{]}(1-\theta_{2})^{2}d\theta_{1}\theta_{2}\] \[\times|u_{2}|^{3}|u_{1}|v(u_{2})v(u_{1})(S_{3}D_{3}S_{3})(u_{2}, u_{1})du_{1}du_{2}\] \[:=-\frac{1}{128\pi^{2}}\int_{\mathbb{R}^{6}}\int_{0}^{1}\int_{0} ^{1}E^{3,+}_{1,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})(1-\theta_{2})^{2}d \theta_{1}\theta_{2}|u_{2}|^{3}|u_{1}|v(u_{2})v(u_{1})\] \[\times(S_{3}D_{3}S_{3})(u_{2},u_{1})du_{1}du_{2}. 
\tag{3.53}\] Let \(s=2^{-N}\lambda\), then \[E^{3,+}_{1,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})\] \[= 2^{(2+2\alpha)N}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{1+2\alpha} \varphi_{0}(s)\Big{[}\Big{(}\frac{\bar{F}^{\prime}(2^{N}s|x-\theta_{2}u_{2}|)} {(2^{N}s)^{2}|x-\theta_{2}u_{2}|^{2}}-\frac{\bar{F}^{(2)}(2^{N}s|x-\theta_{2}u _{2}|)}{2^{N}s|x-\theta_{2}u_{2}|}\Big{)}\] \[\times 3\cos\alpha_{2}\sin^{2}\alpha_{2}-\bar{F}^{(3)}(2^{N}s|x- \theta_{2}u_{2}|)\cos^{3}\alpha_{2}\Big{]}(F^{+})^{\prime}(2^{N}s|y-\theta_{1} u_{1}|)\cos\alpha_{1}ds.\] Let \(r_{1}=2^{N}|y-\theta_{1}u_{1}|\) and \(r_{2}=2^{N}|x-\theta_{2}u_{2}|\), by using integration by parts we have \[\big{|}E^{3,+}_{1,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})\big{|}\] \[\lesssim \frac{2^{(2+2\alpha)N}N}{1+|t|2^{2N}}\bigg{(}\Big{|}\int_{0}^{ \infty}e^{-it2^{2N}s^{2}}\partial_{s}\big{(}s^{2\alpha}\varphi_{0}(s)\big{)} \Big{(}\big{(}\frac{\bar{F}^{\prime}(r_{2}s)}{(r_{2}s)^{2}}-\frac{\bar{F}^{(2) }(r_{2}s)}{r_{2}s}\big{)}3\cos\alpha_{2}\sin^{2}\alpha_{2}\] \[-\bar{F}^{(3)}(r_{2}s)\cos^{3}\alpha_{2}\Big{)}(F^{+})^{\prime}(r _{1}s)\cos\alpha_{1}ds\Big{|}+\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{2 \alpha}\varphi_{0}(s)\] \[\times\partial_{s}\Big{[}\Big{(}\Big{(}\frac{\bar{F}^{\prime}(r_ {2}s)}{(r_{2}s)^{2}}-\frac{\bar{F}^{(2)}(r_{2}s)}{r_{2}s}\Big{)}3\cos\alpha_{2} \sin^{2}\alpha_{2}-\bar{F}^{(3)}(r_{2}s)\cos^{3}\alpha_{2}\Big{)}(F^{+})^{ \prime}(r_{1}s)\cos\alpha_{1}\Big{]}ds\Big{|}\bigg{)}\] \[:= \frac{2^{(2+2\alpha)N}N}{1+|t|2^{2N}}\Big{(}\big{|}\mathcal{E}^{3, +}_{1,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})\big{|}+\big{|}\mathcal{E}^{3, +}_{2,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})\big{|}\Big{)}.\] For \(\big{|}\mathcal{E}^{3,+}_{2,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})\big{|}\). Note that \[\partial_{s}\Big{[}\Big{(}\big{(}\frac{\bar{F}^{\prime}(r_{2}s)} {(r_{2}s)^{2}}-\frac{\bar{F}^{(2)}(r_{2}s)}{r_{2}s}\big{)}3\cos\alpha_{2}\sin^{ 2}\alpha_{2}-\bar{F}^{(3)}(r_{2}s)\cos^{3}\alpha_{2}\Big{)}(F^{+})^{\prime}(r_{ 1}s)\cos\alpha_{1}\Big{]}\] \[:=e^{ir_{1}s}e^{ir_{2}s}s^{-1}\bar{F}_{\alpha_{1},\alpha_{2}}(r_{ 1}s,r_{2}s),\] then we have \[\big{|}\mathcal{E}^{3,+}_{2,N}(t;x,y,\theta_{1},\theta_{2},u_{1},u_{2})\big{|} \lesssim\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}e^{i(r_{1}+r_{2})s}s^{-1+2 \alpha}\varphi_{0}(s)\bar{F}_{\alpha_{1},\alpha_{2}}(r_{1}s,r_{2}s)ds\Big{|}.\] Since \[\big{|}\partial_{s}^{k}\bar{F}_{\alpha_{1},\alpha_{2}}(2^{N}s|x-\theta_{2}u_{2} |,2^{N}s|y-\theta_{1}u_{1}|)\big{|}\lesssim 1,\,k=0,1,\] by Lemma 2.1 with \(z=(x,y,\theta_{1},\theta_{2},u_{1},u_{2})\) and \[\Psi(z)=r_{1}+r_{2}=|x-\theta_{2}u_{2}|+|y-\theta_{1}u_{1}|,\ \ \Phi(2^{N}s,z)=\bar{F}_{\alpha_{1},\alpha_{2}}(2^{N}s|x-\theta_{2}u_{2}|,2^{N}s|y- \theta_{1}u_{1}|),\] we obtain that \(\mathcal{E}^{3,+}_{2,N}\) is bounded by \((1+|t|2^{2N})\Theta_{N_{0},N}(t)\). Similarly \(\mathcal{E}^{3,+}_{1,N}\) is controlled by the same bound. Hence \(E^{3,+}_{1,N}\) is bounded by \(2^{(2+2\alpha)N}\Theta_{N_{0},N}(t)\). By (3.53) and Holder's inequality we obtain that \(K^{3,\pm}_{1,N}\) is bounded by \(2^{(2+2\alpha)N}\Theta_{N_{0},N}(t)\). In the proof of Proposition 3.18, for the projection operator \(S_{3}\) on the right hand of the integral (3.52), we only use the orthogonality \(S_{3}v=0\). As a result, we don't get a decay estimate as well as the regular case and the first kind of resonance case. 
As in the treatment of the second kind of resonance case, we aim to subtract a specific operator in order to recover the same decay rate as in the regular case and the first kind of resonance case. Note that \[R_{0}^{+}(\lambda^{4})v(\lambda^{-4}S_{3}D_{3}S_{3})vR_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})v(\lambda^{-4}S_{3}D_{3}S_{3})vR_{0}^{-}(\lambda^{4})\] \[= \big{(}R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})\big{)}v(\lambda^{-4}S_{3}D_{3}S_{3})vR_{0}^{+}(\lambda^{4})+R_{0}^{-}(\lambda^{4})v(\lambda^{-4}S_{3}D_{3}S_{3})v\big{(}R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})\big{)}\] \[= \big{(}R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})\big{)}v(\lambda^{-4}S_{3}D_{3}S_{3})v\big{(}R_{0}^{+}(\lambda^{4})-G_{0}\big{)}+(R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4}))v(\lambda^{-4}S_{3}D_{3}S_{3})vG_{0}\] \[+\big{(}R_{0}^{-}(\lambda^{4})-G_{0}\big{)}v(\lambda^{-4}S_{3}D_{3}S_{3})v\big{(}R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})\big{)}+G_{0}v(\lambda^{-4}S_{3}D_{3}S_{3})v\big{(}R_{0}^{+}(\lambda^{4})-R_{0}^{-}(\lambda^{4})\big{)}:=\Gamma^{3}_{-4,1}(\lambda)+\Gamma^{3}_{-4,2}(\lambda)+\Gamma^{3}_{-4,3}(\lambda)+\Gamma^{3}_{-4,4}(\lambda). \tag{3.54}\] Hence, in order to complete the proof of Theorem 3.8(iii) in the third kind of resonance case, combined with the analysis of the second kind of resonance case, it suffices to show the following proposition. **Proposition 3.19**.: _Assume that \(|V(x)|\lesssim(1+|x|)^{-23-}\). Let \(\Gamma^{3}_{-4,j}(\lambda)\ (j=1,\cdots,4)\) be the operators defined in (3.54). Then_ \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda^{3+2\alpha}\Gamma^{3}_{-4,j}(\lambda)(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{3+2\alpha}{2}},-\frac{3}{2}<\alpha\leq 0,\,j=1,3,\] \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda^{3+2\alpha}\Gamma^{3}_{-4,j}(\lambda)(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{2+2\alpha}{2}},-1<\alpha\leq 0,\,j=2,4.\] Proof.: The proposition follows by the same arguments as in the proofs of Proposition 3.13 and Proposition 3.18; we omit the details. In order to obtain the estimate for the solution operator \(\frac{\sin(t\sqrt{H})}{\sqrt{H}}\) in the third kind of resonance case, we need to compute the following integral, \[K_{t,7}(x,y):=\int_{0}^{\infty}\chi(\lambda)\sin(t\lambda^{2})\lambda\big{[}\Gamma^{3}_{-4,2}(\lambda)+\Gamma^{3}_{-4,4}(\lambda)\big{]}(x,y)d\lambda. \tag{3.55}\] Similarly to the proof of Proposition 3.14, we have the following proposition. **Proposition 3.20**.: _Assume that \(|V(x)|\lesssim(1+|x|)^{-23-}\). Let \(K_{t,7}(x,y)\) be the integral defined in (3.55). Then_ \[\sup_{x,y\in\mathbb{R}^{3}}\big{|}K_{t,7}(x,y)\big{|}\lesssim 1. 
\tag{3.56}\] **The proof of Theorem 3.8 in the third kind of resonance case.** Let \(\widetilde{F}_{t}\) be an operator with the following integral kernel \[\widetilde{F}_{t}(x,y):=F_{t}(x,y)+\int_{0}^{\infty}\chi(\lambda)e^{-it\lambda^{2}}\lambda\big{[}\Gamma^{3}_{-4,2}(\lambda)+\Gamma^{3}_{-4,4}(\lambda)\big{]}(x,y)d\lambda.\] By using the estimate (3.49) and Proposition 3.19, we obtain that \[\|\widetilde{F}_{t}\|_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{1}{2}}.\] Let \(\widetilde{G}_{t}(x,y)=\sum_{i=1}^{8}K_{t,i}(x,y).\) By using the estimate (3.50) and Proposition 3.20, we obtain that \[\|\widetilde{G}_{t}\|_{L^{1}\to L^{\infty}}\lesssim|t|^{\frac{1}{2}}.\] Combining Proposition 3.18-Proposition 3.20 with the proof of the second kind of resonance case again, we immediately obtain that \[\big{\|}\cos(t\sqrt{H})P_{ac}(H)\chi(H)-\widetilde{F}_{t}\big{\|}_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{3}{2}},\] \[\Big{\|}\frac{\sin(t\sqrt{H})}{\sqrt{H}}P_{ac}(H)\chi(H)-\widetilde{G}_{t}\Big{\|}_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{1}{2}}.\] Thus the proof of Theorem 3.8(iii) is completed. ## 4 High energy dispersive estimates In this section, we are devoted to establishing the decay bounds of Theorem 1.1 and Theorem 1.3 for high energy. By the identities (1.27) it suffices to establish high energy dispersive bounds of \(H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)\) for \(\alpha=-1,0\). Furthermore, it suffices to prove the following theorem. **Theorem 4.1**.: _Let \(|V(x)|\lesssim\langle x\rangle^{-4-}\). Then_ \[\|H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)\widetilde{\chi}(H)\|_{L^{1}\to L^{\infty}}\lesssim|t|^{-\frac{3+2\alpha}{2}},\ \alpha\leq 0. \tag{4.1}\] To complete the proof of Theorem 4.1, we need to use the following Stone's formula, \[H^{\frac{\alpha}{2}}e^{-it\sqrt{H}}P_{ac}(H)\widetilde{\chi}(H)f=\sum_{N=N^{\prime}+1}^{+\infty}\sum_{\pm}\frac{2}{\pi i}\int_{0}^{\infty}e^{-it\lambda^{2}}\varphi_{0}(2^{-N}\lambda)\lambda^{3+2\alpha}R_{V}^{\pm}(\lambda^{4})fd\lambda, \tag{4.2}\] and the resolvent identity (which follows by inserting \(R_{V}^{\pm}=R_{0}^{\pm}-R_{V}^{\pm}VR_{0}^{\pm}\) into \(R_{V}^{\pm}=R_{0}^{\pm}-R_{0}^{\pm}VR_{V}^{\pm}\)), \[R_{V}^{\pm}(\lambda^{4})=R_{0}^{\pm}(\lambda^{4})-R_{0}^{\pm}(\lambda^{4})VR_{0}^{\pm}(\lambda^{4})+R_{0}^{\pm}(\lambda^{4})VR_{V}^{\pm}(\lambda^{4})VR_{0}^{\pm}(\lambda^{4}). \tag{4.3}\] Combining with Proposition 2.2, it is enough to prove the following Proposition 4.2 and Proposition 4.4. **Proposition 4.2**.: _Let \(|V(x)|\lesssim(1+|x|)^{-3-}\). 
Then for \(\alpha\leq 0\),_ \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\widetilde{\chi}(\lambda) e^{-it\lambda^{2}}\lambda^{3+2\alpha}\big{[}R_{0}^{\pm}(\lambda^{4})VR_{0}^{\pm}( \lambda^{4})\big{]}(x,y)d\lambda\Big{|}\lesssim|t|^{-\frac{3+2\alpha}{2}}.\] Proof.: We write \[L_{1,N}^{\pm}(t;x,y):=\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha} \varphi_{0}(2^{-N}\lambda)\big{[}R_{0}^{\pm}(\lambda^{4})VR_{0}^{\pm}( \lambda^{4})\big{]}(x,y)d\lambda.\] Then \[\int_{0}^{\infty}\widetilde{\chi}(\lambda)e^{-it\lambda^{2}}\lambda^{3+2 \alpha}\big{[}R_{0}^{\pm}(\lambda^{4})VR_{0}^{\pm}(\lambda^{4})\big{]}(x,y)d \lambda=\sum_{N=N^{\prime}+1}^{\infty}L_{1,N}^{\pm}(t;x,y).\] Let \(F^{\pm}(p)=\frac{e^{\pm ip}-e^{-p}}{p}\), then \(R_{0}^{\pm}(\lambda^{4})(x,y)=\frac{1}{8\pi\lambda}F^{\pm}(\lambda|x-y|).\) Set \(s=2^{-N}\lambda\), one has \[L_{1,N}^{\pm}(t;x,y)=\int_{\mathbb{R}^{3}}\int_{0}^{\infty}e^{- it\lambda^{2}}\lambda^{3+2\alpha}\varphi_{0}(2^{-N}\lambda)R_{0}^{\pm}(\lambda^{4}) (x,u_{1})V(u_{1})R_{0}^{\pm}(\lambda^{4})(u_{1},y)d\lambda du_{1}\] \[= \frac{2^{(2+2\alpha)N}}{64\pi^{2}}\int_{\mathbb{R}^{3}}\int_{0}^{ \infty}e^{-it2^{2N}s^{2}}s^{1+2\alpha}\varphi_{0}(s)F^{\pm}(2^{N}s|x-u_{1}|)F^ {\pm}(2^{N}s|y-u_{1}|)dsV(u_{1})du_{1}\] \[:= \frac{1}{64\pi^{2}}\int_{\mathbb{R}^{3}}E_{1,N}^{\pm}(t;x,y,u_{1} )V(u_{1})du_{1}.\] By using integration by parts, we have \[\big{|}E_{1,N}^{\pm}(t;x,y,u_{1})\big{|}\lesssim \frac{2^{(2+2\alpha)N}}{1+|t|2^{2N}}\bigg{(}\Big{|}\int_{0}^{ \infty}e^{-it2^{2N}s^{2}}\partial_{s}\big{(}s^{2\alpha}\varphi_{0}(s)\big{)}F^ {\pm}(2^{N}s|x-u_{1}|)F^{\pm}(2^{N}s|y-u_{1}|)ds\] \[+\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{2\alpha}\varphi_{0}( s)\partial_{s}\Big{(}F^{\pm}(2^{N}s|x-u_{1}|)F^{\pm}(2^{N}s|y-u_{1}|)\Big{)}ds \Big{|}\bigg{)}\] \[:= \frac{2^{(2+2\alpha)N}}{1+|t|2^{2N}}\Big{(}|\mathcal{E}_{1,N}^{ \pm}(t;x,y,u_{1})|+|\mathcal{E}_{2,N}^{\pm}(t;x,y,u_{1})|\Big{)}. \tag{4.4}\] For \(\mathcal{E}_{2,N}^{\pm}\). 
Since \[\partial_{s} \Big{(}F^{\pm}(2^{N}s|x-u_{1}|)F^{\pm}(2^{N}s|y-u_{1}|)\Big{)}:=s^ {-1}e^{\pm i2^{N}s(|x-u_{1}|+|y-u_{1}|)}\] \[\times\Big{(}F_{1}^{\pm}(2^{N}s|x-u_{1}|)F_{0}^{\pm}(2^{N}s|y-u_{ 1}|)+F_{0}^{\pm}(2^{N}s|x-u_{1}|)F_{1}^{\pm}(2^{N}s|y-u_{1}|)\Big{)},\] where \[F_{1}^{\pm}(p)=pe^{\mp ip}(F^{\pm})^{\prime}(p)=\frac{(\pm ip-1)+(p+1)e^{-p \mp ip}}{p},\ \ F_{0}^{\pm}(p)=\frac{1-e^{-p\mp ip}}{p}.\] Then we have \[|\mathcal{E}_{2,N}^{\pm}(t;x,y,u_{1})|\lesssim \Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}e^{\pm i2^{N}s(|x-u_{1 }|+|y-u_{1}|)}s^{-1+2\alpha}\varphi_{0}(s)\Big{(}F_{1}^{\pm}(2^{N}s|x-u_{1}|)\] \[\times F_{0}^{\pm}(2^{N}s|y-u_{1}|)+F_{0}^{\pm}(2^{N}s|x-u_{1}|) F_{1}^{\pm}(2^{N}s|y-u_{1}|)\Big{)}ds\Big{|}.\] Since \(N>N_{0}^{\prime}\), then \[|\mathcal{E}_{2,N}^{\pm}(t;x,y)|\lesssim \Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}e^{\pm i2^{N}s(|x-u_{1}| +|y-u_{1}|)}s^{-1+2\alpha}\varphi_{0}(s)\Big{(}F_{1}^{\pm}(2^{N}s|x-u_{1}|)\] \[\times F_{0}^{\pm}(2^{N}s|y-u_{1}|)+F_{0}^{\pm}(2^{N}s|x-u_{1}|)F_ {1}^{\pm}(2^{N}s|y-u_{1}|)\Big{)}ds\Big{|}.\] Noting that for \(k=0,1\), \[\big{|}\partial_{s}^{k}\big{(}F_{1}^{\pm}(2^{N}s|x-u_{1}|)F_{0}^ {\pm}(2^{N}s|y-u_{1}|)\big{)}\big{|}\lesssim 1,\] \[\big{|}\partial_{s}^{k}\big{(}F_{0}^{\pm}(2^{N}s|x-u_{1}|)F_{1}^{ \pm}(2^{N}s|y-u_{1}|)\big{)}\big{|}\lesssim 1,\] then by Lemma 2.1 with \(z=(x,y,u_{1})\), \(\Psi(z)=|x-u_{1}|+|y-u_{1}|\), and \[\Phi(2^{N}s;z)=\Big{(}F_{1}^{\pm}(2^{N}s|x-u_{1}|)F_{0}^{\pm}(2^{N}s|y-u_{1}|)+ F_{0}^{\pm}(2^{N}s|x-u_{1}|)F_{1}^{\pm}(2^{N}s|y-u_{1}|)\Big{)},\] we obtain that \(\mathcal{E}^{\pm}_{2,N}\) is bounded by \((1+|t|2^{2N})\Theta_{N_{0},N}(t)\). Similarly, \(\mathcal{E}^{\pm}_{1,N}\) is controlled by the same bound. By (4.4), we obtain that \(E^{\pm}_{1,N}\) is bounded by \(2^{(3+2\alpha)N}\Theta_{N_{0},N}(t)\). Hence we have \[|L^{\pm}_{1,N}(t;x,y)|\lesssim 2^{(3+2\alpha)N}\Theta_{N_{0},N}(t)\int_{\mathbb{R }^{3}}|V(u_{1})|du_{1}\lesssim 2^{(3+2\alpha)N}\Theta_{N_{0},N}(t).\] Finally, taking the sum the same way we used in the proof of (2.12), we immediately obtain the desired conclusion. In order to deal with the term \(R^{\pm}_{0}(\lambda^{4})VR^{\pm}_{V}(\lambda^{4})VR^{\pm}_{0}(\lambda^{4})\), we need to give a lemma as follows, see [19]. **Lemma 4.3**.: _Let \(k\geq 0\) and \(|V(x)|\lesssim(1+|x|)^{-k-1-}\) such that \(H=\Delta^{2}+V\) has no embedded positive eigenvalues. Then for any \(\sigma>k+\frac{1}{2}\), \(R^{\pm}_{V}(\lambda)\in\mathcal{B}\big{(}L^{2}_{\sigma}(\mathbb{R}^{d}),L^{2} _{-\sigma}(\mathbb{R}^{d})\big{)}\) are \(C^{k}\)-continuous for all \(\lambda>0\). Furthermore,_ \[\big{\|}\partial_{\lambda}R^{\pm}_{V}(\lambda)\big{\|}_{L^{2}_{\sigma}( \mathbb{R}^{d})\to L^{2}_{-\sigma}(\mathbb{R}^{d})}=O\big{(}|\lambda|^{ \frac{-3(k+1)}{4}}\big{)},k=0,1,\text{ as }\lambda\to+\infty.\] **Proposition 4.4**.: _Let \(|V(x)|\lesssim(1+|x|)^{-4-}\). Then for \(\alpha\leq 0\),_ \[\sup_{x,y\in\mathbb{R}^{3}}\Big{|}\int_{0}^{\infty}\widetilde{\chi}(\lambda) e^{-it\lambda^{2}}\lambda^{3+2\alpha}\big{[}R^{\pm}_{0}(\lambda^{4})VR^{\pm}_{V}( \lambda^{4})VR^{\pm}_{0}(\lambda^{4})\big{]}(x,y)d\lambda\Big{|}\lesssim|t|^{ -\frac{3+2\alpha}{2}}. 
\tag{4.5}\] Proof.: In order to get (4.5), it's equivalent to prove that the integral \[L^{\pm}_{2,N}(t;x,y):=\int_{0}^{\infty}e^{-it\lambda^{2}}\lambda^{3+2\alpha} \varphi_{0}(2^{-N}\lambda)\Big{\langle}VR^{\pm}_{V}(\lambda^{4})V\big{(}R^{ \pm}_{0}(\lambda^{4})(*,y)\big{)}(\cdot),\big{(}R^{\pm}_{0}(\lambda^{4}) \big{)}^{*}(x,\cdot)\Big{\rangle}d\lambda\] is bounded by \(2^{(3+2\alpha)N}\Theta_{N_{0},N}(t)\) for \(N>N^{\prime}\). In fact, let \(F^{\pm}(p)=\frac{e^{\pm ip}-e^{-p}}{p}\), then \(R^{\pm}_{0}(\lambda^{4})(x,y)=\frac{1}{8\pi\lambda}F^{\pm}(\lambda|x-y|)\). Hence \[\big{\langle}VR^{\pm}_{V}(\lambda^{4})V(R^{\pm}_{0}(\lambda^{4})( *,y))(\cdot),\ R^{\mp}_{0}(\lambda)(x,\cdot)\big{\rangle}\] \[= \frac{1}{64\pi^{2}\lambda^{2}}\big{\langle}VR^{\pm}_{V}(\lambda^ {4})V\big{(}F^{\pm}(\lambda|*-y|)\big{)}(\cdot),\ F^{\mp}(\lambda|x-\cdot|) \big{\rangle}:=\frac{1}{64\pi^{2}\lambda^{2}}E^{L,\pm}_{2}(\lambda;x,y).\] Let \(s=2^{-N}\lambda\), then \[L^{\pm}_{2,N}(t;x,y):=\frac{2^{(2+2\alpha)N}}{64\pi^{2}}\int_{0}^{\infty}e^{- it2^{2N}s^{2}}s^{1+2\alpha}\varphi_{0}(s)E^{L,\pm}_{2}(2^{N}s;x,y)ds.\] By using integration by parts, we have \[|L^{\pm}_{2,N}(t;x,y)|\lesssim \frac{2^{(2+2\alpha)N}}{1+|t|2^{2N}}\bigg{(}\Big{|}\int_{0}^{ \infty}e^{-it2^{2N}s^{2}}\partial_{s}\big{(}s^{2\alpha}\varphi_{0}(s)\big{)} \partial_{s}E^{L,\pm}_{2}(2^{N}s;x,y)ds\Big{|} \tag{4.6}\] \[+\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{2\alpha}\varphi_{0} (s)\partial_{s}\big{(}E^{L,\pm}_{2}(2^{N}s;x,y)\big{)}ds\Big{|}\bigg{)}\] \[:= \frac{2^{(2+2\alpha)N}}{1+|t|2^{2N}}\Big{(}|\mathcal{E}^{L,\pm}_{ 1,N}(t;x,y)|+|\mathcal{E}^{L,\pm}_{2,N}(t;x,y)|\Big{)}.\] For \(\mathcal{E}^{L,\pm}_{2,N}\). Since \[\partial_{s}\big{(}E^{L,\pm}_{2,N}(2^{N}s;x,y)\big{)}= \Big{\langle}V\partial_{s}\big{(}R_{V}(2^{N}s^{4})\big{)}V\big{(}F^ {\pm}(2^{N}s|*-y|)\big{)}(\cdot),\ F^{\mp}(2^{N}s|x-\cdot|)\Big{\rangle}\] \[+\Big{\langle}VR_{V}(2^{N}s^{4})V\big{(}\partial_{s}F^{\pm}(2^{N} s|*-y|)\big{)}(\cdot),\ F^{\mp}(2^{N}s|x-\cdot|)\Big{\rangle}\] \[+\Big{\langle}VR_{V}(2^{N}s^{4})V\big{(}F^{\pm}(2^{N}s|*-y|)\big{)} (\cdot),\ \partial_{s}F^{\mp}(2^{N}s|x-\cdot|)\Big{\rangle}\] \[:= E^{L,\pm}_{21}(2^{N}s;x,y)+E^{L,\pm}_{22}(2^{N}s;x,y)+E^{L,\pm}_ {23}(2^{N}s;x,y),\] then we have \[\mathcal{E}^{L,\pm}_{2,N}(t;x,y)= \int_{0}^{\infty}e^{-it2^{2N}s^{2}}s^{2\alpha}\varphi_{0}(s) \big{(}E^{L,\pm}_{21}+E^{L,\pm}_{22}+E^{L,\pm}_{23}\big{)}(2^{N}s;x,y)ds \tag{4.7}\] \[:= \mathcal{E}^{\pm,L}_{21,N}(t;x,y)+\mathcal{E}^{\pm,L}_{22,N}(t;x,y)+\mathcal{E}^{\pm,L}_{23,N}(t;x,y).\] We first deal with the first term \(\mathcal{E}^{\pm,L}_{21,N}(t;x,y)\). Let \(\sigma>k+1+\frac{1}{2}\), then \[\big{|}\partial_{\lambda}E^{L,\pm}_{21}(2^{N}s;x,y)\big{|}\lesssim\sum_{k=0}^{ 1}\big{\|}V(\cdot)\langle\cdot\rangle^{\sigma}\big{\|}^{2}_{L^{2}}\big{\|} \partial_{s}^{k+1}R_{V}^{\pm}(2^{4N}s^{4})\big{\|}_{L^{2}_{\sigma}\to L^{2}_{- \sigma}}\lesssim 2^{-N}\lesssim 1,\,k=0,1.\] Note that \(s\in\mathrm{supp}\varphi_{0}\subset[\frac{1}{4},1]\), by using integration by parts again, we obtain that \[\big{|}\mathcal{E}^{\pm,L}_{21,N}(t;x,y)\big{|}\lesssim \frac{1}{1+|t|2^{2N}}\Big{|}\int_{0}^{\infty}e^{-it2^{2N}s^{2}} \partial_{s}\Big{(}s^{-1+2\alpha}\varphi_{0}(s)E^{L,\pm}_{21}(2^{N}s;x,y) \Big{)}ds\Big{|}\] \[\lesssim \frac{1}{1+|t|2^{2N}}.\] Next, we deal with the second term \(\mathcal{E}^{\pm,L}_{22,N}\). 
Let \[F^{\mp}(p):=e^{\mp ip}F^{\mp}_{0}(p),\ F^{\mp}_{0}(p)=\frac{1-e^{-p}e^{\pm ip}} {p},\] then \[\partial_{s}F^{\pm}(2^{N}s|*-y|)=2^{N}|*-y|(F^{\pm})^{\prime}(2^{N}s|*-y|):=e^ {\pm i2^{N}s|*-y|}s^{-1}F^{\pm}_{1}(2^{N}s|*-y|),\] where \[F^{\pm}_{1}(p)=pe^{\mp ip}(F^{\pm})^{\prime}(p)=\frac{(\pm ip-1)+(p+1)e^{-p}e ^{\mp ip}}{p}.\] Thus \[E^{L,\pm}_{22}(2^{N}s;x,y)= e^{\pm i2^{N}s(|x|+|y|)}s^{-1}\Big{\langle}VR_{V}(2^{N}s)V \big{(}e^{\pm i2^{N}s(|*-y|-|y|)}F^{\pm}_{1}(2^{N}s|*-y|)\big{)}(\cdot),\] \[e^{\mp i2^{N}s(|x-|-|x|)}F^{\mp}_{0}(2^{N}s|x-\cdot|)\Big{\rangle} :=e^{\pm i2^{N}s(|x|+|y|)}s^{-1}\widetilde{E}^{L,\pm}_{22}(2^{N}s;x,y).\] Furthermore, \[\mathcal{E}^{\pm,L}_{22,N}(t;x,y)= \frac{2^{(2+2\alpha)N}}{1+|t|2^{2N}}\int_{0}^{\infty}e^{-it2^{2N} s^{2}}e^{\pm i2^{N}s(|x|+|y|)}s^{-1+2\alpha}\varphi_{0}(s)\widetilde{E}^{L,\pm}_{22 }(2^{N}s;x,y)ds.\] Note that \[\big{|}\partial_{s}^{k}\big{(}e^{\pm i2^{N}s(|*-y|-|y|)}F^{\pm}_{1 }(2^{N}s|*-y|)\big{)}\big{|}\lesssim 2^{kN}\langle*\rangle,\,k=0,1,\] \[\big{|}\partial_{s}^{k}\big{(}e^{\mp i2^{N}s(|x-|-|x|)}F^{\mp}_{0 }(2^{N}s|x-\cdot|)\big{)}\big{|}\lesssim 2^{kN}\langle\cdot\rangle,\,k=0,1.\] Since \(|V(x)|\lesssim(1+|x|)^{-4-}\), then by Holder's inequality, we have \[\big{|}\partial_{s}\widetilde{E}_{22}^{L,\pm}(2^{N}s;x,y)\big{|}\lesssim\sum_{k=0 }^{1}2^{(1-k)N}\big{\|}V(\cdot)\langle\cdot\rangle^{\sigma+1-k}\big{\|}_{L^{2} }^{2}\big{\|}\partial_{s}^{k}R_{V}^{\pm}(2^{4N}s^{4})\big{\|}_{L^{2}_{\sigma} \to L^{2}_{-\sigma}}\lesssim 2^{-2N}\lesssim 1.\] By Lemma 2.1 with \(z=(x,y)\), \(\Psi(z)=|x|+|y|\) and \(\Phi(2^{N}s;z)=\widetilde{E}_{22,N}^{L,\pm}(2^{N}s;x,y)\), we get that \(\mathcal{E}_{22,N}^{\pm,L}(t;x,y)\) is bounded by \((1+|t|2^{2N})\Theta_{N_{0},N}(t)\). By the same argument we obtain that \(\mathcal{E}_{23,N}^{\pm,L}(t;x,y)\) is bounded by \((1+|t|2^{2N})\Theta_{N_{0},N}(t)\). By (4.7), we have \[|\mathcal{E}_{2,N}^{\pm,L}(t;x,y)|\lesssim\frac{1}{1+|t|2^{2N}}+(1+|t|2^{2N}) \Theta_{N_{0},N}(t)\lesssim(1+|t|2^{2N})\Theta_{N_{0},N}(t).\] Similarly, we obtain that \(\mathcal{E}_{1,N}^{\pm,L}\) is bounded by \((1+|t|2^{2N})\Theta_{N_{0},N}(t)\). By (4.6), we obtain that \(L^{\pm}_{2,N}\) is bounded by \(2^{(2+2\alpha)N}\Theta_{N_{0},N}(t)\). Thus \(L^{\pm}_{2,N}\) is bounded by \(2^{(3+2\alpha)N}\Theta_{N_{0},N}(t)\). Finally, by the same summing way with the proof of (2.12), we immediately obtain the desired conclusion. ## 5. Appendix In this appendix, we first prove Theorem 3.7, then we give the characterization of zero resonance subspaces \(S_{i}L^{2}(\mathbb{R}^{3})(i=1,2,3)\) according to the distributional solutions to \(H\phi=0\). ### The resolvent expansions of \(\big{(}M^{\pm}(\lambda)\big{)}^{-1}\) for \(\lambda\) near zero In this subsection, we prove Theorem 3.7, i.e. computing the expansions of \(\big{(}M^{\pm}(\lambda)\big{)}^{-1}\) for \(\lambda\) near zero case by case, see [15] and also [19] for regular case. For convenience, we shall use notations \(A^{k}_{i,j}\) denote general \(\lambda\)-independent absolutely bounded operators on \(L^{2}\), and \(O(\lambda^{k})\) denote some \(\lambda\)-dependent operators \(\Gamma(\lambda)\) which satisfy that \[\big{\|}\Gamma(\lambda)\big{\|}_{L^{2}\to L^{2}}+\lambda\big{\|}\partial_{ \lambda}\Gamma(\lambda)\big{\|}_{L^{2}\to L^{2}}+\lambda^{2}\big{\|}\partial_ {\lambda}^{2}\Gamma(\lambda)\big{\|}_{L^{2}\to L^{2}}\lesssim\lambda^{k},\ \lambda>0.\] We emphasize that these notations of operators \(A^{k}_{i,j}\) and \(O(\lambda^{k})\) may vary from line to line. 
Before computing the expansions of \(\big{(}M^{\pm}(\lambda)\big{)}^{-1}\) as \(\lambda\to 0\), we first state the following lemma used frequently, see e.g. [32]. **Lemma 5.1**.: _Let \(A\) be a closed operator and \(S\) be a projection. Suppose \(A+S\) has a bounded inverse. Then \(A\) has a bounded inverse if and only if_ \[a:=S-S(A+S)^{-1}S\] _has a bounded inverse in \(SH\), and in this case_ \[A^{-1}=(A+S)^{-1}+(A+S)^{-1}Sa^{-1}S(A+S)^{-1}.\] Now we begin to compute the asymptotic expansions of \(\big{(}M^{\pm}(\lambda)\big{)}^{-1}\) when \(\lambda\) near zero. Let \(M^{\pm}(\lambda)=\dfrac{\tilde{a}^{\pm}}{\lambda}\widetilde{M}^{\pm}(\lambda)\). By (3.7) one has \[\widetilde{M}^{\pm}(\lambda)= P+\dfrac{\lambda}{\tilde{a}^{\pm}}T+\dfrac{a_{1}^{\pm}}{ \tilde{a}^{\pm}}\lambda^{2}vG_{1}v+\dfrac{a_{3}^{\pm}}{\tilde{a}^{\pm}}\lambda^ {4}vG_{3}v \tag{5.1}\] \[+\dfrac{1}{\tilde{a}^{\pm}}\lambda^{5}vG_{4}v+\sum_{k=5}^{8}\dfrac {a_{k}^{\pm}}{\tilde{a}^{\pm}}\lambda^{k+1}vG_{k}v+O(\lambda^{10}).\] Thus, it suffices to compute the asymptotic expansions of \(\big{(}\widetilde{M}^{\pm}(\lambda)\big{)}^{-1}\) when \(\lambda\) near zero. Let \(Q=I-P\). Since \(vG_{i}v(i=0,1,\cdots,8)\) are bounded operators on \(L^{2}\), then by using (5.1) and Neumann series expansion one has \[\big{(}\widetilde{M}^{\pm}(\lambda)+Q\big{)}^{-1}=I-\sum_{k=1}^{9}\lambda^{k} B_{k}^{\pm}+O(\lambda^{10}), \tag{5.2}\] where \(B_{k}^{\pm}(1\leq k\leq 9)\) are bounded operators in \(L^{2}\) as follows: \[B_{1}^{\pm}= \dfrac{1}{\tilde{a}^{\pm}}T,\,B_{2}^{\pm}=\dfrac{a_{1}^{\pm}}{ \tilde{a}^{\pm}}vG_{1}v-\dfrac{1}{(\tilde{a}^{\pm})^{2}}T^{2},\,B_{3}^{\pm}=- \dfrac{a_{1}^{\pm}}{(\tilde{a}^{\pm})^{2}}(TvG_{1}v+vG_{1}vT)+\dfrac{1}{(\tilde {a}^{\pm})^{3}}T^{3},\] \[B_{4}^{\pm}= \dfrac{a_{3}^{\pm}}{\tilde{a}^{\pm}}vG_{3}v-\dfrac{(a_{1}^{\pm}) ^{2}}{(\tilde{a}^{\pm})^{2}}(vG_{1}v)^{2}+\dfrac{a_{1}^{\pm}}{(\tilde{a}^{\pm })^{3}}(T^{2}vG_{1}v+TvG_{1}vT+vG_{1}vT^{2})-\dfrac{1}{(\tilde{a}^{\pm})^{4}}T^ {4},\] \[B_{5}^{\pm}= \dfrac{1}{\tilde{a}^{\pm}}vG_{4}v-\dfrac{a_{3}^{\pm}}{(\tilde{a}^ {\pm})^{2}}(TvG_{3}v+vG_{3}vT)+\dfrac{(a_{1}^{\pm})^{2}}{(\tilde{a}^{\pm})^{3 }}\big{(}T(vG_{1}v)^{2}+(vG_{1}v)^{2}T+vG_{1}vTvG_{1}v\big{)}\] \[\qquad-\dfrac{a_{1}^{\pm}}{(\tilde{a}^{\pm})^{4}}\big{(}T^{3}vG_{ 1}v+vG_{1}vT^{3}+T^{2}vG_{1}vT+TvG_{1}vT^{2}\big{)}+\dfrac{1}{(\tilde{a}^{\pm })^{5}}T^{5}.\] For the convenience, we list the orthogonality relations of various operators and projections which are used frequently later. \[QD_{0}=D_{0}Q=D_{0}, \tag{5.3}\] \[S_{i}D_{j}=D_{j}S_{i}=S_{i},\,i>j,\,S_{i}D_{j}=D_{j}S_{i}=D_{j},\, i\leq j,\] \[D_{i}D_{j}=D_{j}D_{i}=D_{i},\,i>j,\] \[S_{2}TP=PTS_{2}=S_{2}T=TS_{2}=QvG_{1}vS_{2}=S_{2}vG_{1}vQ=0,\] (5.4) \[QTS_{1}=S_{1}TQ=0,\,vG_{1}vS_{3}=S_{3}vG_{1}v=0,\,S_{2}vG_{3}vS_ {3}=S_{3}vG_{3}vS_{2}=0,\] \[B_{1}^{\pm}S_{2}=S_{2}B_{1}^{\pm}=0,\,QB_{2}^{\pm}S_{2}=S_{2}B_{2 }^{\pm}Q=B_{2}^{\pm}S_{3}=S_{3}B_{2}^{\pm}=0,\] \[S_{2}B_{3}^{\pm}S_{2}=B_{3}^{\pm}S_{3}=S_{3}B_{3}^{\pm}=0,\,S_{2} B_{4}^{\pm}S_{3}=S_{3}B_{4}^{\pm}S_{2}=0.\] Now we turn to the proof of Theorem 3.7 case by case. Here we prove only the assertion for the case \(+\) sign, since the case \(-\) sign proceeds identically. By Lemma 5.1, we know that \(\widetilde{M}^{+}(\lambda)\) is invertible on \(L^{2}\) if and only if \[M_{1}^{+}(\lambda):=Q-Q(\widetilde{M}^{+}(\lambda)+Q)^{-1}Q=\dfrac{\lambda}{ \tilde{a}^{+}}QTQ+\sum_{i=2}^{9}\lambda^{k}QB_{k}^{+}Q+O(\lambda^{10}) \tag{5.6}\] is invertible on \(QL^{2}\). 
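To see how the inversion formula of Lemma 5.1, just used to pass from \(\widetilde{M}^{+}(\lambda)\) to \(M_{1}^{+}(\lambda)\) in (5.6), works in the simplest possible setting, here is a toy finite-dimensional check (added only for illustration; it plays no role in the proofs below). Take \(H=\mathbb{C}^{2}\), \(A=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\) and \(S=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\). Then \(A+S=\begin{pmatrix}1&1\\ 1&0\end{pmatrix}\) is invertible with \((A+S)^{-1}=\begin{pmatrix}0&1\\ 1&-1\end{pmatrix}\), and \(a=S-S(A+S)^{-1}S=S\) is invertible on \(SH\) with \(a^{-1}=S\). The formula of Lemma 5.1 then gives \[A^{-1}=(A+S)^{-1}+(A+S)^{-1}Sa^{-1}S(A+S)^{-1}=\begin{pmatrix}0&1\\ 1&-1\end{pmatrix}+\begin{pmatrix}0&0\\ 0&1\end{pmatrix}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\] which is indeed \(A^{-1}\), since \(A^{2}=I\). 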
**The proof of Theorem 3.7(i)-(iii).** (i) If zero is a regular point of the spectrum of \(H\), then \(QTQ\) is invertible on \(QL^{2}\). Let \(D_{0}=(QTQ)^{-1}\) be an operator on \(QL^{2}\). If \(|V(x)|\lesssim(1+|x|)^{-7-}\), by using (5.6) one has \[M_{1}^{+}(\lambda)=\frac{\lambda}{\tilde{a}^{+}}QTQ+O(\lambda^{2})=\frac{ \lambda}{\tilde{a}^{+}}\Big{(}QTQ+O(\lambda)\Big{)}:=\frac{\lambda}{\tilde{a} ^{+}}\widetilde{M}_{1}^{+}(\lambda).\] By Neumann series, one has \[\big{(}\widetilde{M}_{1}^{+}(\lambda)\big{)}^{-1}=QA_{0,1}^{0}Q+O(\lambda),\] thus, \[\big{(}M_{1}^{+}(\lambda)\big{)}^{-1}=\lambda^{-1}QA_{-1,1}^{0}Q+O(1).\] By using Lemma 5.1, we have \[\big{(}\widetilde{M}^{+}(\lambda)\big{)}^{-1}= \big{(}\widetilde{M}^{+}(\lambda)+Q\big{)}^{-1}+\big{(} \widetilde{M}^{+}(\lambda)+Q\big{)}^{-1}Q\big{(}M_{1}^{+}(\lambda)\big{)}^{- 1}Q\big{(}\widetilde{M}^{+}(\lambda)+Q\big{)}^{-1}\] \[= \lambda^{-1}QA_{-1,1}^{0}Q+O(1).\] Since \(\big{(}M^{+}(\lambda)\big{)}^{-1}=\frac{\lambda}{\tilde{a}^{+}}\big{(} \widetilde{M}^{+}(\lambda)\big{)}^{-1}\), one has \[\big{(}M^{+}(\lambda)\big{)}^{-1}=QA_{0,1}^{0}Q+O(\lambda).\] (ii) If zero is the first kind resonance, then by (5.6) one has \[M_{1}^{+}(\lambda)= \frac{\lambda}{\tilde{a}^{+}}QTQ+\lambda^{2}QB_{2}^{+}Q+\lambda^ {3}QB_{3}^{+}Q+O(\lambda^{4})\] \[= \frac{\lambda}{\tilde{a}^{+}}\Big{(}QTQ+\widetilde{a}^{+}\lambda Q B _{2}^{+}Q+\widetilde{a}^{+}\lambda^{2}QB_{3}^{+}Q+O(\lambda^{3})\Big{)}:= \frac{\lambda}{\tilde{a}^{+}}\widetilde{M}_{1}^{+}(\lambda).\] By the definition of the first kind of resonance, then \(QTQ\) is not invertible on \(QL^{2}\). Let \(S_{1}\) be the Riesz projection onto the kernel of \(QTQ\). Then it is easy to check that \(QTQ+S_{1}\) is invertible on \(QL^{2}\). In this case, we define \(D_{0}=(QTQ+S_{1})^{-1}\) as a bounded operator on \(QL^{2}\). By Neumann series, one has \[\big{(}\widetilde{M}_{1}^{+}(\lambda)+S_{1}\big{)}^{-1}=D_{0}- \lambda B_{1}^{0}-\lambda^{2}B_{2}^{0}+O(\lambda^{3}), \tag{5.7}\] where \(B_{1}^{0}=\tilde{a}^{+}D_{0}B_{2}^{+}D_{0}\) and \(B_{2}^{0}=\widetilde{a}^{+}D_{0}B_{3}^{+}D_{0}-(\tilde{a}^{+})^{2}D_{0}(B_{2}^ {+}D_{0})^{2}\). According to Lemma 5.1, \(\widetilde{M}_{1}^{+}(\lambda)\) has bounded inverse on \(QL^{2}\) if and only if \[M_{2}^{+}(\lambda):= S_{1}-S_{1}\big{(}\widetilde{M}_{1}^{+}(\lambda)+S_{1} \big{)}^{-1}S_{1}=\lambda S_{1}B_{1}^{0}S_{1}+\lambda^{2}S_{1}B_{2}^{0}S_{1} +O(\lambda^{3}) \tag{5.8}\] has bounded inverse on \(S_{1}L^{2}\). Note that \(S_{1}T^{2}S_{1}=S_{1}T(P+Q)TS_{1}=S_{1}TPTS_{1}\), then \[S_{1}B_{1}^{0}S_{1}=-\frac{1}{\tilde{a}^{+}}\Big{(}S_{1}TPTS_{1}-\frac{\|V\|_ {L^{1}}}{3\cdot(8\pi)^{2}}S_{1}vG_{1}vS_{1}\Big{)}:=-\frac{1}{\tilde{a}^{+}}T_ {1}. \tag{5.9}\] Then \[M_{2}^{+}(\lambda)=-\frac{\lambda}{\tilde{a}^{+}}\Big{(}T_{1}- \tilde{a}^{+}\lambda S_{1}B_{2}^{0}S_{1}+O(\lambda^{2})\Big{)}:=-\frac{\lambda} {\tilde{a}^{+}}\widetilde{M}_{2}^{+}(\lambda). \tag{5.10}\] By the definition of the first kind resonance, then \(T_{1}\) is invertible on \(S_{1}L^{2}\). Let \(D_{1}=T_{1}^{-1}\) be an operator on \(S_{1}L^{2}\), then \(S_{1}D_{1}=D_{1}S_{1}=D_{1}\). 
By using Neumann series, one has \[\big{(}\widetilde{M}_{2}^{+}(\lambda)\big{)}^{-1}=S_{1}A_{0,1}^{1}S_{1}+ \lambda S_{1}A_{1,1}^{1}S_{1}+O(\lambda^{2}).\] By (5.10), then we have \[\big{(}M_{2}^{+}(\lambda)\big{)}^{-1}=\lambda^{-1}S_{1}A_{-1,1}^{1}S_{1}+S_{1}A_{ 0,1}^{1}S_{1}+O(\lambda).\] By using Lemma 5.1, one has \[\big{(}\widetilde{M}_{1}^{+}(\lambda)\big{)}^{-1}=\lambda^{-1}S_{1}A_{-1,1}^{1 }S_{1}+QA_{0,1}^{1}Q+O(\lambda).\] By the same argument with the proof of the regular case, we obtain that \[\big{(}M^{+}(\lambda)\big{)}^{-1}=\lambda^{-1}S_{1}A_{-1,1}^{1}S_{1}+\big{(}S_ {1}A_{0,1}^{1}+A_{0,2}^{1}S_{1}+QA_{0,3}^{1}Q\big{)}+O(\lambda).\] (iii) If there is a resonance of second kind at zero, then by (5.6) one has \[M_{1}^{+}(\lambda)= \frac{\lambda}{\widetilde{a}^{+}}QTQ+\sum_{k=2}^{7}\lambda^{k}QB _{k}^{+}Q+O(\lambda^{8})\] \[= \frac{\lambda}{\widetilde{a}^{+}}\Big{(}QTQ+\sum_{k=2}^{7} \widetilde{a}^{+}\lambda^{k-1}QB_{k}^{+}Q+O(\lambda^{7})\Big{)}:=\frac{ \lambda}{\widetilde{a}^{+}}\widetilde{M}_{1}^{+}(\lambda).\] By Neumann series, then \[\big{(}\widetilde{M}_{1}^{+}(\lambda)+S_{1}\big{)}^{-1}=D_{0}-\sum_{k=1}^{6} \lambda^{k}B_{k}^{0}+O(\lambda^{7}),\] where \(B_{k}^{0}(k=1,\cdots,6)\) are bounded operators in \(QL^{2}\) as follows: \[B_{1}^{0}= \tilde{a}^{+}D_{0}B_{2}^{+}D_{0},\,B_{2}^{0}=\widetilde{a}^{+}D_ {0}B_{3}^{+}D_{0}-(\tilde{a}^{+})^{2}D_{0}(B_{2}^{+}D_{0})^{2},\] \[B_{3}^{0}= \widetilde{a}^{+}D_{0}B_{4}^{+}D_{0}-(\widetilde{a}^{+})^{2}(D_{ 0}B_{2}^{+}D_{0}B_{3}^{+}D_{0}+D_{0}B_{3}^{+}D_{0}B_{2}^{+}D_{0})+(\widetilde{ a}^{+})^{3}D_{0}(B_{2}^{+}D_{0})^{3},\] \[B_{4}^{0}= \widetilde{a}^{+}D_{0}B_{5}^{+}D_{0}-(\widetilde{a}^{+})^{2}\big{(} D_{0}B_{2}^{+}D_{0}B_{4}^{+}D_{0}+D_{0}B_{4}^{+}D_{0}B_{2}^{+}D_{0}+D_{0}(B_{3}^{+ }D_{0})^{2}\big{)}\] \[+(\widetilde{a}^{+})^{3}\big{(}D_{0}(B_{2}^{+}D_{0})^{2}B_{3}^{+} D_{0}+D_{0}B_{2}^{+}D_{0}B_{3}^{+}D_{0}B_{2}^{+}D_{0}+D_{0}B_{3}^{+}D_{0}(B_{2}^{+ }D_{0})^{2}\big{)}\] \[-(\widetilde{a}^{+})^{4}D_{0}(B_{2}^{+}D_{0})^{4}.\] Furthermore, we obtain the more detail expansion of \(M_{2}^{+}(\lambda)\) as follows: \[M_{2}^{+}(\lambda)= -\frac{\lambda}{\tilde{a}^{+}}T_{1}+\sum_{k=2}^{6}\lambda^{k}S_{1 }B_{k}^{0}S_{1}+O(\lambda^{7})\] \[= -\frac{\lambda}{\tilde{a}^{+}}\Big{(}T_{1}-\tilde{a}^{+}\sum_{k=2} ^{6}\lambda^{k-1}S_{1}B_{k}^{0}S_{1}+O(\lambda^{6})\Big{)}:=-\frac{\lambda}{ \widetilde{a}^{+}}\widetilde{M}_{2}^{+}(\lambda).\] By the definition of the second kind resonance of \(H\), then \(T_{1}\) is not invertible on on \(S_{1}L^{2}\). Let \(S_{2}\) is the Riesz projection onto the kernel of \(T_{1}\), then \(T_{1}+S_{2}\) is invertible on \(S_{1}L^{2}\). In this case, let \(D_{1}=(T_{1}+S_{2})^{-1}\) be an operator on \(S_{1}L^{2}\). 
By Neumann series, one has \[\big{(}\widetilde{M}_{2}^{+}(\lambda)+S_{2}\big{)}^{-1}=D_{1}-\sum_{k=1}^{5} \lambda^{k}B_{k}^{1}+O(\lambda^{6}), \tag{5.11}\] where \(B_{k}^{1}(k=1,\cdots,5)\) are bounded operators in \(S_{1}L^{2}\) as follows: \[B_{1}^{1}= -\tilde{a}^{+}D_{1}B_{2}^{0}D_{1},\,B_{2}^{1}=-\widetilde{a}^{+}D_ {1}B_{3}^{0}D_{1}-(\tilde{a}^{+})^{2}D_{1}(B_{2}^{0}D_{1})^{2},\] \[B_{3}^{1}= -\widetilde{a}^{+}D_{1}B_{4}^{0}D_{1}-(\widetilde{a}^{+})^{2}(D_ {1}B_{2}^{0}D_{1}B_{3}^{0}D_{1}+D_{1}B_{3}^{0}D_{1}B_{2}^{0}D_{1})-(\widetilde{ a}^{+})^{3}D_{1}(B_{2}^{0}D_{1})^{3}.\] According to Lemma 5.1, \(\widetilde{M}_{2}^{+}(\lambda)\) has bounded inverse on \(S_{1}L^{2}\) if and only if \[M_{3}^{+}(\lambda):=S_{2}-S_{2}\big{(}\widetilde{M}_{2}^{+}(\lambda)+S_{2}\big{)} ^{-1}S_{2}=\sum_{k=1}^{5}\lambda^{k}S_{2}B_{k}^{1}S_{2}+O(\lambda^{6}) \tag{5.12}\] has bounded inverse on \(S_{2}L^{2}\). Using the orthogonality (5.3)-(5.5), we obtain that \(S_{2}B_{1}^{1}S_{2}=0\) and \[\begin{split} S_{2}B_{2}^{1}S_{2}=&-\tilde{a}^{+} a_{3}^{+}\Big{(}S_{2}vG_{3}vS_{2}+\frac{10}{3\|V\|_{L^{1}}}S_{2}(vG_{1}v)^{2}S_{2} \\ &-\frac{10}{3\|V\|_{L^{1}}}S_{2}vG_{1}vTD_{1}TvG_{1}vS_{2}\Big{)} :=-\tilde{a}^{+}a_{3}^{+}T_{2}.\end{split} \tag{5.13}\] Furthermore, one has \[\begin{split} M_{3}^{+}(\lambda)=&-\tilde{a}^{+}a_{ 3}^{+}\lambda^{2}T_{2}+\sum_{k=3}^{5}\lambda^{k}S_{2}B_{k}^{1}S_{2}+O(\lambda^ {6})\\ =&-\tilde{a}^{+}a_{3}^{+}\lambda^{2}\Big{(}T_{2}- \frac{1}{\tilde{a}^{+}a_{3}^{+}}\sum_{k=3}^{5}\lambda^{k-2}S_{2}B_{k}^{1}S_{2 }+O(\lambda^{4})\Big{)}:=-\tilde{a}^{+}a_{3}^{+}\lambda^{2}\widetilde{M}_{3}^ {+}(\lambda).\end{split} \tag{5.14}\] By the definition of the second kind resonance of the spectrum of \(H\), then \(T_{2}\) is invertible on \(S_{2}L^{2}\). In this case, we define \(D_{2}=T_{2}^{-1}\) as an operator on \(S_{2}L^{2}\), then \(S_{2}D_{2}=D_{2}S_{2}=D_{2}\). Using Neumann series, one has \[\big{(}\widetilde{M}_{3}^{+}(\lambda)\big{)}^{-1}= S_{2}A_{0,1}^{2}S_{2}+\lambda S_{2}A_{1,1}^{2}S_{2}+\lambda^{2}S_{2}A_{2,1 }^{2}S_{2}+\lambda^{3}S_{2}A_{3,1}^{2}S_{2}+O(\lambda^{4}).\] Moreover, \[\big{(}M_{3}^{+}(\lambda)\big{)}^{-1}= \lambda^{-2}S_{2}A_{-2,1}^{2}S_{2}+\lambda^{-1}S_{2}A_{-1,1}^{2} S_{2}+S_{2}A_{0,1}^{2}S_{2}+\lambda S_{2}A_{1,1}^{2}S_{2}+O(\lambda^{2}).\] By using Lemma 5.1, one has \[\big{(}\widetilde{M}_{2}^{+}(\lambda)\big{)}^{-1}=\lambda^{-2}S_{2}A_{-2,1}^{2 }S_{2}+\lambda^{-1}\big{(}S_{2}A_{-1,1}^{2}S_{1}+S_{1}A_{-1,2}^{2}S_{2}\big{)} +S_{1}A_{0,1}^{2}S_{1}+\lambda S_{1}A_{1,1}^{2}S_{1}+O(\lambda^{2}).\] By the same argument with the proof of the first kind resonance, we obtain that \[\begin{split}\big{(}M^{+}(\lambda)\big{)}^{-1}=& \lambda^{-3}S_{2}A_{-3,1}^{2}S_{2}+\lambda^{-2}\big{(}S_{2}A_{-2,1 }^{2}S_{1}+S_{1}A_{-2,2}^{2}S_{2}\big{)}+\lambda^{-1}\big{(}S_{2}A_{-1,1}^{2} +A_{-1,2}^{2}S_{2}\\ &+S_{1}A_{-1,3}^{2}S_{1}\big{)}+\big{(}S_{1}A_{0,1}^{2}+A_{0,1}^{ 2}S_{1}+QA_{0,3}^{2}Q\big{)}+O(\lambda).\end{split}\] \(\Box\) Before proving Theorem 3.7(iv), we give a lemma as follows, see [15]. **Lemma 5.2**.: _If \(V(x)\lesssim(1+|x|)^{-23-}\), then ker\((S_{3}vG_{4}vS_{3})=\{0\}\). 
As a result, \(T_{3}=S_{3}vG_{4}vS_{3}\) is invertible on \(S_{3}L^{2}\)._ **The proof of Theorem 3.7(iv)** If there is a resonance of the third kind at zero, then by the same argument, we obtain the more detail expansion of \(M_{3}^{+}(\lambda)\) as follows: \[\begin{split} M_{3}^{+}(\lambda)=&-\tilde{a}^{+}a_{ 3}^{+}\lambda^{2}T_{2}+\sum_{k=3}^{7}\lambda^{k}S_{2}B_{k}^{1}S_{2}+O(\lambda^ {8})\\ =&-\tilde{a}^{+}a_{3}^{+}\lambda^{2}\Big{(}T_{2}- \frac{1}{\tilde{a}^{+}a_{3}^{+}}\sum_{k=3}^{7}\lambda^{k-2}S_{2}B_{k}^{1}S_{2 }+O(\lambda^{6})\Big{)}:=-\tilde{a}^{+}a_{3}^{+}\lambda^{2}\widetilde{M}_{3}^ {+}(\lambda).\end{split}\] By the definition of the third kind resonance of the spectrum of \(H\), \(T_{2}\) is not invertible on on \(S_{2}L^{2}\). Let \(S_{3}\) is the Riesz projection onto the kernel of \(T_{2}\), then \(T_{2}+S_{3}\) is invertible on \(S_{2}L^{2}\). In this case, let \(D_{2}=(T_{2}+S_{3})^{-1}\) be an operator on \(S_{2}L^{2}\). By Neumann series, one has \[\big{(}\widetilde{M}_{3}^{+}(\lambda)+S_{3}\big{)}^{-1}=D_{2}-\sum_{k=1}^{5} \lambda^{k}B_{k}^{2}+O(\lambda^{6}), \tag{5.15}\] where \(B_{k}^{2}(k=1,\cdots,5)\) are bounded operators in \(S_{2}L^{2}\), and \(B_{1}^{2}=-\frac{1}{\tilde{a}^{+}a_{3}^{+}}D_{2}B_{3}^{1}D_{2}\). According to Lemma 5.1, \(\widetilde{M}_{3}^{+}(\lambda)\) has bounded inverse on \(S_{2}L^{2}\) if and only if \[M_{4}^{+}(\lambda):=S_{3}-S_{3}\big{(}\widetilde{M}_{3}^{+}(\lambda)+S_{3} \big{)}^{-1}S_{3}=\sum_{k=1}^{5}\lambda^{k}S_{3}B_{k}^{2}S_{3}+O(\lambda^{6}) \tag{5.16}\] has bounded inverse on \(S_{3}L^{2}\). Using orthogonality (5.3)-(5.5), we have \[S_{3}B_{1}^{2}S_{3}=\frac{1}{a_{3}^{+}}S_{3}vG_{4}vS_{3}:=\frac{1}{a_{3}^{+}} T_{3}. \tag{5.17}\] Moreover, we have \[\begin{split} M_{4}^{+}(\lambda)=&\frac{\lambda}{a_ {3}^{+}}T_{3}+\sum_{k=2}^{5}\lambda^{k}S_{3}B_{k}^{2}S_{3}+O(\lambda^{6})\\ =&\frac{\lambda}{a_{3}^{+}}\Big{(}T_{3}+a_{3}^{+} \sum_{k=2}^{5}\lambda^{k-1}S_{3}B_{k}^{2}S_{3}+O(\lambda^{5})\Big{)}:=\frac{ \lambda}{a_{3}^{+}}\widetilde{M}_{4}^{+}(\lambda).\end{split} \tag{5.18}\] By Lemma 5.2, then \(T_{3}\) is always invertible on \(S_{3}L^{2}\), let \(D_{3}=T_{3}^{-1}\) be an operator on \(S_{3}L^{2}\), then \(S_{3}D_{3}=D_{3}S_{3}=D_{3}\). By Neumann series, one has \[\big{(}\widetilde{M}_{4}^{+}(\lambda)\big{)}^{-1}=S_{3}D_{3}S_{3}+\sum_{k=1}^ {4}\lambda^{k}S_{3}A_{k,1}^{3}S_{3}+O(\lambda^{5}).\] Moreover, we have \[\big{(}M_{4}^{+}(\lambda)\big{)}^{-1}=\lambda^{-1}S_{3}A_{-1,1}^{3}S_{3}+S_{3} A_{0,1}^{3}S_{3}+\lambda S_{3}A_{1,1}^{3}S_{3}+\lambda^{2}S_{3}A_{2,1}^{3}S_{3}+ \lambda^{3}S_{3}A_{3,1}^{3}S_{3}+O(\lambda^{4}).\] By using Lemma 5.1, one has \[\big{(}\widetilde{M}_{3}^{+}(\lambda)\big{)}^{-1}=\lambda^{-1}S_{3}A_{-1,1}^{3 }S_{3}+S_{2}A_{0,1}^{3}S_{2}+\lambda S_{2}A_{1,1}^{3}S_{2}+\lambda^{2}S_{2}A_ {2,1}^{3}S_{2}+\lambda^{3}S_{2}A_{3,1}^{3}S_{2}+O(\lambda^{4})\] By the same argument with the proof of the second kind resonance, we obtain that \[\begin{split}\big{(}M^{+}(\lambda)\big{)}^{-1}=& \lambda^{-4}S_{3}A_{-4,1}^{3}S_{3}+\lambda^{-3}S_{2}A_{-3,1}^{3}S_{ 2}+\lambda^{-2}\big{(}S_{2}A_{-2,1}^{3}S_{1}+S_{1}A_{-2,2}^{3}S_{2}\big{)}\\ &+\lambda^{-1}\big{(}S_{2}A_{-1,1}^{3}+A_{-1,2}^{3}S_{2}+S_{1}A_ {-1,3}^{3}S_{1}\big{)}+\big{(}S_{1}A_{0,1}^{3}+A_{0,1}^{3}S_{1}+QA_{0,3}^{3}Q \big{)}+O(\lambda).\end{split}\] Here, it is easy check that \(A_{-4,1}^{3}=(S_{3}vG_{4}vS_{3})^{-1}=D_{3}\) which doesn't depend on \(\pm\)sign. Thus, the proof of Theorem 3.7 is completed. 
### The classification of threshold spectral subspaces In this subsection, we give the characterizations of the zero resonance subspaces \(S_{i}L^{2}(\mathbb{R}^{3})(i=1,2,3)\) according to the distributional solutions to \(H\phi=0\) in the weighted \(L^{2}\) space. Such characterizations of resonances were first obtained in terms of \(L^{p}\) spaces, see e.g. Erdogan, Green, Toprak [15]. Since the proofs are similar, we just list these results. For \(\sigma\in\mathbb{R}\), we define \(W_{\sigma}(\mathbb{R}^{3}):=\bigcap_{s>\sigma}L^{2}_{-s}(\mathbb{R}^{3})\), which is increasing in \(\sigma\) and satisfies \(L^{2}_{-\sigma}(\mathbb{R}^{3})\subset W_{\sigma}(\mathbb{R}^{3})\). In particular, \(f\in W_{\frac{3}{2}}(\mathbb{R}^{3})\) if \(f\in L^{\infty}(\mathbb{R}^{3})\). **Theorem 5.3**.: _Assume that \(|V(x)|\lesssim(1+|x|)^{-\beta}\) with \(\beta>0\)._ * _If_ \(\beta>11\)_, then_ \(f\in S_{1}L^{2}\setminus\{0\}\) _if and only if_ \(f=Uv\phi\) _with_ \(0\neq\phi\in W_{\frac{3}{2}}\) _( or_ \(\phi\in L^{\infty}\) _) such that_ \(H\phi=0\) _in the distributional sense, and_ \(\phi=-G_{0}vf+c_{0},\) _where_ \(c_{0}=\|V\|_{L^{1}}^{-1}\langle v,Tf\rangle\)_._ * _If_ \(\beta>19\)_, then_ \(f\in S_{2}L^{2}\setminus\{0\}\) _if and only if_ \(f=Uv\phi\) _with_ \(\phi\in W_{\frac{1}{2}}\) _( or_ \(\phi\in L^{p}\bigcap L^{\infty}\) _for any_ \(p>3\) _) such that_ \(H\phi=0\) _in the distributional sense, and_ \(\phi=-G_{0}vf\)_._ * _If_ \(\beta>23\)_, then_ \(f\in S_{3}L^{2}\setminus\{0\}\) _if and only if_ \(f=Uv\phi\) _with_ \(\phi\in L^{2}\bigcap L^{\infty}\) _such that_ \(H\phi=0\) _in the distributional sense, and_ \(\phi=-G_{0}vf\)_._ **Acknowledgments:** Avy Soffer is partially supported by NSF-DMS (No. 2205931) and the Simons Foundation (No. 395767). Xiaohua Yao is partially supported by NSFC (No. 11771165, 12171182). We would like to thank Dr. Zijun Wan for her useful discussions.
2309.06397
Compositional Separation of Control Flow and Data Flow
Every Model of High-Level Computation (MHC) has an underlying composition mechanism for combining simple computation devices into more complex ones. Composition can be done by (explicitly or implicitly) defining control flow, data flow or any combination thereof. Control flow specifies the order in which individual computations are activated, whereas data flow defines how data is exchanged among them. Unfortunately, traditional MHCs either mix data and control or only consider one dimension explicitly, which makes it difficult to reason about data flow and control flow separately. Reasoning about these dimensions orthogonally is a crucial desideratum for optimisation, maintainability and verification purposes. In this paper, we introduce a novel MHC that explicitly treats data flow and control flow as separate dimensions, while providing modularity. As the model is rooted in category theory, it provides category-theoretic operations for compositionally constructing sequential, parallel, branchial or iterative composites. Compositionality entails that a composite exhibits the same properties as its respective constituents, including separation of concerns and modularity. We conclude the paper by demonstrating how our proposal can be used to model high-level computations in two different application domains: software engineering and artificial intelligence.
Damian Arellanes
2023-09-12T16:59:40Z
http://arxiv.org/abs/2309.06397v3
# Compositional Separation of Control Flow and Data Flow ###### Abstract Every constructive model of computation (CMC) has an underlying composition mechanism for combining simple computation devices into more complex ones. Composition can be done by (explicitly or implicitly) defining control flow, data flow or any combination thereof. Control flow specifies the order in which individual computation devices are activated, whereas data flow defines how data is exchanged among them. Unfortunately, traditional CMCs either mix data and control or only consider one dimension explicitly, which makes it difficult to reason about data flow and control flow separately. Reasoning about these dimensions orthogonally is a crucial desideratum for optimisation, maintainability and verification purposes. In this paper, we introduce a novel model that explicitly treats data flow and control flow as separate dimensions, while providing modularity. As the model is rooted in category theory, it provides category-theoretic operations for compositionally constructing sequential or parallel composites. Compositionality entails that a composite exhibits the same properties as its respective constituents, including separation of concerns and modularity. ## 1 Introduction In the context of theoretical computer science, _compositionality_ refers to the property of constructive models of computation (CMCs) that allows the inductive definition of complex computation devices from simpler ones [1]. Examples of such models, which are specifically designed to be useful in the actual construction of computing systems, include the von Neumann model, the actor model and process algebras [2]. When composition is done algebraically, the resulting computation structures (known as _composites_) exhibit the same characteristics as their constituents [3]. Algebraic compositionality can be realised by the composition of _control flow_[4] or _data flow_[5]. Control flow defines the order in which individual computation devices are computed, whereas data flow defines how data is passed among them. Traditionally, CMCs do not support algebraic composition and they allow the definition of computations in which data follows control. This sort of coupling makes it difficult (i) to (formally) reason about computation order and data production/consumption separately and (ii) to explicitly distinguish between control and data dependencies [6]. Consequently, it is hard to (i) verify these dimensions independently [7; 8], or (ii) modify/optimise control flow without affecting data flow (or viceversa) [9; 10]. For these reasons, enabling separation of control flow and data flow within the foundational composition semantics of any constructive model of computation is a crucial desideratum. While _control-based composition_ approaches define explicit control flow for the coordination of computation devices, _data-based composition_ defines implicit control in the collaborative exchange of data [11]. Thus, the notion of control flow has higher precedence than data flow because it is always present in any composition mechanism (and not the other way round).1 In fact, it is possible to compose complex computations by control flow only and without the need of passing data at all (cf. actuator composition [12]).2 Footnote 2: Even if we argue that control is a piece of information/data, the act of sending it from one computation device to another is still governed by the grandiose, apparently unavoidable, notion of control flow (cf. 
interleaving semantics for global execution traces in concurrent systems [13]). For the above reasons, we believe that the right way of constructing complex computational behaviours is through a control-based composition mechanism that does not neglect the role of data passing. Accordingly, in this paper we propose a model in which fundamental units of composition (known as _computons_) are passive open systems able to interact with their environment via an interface which consists of input and output ports. As a port is a structural construct that can exclusively buffer either data or control, computons exchange data and control separately. Our model is compositional in the sense that computons can be inductively composed into larger ones via well-defined control-based composition operations. Such operations are rooted in category theory and allow the formal construction of sequential or parallel computons from simpler ones. As a result of the compositional semantics, composites preserve the structure of the composed entities, so they also separate data and control and their port-based interface is inductively constructed from the composed computons. Remarkably, unlike existing compositional approaches, any two computons can always be composed either sequentially or in parallel. Apart from being compositional and separating concerns, our model enables modularity and control flow encapsulation, which allows treating computons unifiedly. Encapsulation is realised by the fact that a composite defines some precise, explicit control flow structure (e.g., sequencing or parallelising) that can only be accessed through the respective computon interface. Thus, a computon can be perceived as an encapsulated black-box that can only interact with other computons via its ports. Hiding internal details in this way enable us to build computons of considerable complexity. The rest of the paper is structured as follows. Section 2 presents the definition of computons by treating them as set-valued functors in a category that we introduce which we refer to as the _category of computons and computon morphisms_. Although this paper is focused on the compositional construction of such entities and not on their operational semantics, Section 3 briefly discusses the notion of computon computation by borrowing ideas from the well-known _token game_ that has been used in the context of P/T Petri Nets for many years. Sections 4 and 5 present the most elementary classes of computons that serve as building blocks for constructing complex composites. Section 6 describes formal category-theoretic operations to form either sequential or parallel composites. Section 7 presents and analyses related work, and Section 8 outlines the conclusions and future directions of our work. ## 2 Computons Intuitively, a _computon_ is a bipartite graph with two types of nodes: _computation units_ and _ports_.3 A computation unit is a construct that receives information in ports, performs some computation and produces new information in other ports. A port that is connected between two computation units is known as _internal port_ (or _i-port_), whereas a port that is exclusively connected to or from a computation unit is called _external_ (or _e-port_). As ports and computation units are connected via edges, edges represent information flow ranging from control signals to complex data values. Footnote 3: The word _computon_ derives from the Latin root for computation (i.e., _computus_) and the Greek suffix _-on_[14]. 
In Physics, such a suffix is traditionally used to designate subatomic particle names. The interface of a computon towards the outside world is determined by its collection of e-ports, each being an external control inport (_ec-inport_), an external control outport (_ec-outport_), an external data inport (_ed-inport_) or an external data outport (_ed-outport_). An ec-inport is where control flow originates, an ec-outport is where control flow terminates, an ed-inport stores data coming from the external world, whereas an ed-outport stores data resulting from the computon's operation. Just as there are _ec-ports_ and _ed-ports_, there are also internal control ports (_ic-ports_) and internal data ports (_id-ports_). The dichotomy between data and control in our proposal entails that a computon is a unit of control-driven computation wherein control signals and data values travel independently via edges. Ports are differentiated by colours which, in practice, can be abstract data types such as the type of Booleans or the type of Integers. In this paper, in order to provide a general "type distinction framework", ports are deliberately coloured with natural numbers.4 The number zero is reserved for control ports, whereas data ports are coloured with natural numbers greater than zero. Footnote 4: From a programmer's perspective, a colour can correspond to a component interface in the context of component-based software development [3]. Thus, in a user-oriented programming language or in a component model, every natural number can be associated with an actual type which can be defined as a many-sorted sigma algebra. Formally, a computon is a functor from **Comp** to **Set** (see Definition 1) where **Set** is the category of sets and total functions and **Comp** is the free category generated by the following diagram:5 Footnote 5: Following the notation of function abstraction in Lambda Calculus, we use \(\lambda\) to denote computons. which consists of five objects and twelve morphisms (including identity and composite morphisms given by trivial paths and path concatenation, respectively). 
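The generating diagram is not reproduced in this version of the text; as a convenience (a reconstruction read off from Definition 1 below, not a verbatim copy of the original figure), its data can be summarised as follows: the five objects are \(U\), \(P\), \(E\), \(F\) and \(\Sigma\), and the generating morphisms are \(\sigma:E\to U\), \(\tau:F\to U\), \(t:E\to P\), \(s:F\to P\) and \(c:P\to\Sigma\). Freely generating on these data indeed yields twelve morphisms in total: the five identities, the five generators, and the two composites \(c\circ t:E\to\Sigma\) and \(c\circ s:F\to\Sigma\). 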
**Definition 1** (Computation).: A computon \(\lambda\) is a functor \(\mathbf{Comp}\rightarrow\mathbf{Set}\) that maps: * \(U\) to a (possibly empty) set \(\lambda(U)\) of computation units, * \(P\) to a (non-empty) set \(\lambda(P)\) of ports, * \(E\) to a (possibly empty) set \(\lambda(E)\) of edges, * \(F\) to a (possibly empty) set \(\lambda(F)\) of edges, * \(\Sigma\) to a (non-empty) set \(\lambda(\Sigma)\subset\mathbb{N}\) of colours, * \(\sigma\) to a total surjective function \(\lambda(\sigma):\lambda(E)\twoheadrightarrow\lambda(U)\) that specifies the outgoing edges of each computation unit, * \(\tau\) to a total surjective function \(\lambda(\tau):\lambda(F)\twoheadrightarrow\lambda(U)\) that specifies the incoming edges of each computation unit, * \(t\) to a total function \(\lambda(t):\lambda(E)\rightarrow\lambda(P)\) that specifies the incoming edges of each port, * \(s\) to a total function \(\lambda(s):\lambda(F)\rightarrow\lambda(P)\) that specifies the outgoing edges of each port, and * \(c\) to a total surjective function \(\lambda(c):\lambda(P)\twoheadrightarrow\lambda(\Sigma)\) that assigns to each port a colour, such that there is: * an identity function \(1_{\lambda(x)}\) in **Set** for each object \(x\) of **Comp**, * a composite function \(\lambda(g)\circ\lambda(f)\) in \(\mathbf{Set}\) for each pair \((f,g)\) of composable morphisms in \(\mathbf{Comp}\), * at least one port \(p\in[\lambda(P)\setminus Im(\lambda(s))]\) with \(\lambda(c)(p)=0\) and * at least one port \(q\in[\lambda(P)\setminus Im(\lambda(t))]\) with \(\lambda(c)(q)=0\). As a computon \(\lambda\) is a set-valued functor, it can be expressed in the form of a tuple \((U,P,E,F,\Sigma,\sigma,\tau,t,s,c)\). Without loss of generality, we took the liberty of simplifying the expression in order to reduce clutter, e.g., we write \(U\) for \(\lambda(U)\). For the rest of the paper, the reader must bear in mind that each component of \(\lambda\) is an actual set or a function, not an object or a morphism in \(\mathbf{Comp}\). To distinguish between computons, we use natural numbers as subscripts which carry over computon components. If the symbol for a computon has no subscript, we assume that the computon components have no subcript either. The surjectivity condition of Definition 1 entails that every computation unit (if any) has at least one incoming edge and at least one outgoing edge. As every function we deal with is a total function, we also have that every edge goes from a port to a computation unit or viceversa. That is, a computon has neither dangling edges nor dangling computation units. **Definition 2** (Computation Interface).: The interface of a computon \(\lambda\) towards the external world is a tuple \((P^{+},P^{-})\) where \(P^{+}:=\{p\in P\mid(\nexists e\in E)[t(e)=p]\}\) is the set of e-inports of \(\lambda\) and \(P^{-}:=\{p\in P\mid(\nexists f\in F)[s(f)=p]\}\) is the set of e-outports of \(\lambda\). A port \(p\in Im(s)\cap Im(t)\) is called an i-port of \(\lambda\). **Notation 1**.: Given a computon \(\lambda\), \(Q^{+}\) denotes its set of ec-inports, \(Q^{-}\) its set of ec-outports, \(D^{+}\) its set of ed-inports and \(D^{-}\) its set of ed-outports. 
These sets are defined as follows: \[Q^{\square} :=\{p\in P^{\square}\mid c(p)=0\}\text{ with }\square\in\{+,-\}\] \[D^{\square} :=\{p\in P^{\square}\mid c(p)>0\}\text{ with }\square\in\{+,-\}\] Notice in Definition 2 that the sets \(P^{+}\) and \(P^{-}\) are not necessarily disjoint so that a port can be e-inport and e-outport at the same time. A port \(p\) of this sort is called _e-inoutport_ and, like the others, it can also buffer control or data. If \(p\in Q^{+}\cap Q^{-}\), then it is called _ec-inoutport_. If \(p\in D^{+}\cap D^{-}\), it is called _ed-inoutport_. As control ports and data ports are identified as separate entities, information movement within a computon corresponds to either data flow or control flow. Particularly, we say that any control port is connected to or from a computation unit via a control flow edge, whereas a data port is connected analogously but with a data flow edge (see Definitions 3 and 4). The collection of ports receiving and sending information from/to a computation unit \(u\) are denoted \(u\bullet\) and \(\bullet u\), respectively. Similarly, \(\bullet p\) and \(p\bullet\) denote the source and target computation units of a port \(p\), respectively (see Definition 5). When there is information flow from every e-inport to every e-outport, we say that the computon is connected (see Definition 6). As per Proposition 1, a computon of this sort always has computation units. **Definition 3** (Information Flow).: Given a computon \(\lambda\), let \(p\in P\) and \(u\in U\). We say that there is information flow from \(p\) to \(u\) if there is an edge \(f\in F\) such that \(s(f)=p\) and \(\tau(f)=u\). This is denoted \(p\xrightarrow{f}u\). If there is an edge \(e\in E\) with \(\sigma(e)=u\) and \(t(e)=p\), we say that there is information flow from \(u\) to \(p\), written \(u\xrightarrow{e}p\). We use \(p_{1}\xrightarrow{*}p_{n}\) to denote the existence of \(p_{1}\xrightarrow{f_{1}}u_{1}\xrightarrow{e_{1}}p_{2}\xrightarrow{f_{2}}u_{2} \xrightarrow{e_{2}}\ldots\xrightarrow{f_{n-1}}u_{n-1}\xrightarrow{e_{n-1}}p_{n}\) where \(p_{1},\ldots,p_{n}\in P\), \(u_{1},\ldots,u_{n-1}\in U\), \(e_{1},\ldots,e_{n-1}\in E\) and \(f_{1},\ldots,f_{n-1}\in F\). **Definition 4** (Control Flow and Data Flow Edges).: Given a computon \(\lambda\) and an edge \(e\in E\cup F\), we say that \(e\) represents control flow if \(c(s(e))=0\) or \(c(t(e))=0\); otherwise, it represents data flow. **Definition 5** (Pre- and Post-Sets).: For a computation unit \(u\in U\) of a computon \(\lambda\), \(\bullet u\) and \(u\bullet\) denote the sets \(\{p\in P\mid(\exists f\in F)(p\xrightarrow{f}u)\}\) and \(\{p\in P\mid(\exists e\in E)(u\xrightarrow{e}p)\}\), respectively. Similarly, for a port \(p\in P\), \(\bullet p\) and \(p\bullet\) denote the sets \(\{u\in U\mid(\exists e\in E)(u\xrightarrow{e}p)\}\) and \(\{u\in U\mid(\exists f\in F)(p\xrightarrow{f}u)\}\), respectively. **Definition 6** (Connected Computon).: We say that a computon \(\lambda\) is connected if and only if \((\forall p\in P^{+})(\forall q\in P^{-})[p\xrightarrow{\star}q]\). **Proposition 1**.: Every connected computon has at least one computation unit. Proof.: Assume for contrapositive that \(\lambda\) is a computon with \(U=\emptyset\), which means that \(\sigma\) and \(\tau\) are well-defined only if \(E=\emptyset=F\). 
Since \(\lambda\) has no edges and no computation units, we have that there is no \(f\in F\) and no \(e\in E\) where \(p\xrightarrow{f}\cdots\xrightarrow{e}q\) holds for some \(p\in P^{+}\) and some \(q\in P^{-}\), i.e., \(\lambda\) is not a connected computon because \(p\xrightarrow{\star}q\) never holds (see Definition 6). By logical equivalence, we conclude that the proposition is true. At this stage, we have provided sufficient details about the general structure of computons by treating them as set-valued functors. Defining computons in this way gives rise to a functor category \(\mathbf{Set}^{\mathbf{Comp}}\) whose objects are computons and whose morphisms are computon morphisms. **Definition 7** (Computon morphism).: If \(\lambda_{1}\) and \(\lambda_{2}\) are two computons, a computon morphism \(\alpha:\lambda_{1}\rightarrow\lambda_{2}\) is a natural transformation whose components are the total functions \(\alpha_{U}:U_{1}\rightarrow U_{2}\), \(\alpha_{P}:P_{1}\rightarrowtail P_{2}\), \(\alpha_{E}:E_{1}\rightarrowtail E_{2}\), \(\alpha_{F}:F_{1}\rightarrowtail F_{2}\) and \(\alpha_{\Sigma}:\Sigma_{1}\hookrightarrow\Sigma_{2}\) such that the diagrams of Figure 1 commute and \(\vec{i}(\alpha)\cup\vec{\sigma}(\alpha)\subseteq P_{1}^{+}\cup P_{1}^{-}\). Here, \(\vec{i}(\alpha)\) and \(\vec{\sigma}(\alpha)\) denote \(\{p_{1}\in P_{1}\mid\bullet\alpha(p_{1})\setminus\alpha(\bullet p_{1})\neq\emptyset\}\) and \(\{p_{1}\in P_{1}\mid\alpha(p_{1})\bullet\setminus\alpha(p_{1}\bullet)\neq\emptyset\}\), respectively. **Notation 2**.: To simplify notation when referring to the components of a computon morphism \(\alpha:\lambda_{1}\rightarrow\lambda_{2}\), we write \(\alpha(u)\) for \(\alpha_{U}(u)\), \(\alpha(p)\) for \(\alpha_{P}(p)\), \(\alpha(e)\) for \(\alpha_{E}(e)\) and \(\alpha(f)\) for \(\alpha_{F}(f)\). For the rest of the paper, we also write \(\alpha(A)\) to denote \(Im(\alpha_{P}|_{A})\) if \(A\subseteq P_{1}\) or \(Im(\alpha_{U}|_{A})\) if \(A\subseteq U_{1}\). Likewise, we use \(\alpha^{-1}(B)\) to denote \(\{p_{1}\in P_{1}\mid\alpha(p_{1})\in B\}\) if \(B\subseteq P_{2}\) or \(\{u_{1}\in U_{1}\mid\alpha(u_{1})\in B\}\) if \(B\subseteq U_{2}\). **Remark 1**.: From Definition 7, it is easy to see that a computon morphism is a monomorphism because all its components are injective (even the \(\Sigma\)-component which is an inclusion map). Naturally, composition of computon morphisms \(\alpha\) and \(\beta\) is defined component-wise: \[(\beta_{U},\beta_{P},\beta_{E},\beta_{F},\beta_{\Sigma})\circ(\alpha_{U},\alpha_{P},\alpha_{E},\alpha_{F},\alpha_{\Sigma})=(\beta_{U}\circ\alpha_{U},\beta_{P}\circ\alpha_{P},\beta_{E}\circ\alpha_{E},\beta_{F}\circ\alpha_{F},\beta_{\Sigma}\circ\alpha_{\Sigma})\] Figure 1: A computon morphism is a natural transformation \(\alpha:\lambda_{1}\rightarrow\lambda_{2}\). A monomorphism avoids mapping multiple ports of \(\lambda_{1}\) to the same port in \(\lambda_{2}\). As this restriction also applies to computation units and edges, it is true that \(\lambda_{2}\) has the same or more complex structure than \(\lambda_{1}\), i.e., a computon morphism does not describe a process of simplifying a computon. Intuitively, a computon morphism \(\alpha:\lambda_{1}\to\lambda_{2}\) is an embedding (or an insertion) of a computon \(\lambda_{1}\) into a (potentially more complex) computon \(\lambda_{2}\), which preserves ports (with their respective colours, incoming edges and outgoing edges) and computation units (with their respective incoming and outgoing edges). 
As a result of this preservation, an e-inport of \(\lambda_{1}\) is either an e-inport in \(\lambda_{2}\) or an i-port \(p_{2}\in P_{2}\) provided that \((\exists e_{2}\in E_{2})(\exists u_{2}\in U_{2})[u_{2}\xrightarrow{e_{2}}p_{2}]\) -- see Propositions 2, 3 and 4. Similarly, an e-outport of \(\lambda_{1}\) is either an e-outport in \(\lambda_{2}\) or an i-port \(q_{2}\in P_{2}\) provided that \((\exists f_{2}\in F_{2})(\exists u_{2}\in U_{2})[q_{2}\xrightarrow{f_{2}}u_{2}]\) -- see Propositions 2, 3 and 4. While external ports can be demoted to internal ports in \(\lambda_{2}\), internal ports can never be promoted to external ones due to the commutative diagrams presented in Figure 1.

**Proposition 2**.: If \(\alpha:\lambda_{1}\to\lambda_{2}\) is a computon morphism, \(\alpha^{-1}(P_{2}^{+})\subseteq P_{1}^{+}\) and \(\alpha^{-1}(P_{2}^{-})\subseteq P_{1}^{-}\).

Proof.: By letting \(\alpha:\lambda_{1}\to\lambda_{2}\) be a computon morphism, we only prove \(\alpha^{-1}(P_{2}^{+})\subseteq P_{1}^{+}\) by contrapositive, since the proof of \(\alpha^{-1}(P_{2}^{-})\subseteq P_{1}^{-}\) is completely analogous. If \(p_{1}\in P_{1}\setminus P_{1}^{+}\), then \((\exists e_{1}\in E_{1})[t_{1}(e_{1})=p_{1}]\) (see Definition 2). By commutativity, we know that \(t_{2}(\alpha(e_{1}))=\alpha(t_{1}(e_{1}))=\alpha(p_{1})\) which implies that \(\alpha(p_{1})\notin P_{2}^{+}\) (see Definition 2) and, consequently, that \(p_{1}\notin\alpha^{-1}(P_{2}^{+})\). As the implication \(p_{1}\notin P_{1}^{+}\implies p_{1}\notin\alpha^{-1}(P_{2}^{+})\) is logically equivalent to \(p_{1}\in\alpha^{-1}(P_{2}^{+})\implies p_{1}\in P_{1}^{+}\), we conclude that \(\alpha^{-1}(P_{2}^{+})\subseteq P_{1}^{+}\), as required.

**Proposition 3**.: If \(\alpha:\lambda_{1}\to\lambda_{2}\) is a computon morphism, \(P_{1}^{+}\cap\vec{i}(\alpha)=\emptyset\implies\alpha^{-1}(P_{2}^{+})=P_{1}^{+}\) and \(P_{1}^{-}\cap\vec{o}(\alpha)=\emptyset\implies\alpha^{-1}(P_{2}^{-})=P_{1}^{-}\).

Proof.: Let \(\alpha:\lambda_{1}\to\lambda_{2}\) be a computon morphism and assume that \(P_{1}^{+}\cap\vec{i}(\alpha)=\emptyset\). This assumption says that if \(p_{1}\in P_{1}^{+}\) then \(p_{1}\notin\vec{i}(\alpha)\) so that \(\bullet\alpha(p_{1})\setminus\alpha(\bullet p_{1})=\emptyset\) which is true when \(\bullet\alpha(p_{1})=\alpha(\bullet p_{1})\). As \(\bullet p_{1}=\emptyset\) because \(p_{1}\in P_{1}^{+}\), we have that \(\bullet\alpha(p_{1})=\emptyset=\alpha(\bullet p_{1})\). The fact \(\bullet\alpha(p_{1})=\emptyset\) implies that \(\alpha(p_{1})\in P_{2}^{+}\), i.e., \(p_{1}\in\alpha^{-1}(P_{2}^{+})\). This proves that \(P_{1}^{+}\subseteq\alpha^{-1}(P_{2}^{+})\). Since \(\alpha^{-1}(P_{2}^{+})\subseteq P_{1}^{+}\) also holds by Proposition 2, we conclude that \(P_{1}^{+}\cap\vec{i}(\alpha)=\emptyset\implies\alpha^{-1}(P_{2}^{+})=P_{1}^{+}\). The proof of \(P_{1}^{-}\cap\vec{o}(\alpha)=\emptyset\implies\alpha^{-1}(P_{2}^{-})=P_{1}^{-}\) follows analogously.

**Proposition 4**.: If \(\lambda_{1}\) is a connected computon and \(\alpha:\lambda_{1}\to\lambda_{2}\) is a computon morphism, \((P_{1}^{+}\cap\vec{i}(\alpha))\cup(P_{1}^{-}\cap\vec{o}(\alpha))\subseteq\alpha^{-1}(Im(t_{2})\cap Im(s_{2}))\).

Proof.: Let \(\alpha:\lambda_{1}\to\lambda_{2}\) be a computon morphism. If \(p_{1}\in P_{1}^{+}\cap\vec{i}(\alpha)\), then there is some \(u_{2}\in\bullet\alpha(p_{1})\setminus\alpha(\bullet p_{1})\).
By Definition 5, there must also be some \(e_{2}\in E_{2}\) where \(\sigma_{2}(e_{2})=u_{2}\) and \(t_{2}(e_{2})=\alpha(p_{1})\). That is, \(\alpha(p_{1})\in Im(t_{2})\). Now, since \(\lambda_{1}\) is a connected computon and \(p_{1}\in P_{1}^{+}\), there is some \(f_{1}\in F_{1}\) and some \(u_{1}\in U_{1}\) where \(p_{1}\xrightarrow{f_{1}}u_{1}\) holds (see Proposition 1). By commutativity and because \(s_{1}(f_{1})=p_{1}\), \(s_{2}(\alpha(f_{1}))=\alpha(s_{1}(f_{1}))=\alpha(p_{1})\). That is, \(\alpha(p_{1})\in Im(s_{2})\). Having \(\alpha(p_{1})\in Im(t_{2})\cap Im(s_{2})\) implies \(p_{1}\in\alpha^{-1}(Im(t_{2})\cap Im(s_{2}))\); thus, proving that \(P_{1}^{+}\cap\vec{i}(\alpha)\subseteq\alpha^{-1}(Im(t_{2})\cap Im(s_{2}))\). The proof of \(P_{1}^{-}\cap\vec{o}(\alpha)\subseteq\alpha^{-1}(Im(t_{2})\cap Im(s_{2}))\) is completely analogous.

## 3 Operational Semantics

The operational semantics of a computon can be described as a _token game_ because the structure of a computon can be expressed as a whole-grain Petri net with typed tokens (which we refer to as a _TWP_). To construct concrete TWPs, in this paper we consider a variation of the scheme presented in [15], which provides support for colouring not only tokens but also places. This variant scheme, which fits into the _individual token philosophy_, is formalised below.

**Definition 8**.: **TWPetri** is the free category generated by the diagram whose objects are \(T\), \(S\), \(O\), \(I\), \(R\) and \(L\) and whose generating morphisms are \(\sigma^{\prime}:O\to T\), \(\tau^{\prime}:I\to T\), \(t^{\prime}:O\to S\), \(s^{\prime}:I\to S\), \(c^{\prime}:S\to L\), \(\mu:R\to S\) and \(\nu:R\to L\), where identity morphisms and composite morphisms are given by trivial paths and path concatenation, respectively.

Naturally, there is an obvious functor \(X:\mathbf{Comp}\to\mathbf{TWPetri}\) sending \(U\) to \(T\), \(P\) to \(S\), \(E\) to \(O\), \(F\) to \(I\), \(\Sigma\) to \(L\), \(\sigma\) to \(\sigma^{\prime}\), \(\tau\) to \(\tau^{\prime}\), \(t\) to \(t^{\prime}\), \(s\) to \(s^{\prime}\) and \(c\) to \(c^{\prime}\), whose existence entails that a computon can be extended with the notion of port marking. If \(N:\mathbf{TWPetri}\to\mathbf{Set}\) is a functor that specifies a concrete TWP, then the composite functor \(N\circ X:\mathbf{Comp}\to\mathbf{Set}\) defines a marked computon (see Definition 9). From now on, we use traditional Petri net notation whenever we need to discuss computon executions/computations. The mapping between computon syntax and Petri net syntax is presented in Appendix A.
**Definition 9** (Marked Computon).: A marked computon is a composite functor \(N\circ X:\mathbf{Comp}\to\mathbf{Set}\) where \(N\) is a functor \(\mathbf{TWPetri}\to\mathbf{Set}\) mapping:

* \(T\) to a (possibly empty) set \(N(T)\) of transitions,
* \(O\) to a (possibly empty) set \(N(O)\) of outputs,
* \(I\) to a (possibly empty) set \(N(I)\) of inputs,
* \(S\) to a (non-empty) set \(N(S)\) of places,
* \(R\) to a (non-empty) set \(N(R)\) of tokens,
* \(L\) to a (non-empty) set \(N(L)\subset\mathbb{N}\),
* \(\sigma^{\prime}\) to a function \(N(\sigma^{\prime}):N(O)\to N(T)\) that assigns outputs to transitions,
* \(\tau^{\prime}\) to a function \(N(\tau^{\prime}):N(I)\to N(T)\) that assigns inputs to transitions,
* \(t^{\prime}\) to a function \(N(t^{\prime}):N(O)\to N(S)\) that assigns outputs to places,
* \(s^{\prime}\) to a function \(N(s^{\prime}):N(I)\to N(S)\) that assigns inputs to places,
* \(c^{\prime}\) to a function \(N(c^{\prime}):N(S)\to N(L)\) that assigns to each place a label,
* \(\mu\) to a function \(N(\mu):N(R)\to N(S)\), called a marking function, which assigns tokens to places, and
* \(\nu\) to a function \(N(\nu):N(R)\to N(L)\) that assigns to each token a label,

such that there is:

* an identity function \(1_{N(x)}\) in \(\mathbf{Set}\) for each object \(x\) of \(\mathbf{TWPetri}\) and
* a composite function \(N(g)\circ N(f)\) in \(\mathbf{Set}\) for each pair \((f,g)\) of composable morphisms in \(\mathbf{TWPetri}\).

**Remark 2**.: The equation \(N(c^{\prime})\circ N(\mu)=N(\nu)\) deduced from Definition 9 entails that tokens are labelled in the same way as the places they are mapped to. That is, places admit tokens of one type only.

**Notation 3**.: Given a TWP \(N:\mathbf{TWPetri}\rightarrow\mathbf{Set}\), a token \(r\in N(R)\) is called a control token if \(N(\nu)(r)=0\); otherwise, it is called a data token.

If \(\lambda\) is a computon and \(N\circ X\) is a marked computon constructed from \(\lambda\), then there is a natural transformation \(\gamma:\lambda\to N\circ X\) whose components are the total functions \(\gamma_{S}:P\to N(S)\), \(\gamma_{T}:U\to N(T)\), \(\gamma_{I}:F\to N(I)\), \(\gamma_{O}:E\to N(O)\) and \(\gamma_{L}:\Sigma\to N(L)\).

The main focus of this paper is on computon compositionality. Above we just presented a general overview of the _token game for computons_. The fact that the operational semantics of computons is given as a composite functor along **TWPetri** entails that all the theory of processes presented in [16] can be applied to our work. In the future, we intend to explore computon processes in more detail. For now, we would just like to highlight that, due to the inherent concurrent nature of Petri nets, control tokens can arrive before data tokens (or vice versa). By having control ports separated from data ports, a transition only fires when both control tokens and data tokens are placed in their respective inports. This means that a computation unit is a passive construct with blocking behaviour which synchronises data and control before firing. After firing, a computation unit produces exactly one token in each of its outports. By the above, the e-inports and e-outports of a computon can be deemed as pre- and post-conditions, respectively. So, we can say that a computon is activated in some state of the world in which all pre-conditions hold, and its computation results in a change of the state of the world in which all post-conditions hold.
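To make the token game tangible, the following Python sketch (illustrative only; the record fields, the port names and the omission of token labels are simplifying assumptions of ours, not part of the formal development) represents a marked computon with plain sets and dictionaries and implements the firing rule just described: a computation unit is enabled only when every one of its inports, control and data alike, holds a token, and firing consumes those tokens and places exactly one token in each outport.

```python
# Illustrative sketch only: a marked computon as plain Python data.
# Ports are labelled by colours (0 = control, >0 = data); the marking
# records how many tokens each port currently buffers.

from dataclasses import dataclass, field

@dataclass
class MarkedComputon:
    units: set                    # U
    colour: dict                  # c : P -> Sigma (the keys play the role of P)
    out_edges: dict               # e in E  as  e -> (unit, port)   (sigma, t)
    in_edges: dict                # f in F  as  f -> (port, unit)   (s, tau)
    marking: dict = field(default_factory=dict)   # tokens per port

    def inports(self, u):         # the pre-set  •u
        return {p for (p, v) in self.in_edges.values() if v == u}

    def outports(self, u):        # the post-set  u•
        return {p for (v, p) in self.out_edges.values() if v == u}

    def enabled(self, u):
        # u fires only when *all* of its inports (control and data) hold a token
        return all(self.marking.get(p, 0) > 0 for p in self.inports(u))

    def fire(self, u):
        assert self.enabled(u), "unit is not enabled"
        for p in self.inports(u):
            self.marking[p] -= 1
        for p in self.outports(u):                # exactly one token per outport
            self.marking[p] = self.marking.get(p, 0) + 1

# A functional computon with one ec-inport q_in, one ed-inport d_in,
# one ec-outport q_out and one ed-outport d_out (hypothetical names).
lam = MarkedComputon(
    units={"u"},
    colour={"q_in": 0, "d_in": 1, "q_out": 0, "d_out": 1},
    out_edges={"e1": ("u", "q_out"), "e2": ("u", "d_out")},
    in_edges={"f1": ("q_in", "u"), "f2": ("d_in", "u")},
    marking={"q_in": 1, "d_in": 1},
)
lam.fire("u")
assert lam.marking == {"q_in": 0, "d_in": 0, "q_out": 1, "d_out": 1}
```

The blocking behaviour discussed above is visible in `enabled`: a control token on `q_in` alone does not suffice, the data token on `d_in` must also be present before `fire` may be called.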
## 4 Trivial Computons

A _trivial computon_ has no computation units, no edges and no i-ports at all, but just a number of e-inoutports. Up to isomorphism, it is the only object in \(\mathbf{Set}^{\mathbf{Comp}}\) with no computation units (see Proposition 5).

**Definition 10** (Trivial Computon).: A trivial computon \(\lambda\) is a computon whose diagram in \(\mathbf{Set}\) reduces to the surjection \(c:P\twoheadrightarrow\Sigma\), i.e., \(U=E=F=\emptyset\).

**Proposition 5**.: A computon has no computation units if and only if it is a trivial computon.

Proof.: (\(\implies\)) Let \(\lambda\) be a computon with \(U=\emptyset\). By the definition of empty function, we have that \(U=\emptyset\) only if \(E=\emptyset=F\) so that \(\sigma\) and \(\tau\) are surjective. Definition 1 states that any computon is required to have at least one coloured port so that \(|P|\geq 1\) and \(|\Sigma|\geq 1\). As \(s\) and \(t\) are not surjective by the definition of empty function, we have that \(p\in P^{+}\cap P^{-}\) for all \(p\in P\) (see Definition 2). Particularly, if \(c(p)=0\), then \(p\in Q^{+}\cap Q^{-}\); otherwise, \(p\in D^{+}\cap D^{-}\). As the function \(c\) is surjective by Definition 1, we conclude that \(\lambda\) is a trivial computon. (\(\Longleftarrow\)) This follows directly from Definition 10.

The general structure of a trivial computon with \(i\) ec-inoutports and \(j\) ed-inoutports is depicted in Figure 2.

Figure 2: Syntactic representations of a trivial computon \(\lambda\) where \(q_{1},\ldots,q_{i}\in(Q^{+}\cap Q^{-})\) and \(d_{1},\ldots,d_{j}\in(D^{+}\cap D^{-})\).

Figure 2 shows two equivalent syntactic representations of a trivial computon. One of them is based on Petri net syntax, whereas the other is based on a syntax specifically designed to represent computons (see Appendix A), which is much simpler and will be relevant to discuss the algebraic properties of computons, apart from providing extra clarity when depicting complex structures. Computon syntax also enables a clear separation of control and data ports. For convenience, in our graphical notation we label ports by colours and not by port identifiers. From now on, we omit the colour of control ports, which is always zero. As a reminder, control ports include ec-inports, ic-ports, ec-outports and ec-inoutports.

In our theory, there is a distinguished trivial computon consisting of a single ec-inoutport, which we refer to as the _unit computon_.

**Definition 11** (Unit Computon).: Up to isomorphism, the _unit computon_ is a trivial computon with \(|P|=|\Sigma|=1\). We use \(\Lambda\) to denote it.

The existence of \(\Lambda\) follows from the fact that, according to Definition 1, the set of ports and the set of colours are never empty. By the same definition, we can observe that a computon can have no computation units and no edges at all. It is easy to show that \(\Lambda\) is unique up to unique isomorphism. Since a computon is required to have at least one coloured port, \(\Lambda\) can be perceived as the "simplest" object in \(\mathbf{Set}^{\mathbf{Comp}}\). In spite of this structural feature, \(\Lambda\) is not an initial object in such a category since there are \(k\geq 1\) computon morphisms from it to any other computon \(\lambda\), where \(k\) is the number of control ports in \(\lambda\).7 Under this premise, it follows that \(\mathbf{Set}^{\mathbf{Comp}}\) has no initial objects.

Footnote 7: This can be easily proved by induction on the number of control ports of an arbitrary computon.

## 5 Primitive Computons

Like a trivial computon, a primitive one has no i-ports.
The difference is that there is exactly one computation unit to which all ports are attached via edges. So, every port can be either e-inport or e-outport, never both (see Proposition 6). This entails that a primitive computon is connected; in other words, it has neither dangling ports nor dangling computation units (see Proposition 7). In this paper, we consider three classes of primitive computons, namely _fork computons_ (see Subsection 5.1), _join computons_ (see Subsection 5.2) and _functional computons_ (see Subsection 5.3).

**Definition 12**.: A primitive computon is a computon whose diagram in \(\mathbf{Set}\) has a singleton set \(1\) of computation units and no i-ports, every port being attached to the unique computation unit via an edge.8 [Diagram omitted.]

Footnote 8: We use \(1\) to denote a singleton set.

**Proposition 6**.: If \(\lambda\) is a primitive computon, then \(P=Im(s)\triangle Im(t)=P^{+}\triangle P^{-}\).

Proof.: […] By the above cases and by the definition of symmetric difference, we have that \(p\in P\iff p\in Im(s)\triangle Im(t)\iff p\in(P^{+}\setminus P^{-})\cup(P^{-}\setminus P^{+})\iff p\in P^{+}\triangle P^{-}\).
Hence, we conclude that \(P=Im(s)\triangle Im(t)=(P^{+}\setminus P^{-})\cup(P^{-}\setminus P^{+})=P^{+}\triangle P^{-}\), as required.

**Proposition 7**.: Every primitive computon is a connected computon.

Proof.: Let \(\lambda\) be a primitive computon and \(p\) be an e-inport of \(\lambda\). By Definition 2, there is no \(e_{1}\in E\) such that \(t(e_{1})=p\) so that \(p\notin Im(t)\). Consequently, we have that \(p\in Im(s)\) because \(P=Im(s)\triangle Im(t)\). This implies that there exists some \(f_{1}\in F\) where \(s(f_{1})=p\). If \(u\) is the only computation unit in \(U\), we have that \(\tau(f_{1})=u\) by the fact that \(\tau\) is surjective. As \(\sigma\) is also surjective, there exists some \(e_{2}\in E\) such that \(\sigma(e_{2})=u\). By the totality of \(t\), it follows that there exists some \(q\in P\) such that \(t(e_{2})=q\) so that \(q\in Im(t)\). Again, by the property \(P=Im(s)\triangle Im(t)\), we have that \(q\notin Im(s)\) so there is no \(f_{2}\in F\) such that \(s(f_{2})=q\), i.e., \(q\in P^{-}\) by Definition 2. As \(p\xrightarrow{*}q\) holds, we conclude that \(\lambda\) is a connected computon.

### Fork Computons

A fork computon has exactly one ec-inport and two ec-outports, i.e., it has no data ports at all. Intuitively, it just duplicates the control received in its ec-inport into all its ec-outports.

**Definition 13**.: A fork computon \(\lambda\) is a primitive computon with \(|E|=2\) and \(|F|=|\Sigma|=1\).

From Definition 13, it is easy to deduce that a fork computon \(\lambda\) has exactly three ports because \(P=Im(s)\triangle Im(t)\), \(|E|=2\) and \(|F|=1\). Specifically, \(|P^{-}|=2\) and \(|P^{+}|=1\) because \(P=P^{+}\triangle P^{-}\) (see Definition 2 and Proposition 6). As \(|\Sigma|=1\) and \(c\) is total and \(Q^{+}\neq\emptyset\neq Q^{-}\) (see Definition 1), the set of colours of \(\lambda\) is \(\{0\}\) so \(c(p)=0\) for all \(p\in P\). The general structure of a fork computon and its equivalent Petri net representation are depicted in Figure 3.

Figure 3: Syntactic representations of a fork computon.

### Join Computons

A join computon can be thought of as the dual of a fork computon since it has exactly two ec-inports and one ec-outport. Intuitively, it merges the control received in its ec-inports into its unique ec-outport.

**Definition 14**.: A join computon \(\lambda\) is a primitive computon with \(|E|=|\Sigma|=1\) and \(|F|=2\).

The properties of a join computon \(\lambda\) are almost identical to those of a fork computon so it is true that \(c(p)=0\) for all \(p\in P\). The only difference is in terms of the number of ec-inports and ec-outports. The general structure of a join computon and its equivalent Petri net representation are depicted in Figure 4.

Figure 4: Syntactic representations of a join computon.

### Functional Computons

A functional computon has exactly one ec-inport, one ec-outport, any number of ed-inports and any number of ed-outports. Intuitively, the unique computation unit is a high-level representation of a (potentially halting) computation that is triggered after receiving a control signal and a number of input data values. The successful termination of such computation results in a single control signal and a number of output data values.

**Definition 15**.: A functional computon \(\lambda\) is a primitive computon with \(|E|,|F|\geq 1\) such that \(\exists!p\in P^{+}[c(p)=0]\) and \(\exists!q\in P^{-}[c(q)=0]\).
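As an informal companion to Definitions 13–15, the sketch below builds the three primitive computons as plain Python dictionaries; the port and edge identifiers are hypothetical, and every data port is simply given colour 1. The final assertion checks the property of Proposition 6: every port of a primitive computon lies in exactly one of \(Im(s)\) and \(Im(t)\).

```python
# Illustrative sketch only: the primitive computons of Definitions 13-15
# as dictionaries (colour map "c", in-edges "F" as (port, unit), out-edges
# "E" as (unit, port)); all identifiers are hypothetical.

def fork():
    # one ec-inport, two ec-outports, all attached to the single unit "u"
    return {"U": {"u"},
            "c": {"p_in": 0, "p_out1": 0, "p_out2": 0},
            "F": {"f1": ("p_in", "u")},
            "E": {"e1": ("u", "p_out1"), "e2": ("u", "p_out2")}}

def join():
    # two ec-inports, one ec-outport
    return {"U": {"u"},
            "c": {"p_in1": 0, "p_in2": 0, "p_out": 0},
            "F": {"f1": ("p_in1", "u"), "f2": ("p_in2", "u")},
            "E": {"e1": ("u", "p_out")}}

def functional(data_in, data_out):
    # one ec-inport, one ec-outport, plus the given ed-inports/ed-outports
    c = {"q_in": 0, "q_out": 0}
    c.update({p: 1 for p in data_in})
    c.update({p: 1 for p in data_out})
    F = {f"f_{p}": (p, "u") for p in ["q_in", *data_in]}
    E = {f"e_{p}": ("u", p) for p in ["q_out", *data_out]}
    return {"U": {"u"}, "c": c, "F": F, "E": E}

# every port of a primitive computon is an e-inport or an e-outport, never both
lam = functional(["d1", "d2"], ["d3"])
sources = {p for (p, _) in lam["F"].values()}   # Im(s)
targets = {p for (_, p) in lam["E"].values()}   # Im(t)
assert sources | targets == set(lam["c"]) and not (sources & targets)
```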
The general structure of a functional computon with \(i\) ed-inports and \(j\) ed-outports, together with its equivalent Petri net representation, are illustrated in Figure 5.9

Footnote 9: We acknowledge that the definition of a computon does not include labels for computation units. However, for increased clarity, we took the liberty of using the symbol of a functional computon for labelling its unique computation unit.

Figure 5: Syntactic representations of a functional computon \(\lambda\) with \(p_{1},\dots,p_{i}\in D^{+}\) and \(q_{1},\dots,q_{j}\in D^{-}\).

Up to isomorphism, there is a particular functional computon which possesses only one ec-inport and only one ec-outport. This sort of computon, referred to as _the glue computon_, is explicitly defined when \(|E|=|F|=1\). Intuitively, its behaviour is akin to a "do nothing" function because it just echoes the control signal received in its unique ec-inport into its only ec-outport.

## 6 Composite Computons

A composite computon is algebraically formed via a composition operator which defines control flow for the execution of sub-computons (i.e., operands) in some order. As a composite cannot be a sub-computon of itself, such operators enable control-based exogenous composition [17]. Exogenous means that the composition process is done outside the internal structure of sub-computons and entails that sub-computons are agnostic of the composition structure built upon them. More formally, a composite computon is canonically the colimit of other computons, and it is constructed via a composition operator. A composition operator defines a certain operation in \(\mathbf{Set}^{\mathbf{Comp}}\), depending on the composite being constructed. In this paper, we provide operators to form either sequential or parallel computons.

### Sequential Computons

Sequential composition is an operation that we characterise as a particular pushout in \(\mathbf{Set}^{\mathbf{Comp}}\). It is particular because, intuitively, the common object needs to be a trivial computon \(\lambda_{0}\) that can be "embedded" into some or all the e-outports of a computon \(\lambda_{1}\) and into some or all the e-inports of a computon \(\lambda_{2}\). Particularly, every port \(p_{0}\in P_{0}\) that is embedded into an output \(p_{1}\in P_{1}^{-}\) needs to be embedded into an input \(p_{2}\in P_{2}^{+}\) and vice versa. This restriction enables a strict sequence in which \(\lambda_{1}\) is computed before \(\lambda_{2}\). A converse computation is possible and requires a different embedding since sequencing is a non-commutative operation in which order matters. To compute \(\lambda_{2}\) before \(\lambda_{1}\), it suffices to reverse the way we insert \(\lambda_{0}\) into \(\lambda_{1}\) and into \(\lambda_{2}\). Before providing the formal definition of a sequential computon, it is necessary to introduce the general notion of a pushout construction in \(\mathbf{Set}^{\mathbf{Comp}}\) (see Definition 16), which can be computed if and only if there is a pushable span of computon morphisms (see Definition 17 and Proposition 8).
**Definition 16** (Pushout Construction).: Given a span \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) of computon morphisms, the pushout of the corresponding diagram in \(\mathbf{Set}^{\mathbf{Comp}}\), denoted \((\beta_{1}:\lambda_{1}\rightarrow\lambda_{3},\lambda_{3},\beta_{2}:\lambda_{2}\rightarrow\lambda_{3})\), is obtained by computing the pushout in \(\mathbf{Set}\) of each individual computon component: \[P_{3}=P_{1}+_{P_{0}}P_{2}\] \[U_{3}=U_{1}+_{U_{0}}U_{2}\] \[E_{3}=E_{1}+_{E_{0}}E_{2}\] \[F_{3}=F_{1}+_{F_{0}}F_{2}\] \[\Sigma_{3}=\Sigma_{1}+_{\Sigma_{0}}\Sigma_{2}\] with \(\sigma_{3}\), \(\tau_{3}\), \(s_{3}\), \(t_{3}\) and \(c_{3}\) being defined in the obvious way: \[\sigma_{3}:E_{1}+_{E_{0}}E_{2}\twoheadrightarrow U_{1}+_{U_{0}}U_{2}\] \[\tau_{3}:F_{1}+_{F_{0}}F_{2}\twoheadrightarrow U_{1}+_{U_{0}}U_{2}\] \[s_{3}:F_{1}+_{F_{0}}F_{2}\to P_{1}+_{P_{0}}P_{2}\] \[t_{3}:E_{1}+_{E_{0}}E_{2}\to P_{1}+_{P_{0}}P_{2}\] \[c_{3}:P_{1}+_{P_{0}}P_{2}\twoheadrightarrow\Sigma_{1}+_{\Sigma_{0}}\Sigma_{2}\]

**Definition 17** (Pushable Span).: A span \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) of computon morphisms is pushable if \(\alpha_{1}(\vec{i}(\alpha_{2}))\cup\alpha_{1}(\vec{o}(\alpha_{2}))\subseteq P_{1}^{+}\cup P_{1}^{-}\) and \(\alpha_{2}(\vec{i}(\alpha_{1}))\cup\alpha_{2}(\vec{o}(\alpha_{1}))\subseteq P_{2}^{+}\cup P_{2}^{-}\).

**Proposition 8**.: Let \(\alpha_{1}:\lambda_{0}\rightarrow\lambda_{1}\) and \(\alpha_{2}:\lambda_{0}\rightarrow\lambda_{2}\) be computon morphisms. The pushout \((\beta_{1}:\lambda_{1}\rightarrow\lambda_{3},\lambda_{3},\beta_{2}:\lambda_{2}\rightarrow\lambda_{3})\) of \(\alpha_{1}\) and \(\alpha_{2}\) exists \(\iff\) \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) is pushable.

Proof.: (\(\implies\)) Assuming that the pushout \((\beta_{1}:\lambda_{1}\rightarrow\lambda_{3},\lambda_{3},\beta_{2}:\lambda_{2}\rightarrow\lambda_{3})\) of \(\alpha_{1}:\lambda_{0}\rightarrow\lambda_{1}\) and \(\alpha_{2}:\lambda_{0}\rightarrow\lambda_{2}\) exists in \(\mathbf{Set}^{\mathbf{Comp}}\), we just prove that \(\alpha_{1}(\vec{i}(\alpha_{2}))\subseteq P_{1}^{+}\cup P_{1}^{-}\). This is because the satisfaction of the other conditions of Definition 17 follows analogously. Supposing there is some \(p_{1}\in\alpha_{1}(\vec{i}(\alpha_{2}))\setminus(P_{1}^{+}\cup P_{1}^{-})\), we know there is a port \(p_{0}\in\vec{i}(\alpha_{2})\) such that \(\alpha_{1}(p_{0})=p_{1}\). As the pushout \((\beta_{1},\lambda_{3},\beta_{2})\) exists in \(\mathbf{Set}^{\mathbf{Comp}}\), the equation \(\beta_{1}(\alpha_{1}(p_{0}))=\beta_{1}(p_{1})=\beta_{2}(\alpha_{2}(p_{0}))\) holds. Since \(p_{0}\in\vec{i}(\alpha_{2})\), there is some \(u_{2}\in\bullet\alpha_{2}(p_{0})\setminus\alpha_{2}(\bullet p_{0})\) so that \(\beta_{1}(p_{1})=\beta_{2}(\alpha_{2}(p_{0}))\in\beta_{2}(u_{2})\bullet\). As \(p_{1}\notin P_{1}^{+}\cup P_{1}^{-}\), \(p_{1}\notin\vec{i}(\beta_{1})\) (see Definition 7), meaning that there is some \(u_{1}\in\bullet p_{1}\) where \(\beta_{1}(u_{1})=\beta_{2}(u_{2})\). Using commutativity, we deduce the existence of \(u_{0}\in U_{0}\) such that \(\alpha_{1}(u_{0})=u_{1}\) and \(\alpha_{2}(u_{0})=u_{2}\).
As this contradicts the fact that \(u_{2}\in\bullet\alpha_{2}(p_{0})\setminus\alpha_{2}(\bullet p_{0})\), we conclude that \(\alpha_{1}(\vec{i}(\alpha_{2}))\subseteq P_{1}^{+}\cup P_{1}^{-}\).

(\(\Longleftarrow\)) Assuming \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) is a pushable span of computon morphisms (as per Definition 17), we prove that the pushout \((\beta_{1}:\lambda_{1}\rightarrow\lambda_{3},\lambda_{3},\beta_{2}:\lambda_{2}\rightarrow\lambda_{3})\) of \(\alpha_{1}\) and \(\alpha_{2}\) can be constructed via Definition 16. For this, we first prove that \(\beta_{1}\) and \(\beta_{2}\) are computon morphisms. Below we provide the proof for \(\beta_{1}\) only, since the other one is completely analogous. As **Set** has all pushouts, the existence of each component of \(\beta_{1}\) and \(\beta_{2}\) can be directly deduced. For example, the \(\beta_{1}\)-component injecting ports of \(P_{1}\) into \(P_{3}\) exists because \(P_{3}=P_{1}+_{P_{0}}P_{2}\) can always be computed in **Set**. Consequently, the equations \(\beta_{i}\circ c_{i}=c_{3}\circ\beta_{i}\), \(\beta_{i}\circ\tau_{i}=\tau_{3}\circ\beta_{i}\), \(\beta_{i}\circ\sigma_{i}=\sigma_{3}\circ\beta_{i}\), \(\beta_{i}\circ s_{i}=s_{3}\circ\beta_{i}\) and \(\beta_{i}\circ t_{i}=t_{3}\circ\beta_{i}\) hold for \(i=1,2\) (see the commutative diagrams of Definition 7). To prove that the \(\Sigma\)-component of \(\beta_{1}\) is an inclusion function, we note that \(\Sigma_{0}=\Sigma_{1}\cap\Sigma_{2}\) because \(\Sigma_{0},\Sigma_{1},\Sigma_{2}\subset\mathbb{N}\) and because the \(\Sigma\)-components of \(\alpha_{1}\) and \(\alpha_{2}\) are both inclusion functions (see Definition 7). This means that the pushout \(\Sigma_{1}+_{\Sigma_{0}}\Sigma_{2}\) can be canonically identified with \(\Sigma_{1}\cup\Sigma_{2}\subset\mathbb{N}\) and, consequently, that the induced total function \(f:\Sigma_{1}\rightarrow\Sigma_{1}+_{\Sigma_{0}}\Sigma_{2}\) (which corresponds to the \(\Sigma\)-component of \(\beta_{1}\)) is an inclusion function. That is, \(f(x)=x\) for all \(x\in\Sigma_{1}\). We now show that \(\vec{i}(\beta_{1})\cup\vec{o}(\beta_{1})\subseteq P_{1}^{+}\cup P_{1}^{-}\) also holds. If \(p_{1}\in\vec{i}(\beta_{1})\) then \(\bullet\beta_{1}(p_{1})\setminus\beta_{1}(\bullet p_{1})\neq\emptyset\) so there exists some \(u_{3}\in\bullet\beta_{1}(p_{1})\setminus\beta_{1}(\bullet p_{1})\) and no \(u_{1}\in\bullet p_{1}\) where \(\beta_{1}(u_{1})=u_{3}\). Since \(U_{3}=U_{1}+_{U_{0}}U_{2}\) (by Definition 16), there must be some \(u_{2}\in U_{2}\) where \(\beta_{2}(u_{2})=u_{3}\in\bullet\beta_{1}(p_{1})\setminus\beta_{1}(\bullet p_{1})\) and, consequently, some \(e_{3}\in E_{3}\) where \(\beta_{2}(u_{2})\xrightarrow{e_{3}}\beta_{1}(p_{1})\) (see Definition 5). As there is no \(e_{1}\in E_{1}\) satisfying \(\beta_{1}(\sigma_{1}(e_{1}))=\sigma_{3}(\beta_{1}(e_{1}))=\sigma_{3}(e_{3})=u_{3}\) because there is no \(u_{1}\in U_{1}\) satisfying \(\beta_{1}(u_{1})=u_{3}\), \(E_{3}=E_{1}+_{E_{0}}E_{2}\) implies that there must be some \(e_{2}\in E_{2}\) for which \(\beta_{2}(e_{2})=e_{3}\) is true. As \(\beta_{2}(u_{2})\xrightarrow{\beta_{2}(e_{2})}\beta_{1}(p_{1})\) holds, we use the commutativity property to deduce the existence of \(p_{2}\in P_{2}\) such that \(\beta_{2}(p_{2})=\beta_{1}(p_{1})\) and \(u_{2}\in\bullet p_{2}\).
As \(\beta_{2}(p_{2})=\beta_{1}(p_{1})\) and \(P_{3}=P_{1}+_{P_{0}}P_{2}\), we again use commutativity to deduce that there is also a port \(p_{0}\in P_{0}\) with \(\alpha_{1}(p_{0})=p_{1}\) and \(\alpha_{2}(p_{0})=p_{2}\) so that \(u_{2}\in\bullet\alpha_{2}(p_{0})\). Since \(U_{3}=U_{1}+_{U_{0}}U_{2}\) and \((\nexists u_{1}\in U_{1})[\beta_{1}(u_{1})=u_{3}=\beta_{2}(u_{2})]\), we have that there is no \(u_{0}\in U_{0}\) where \(\alpha_{2}(u_{0})=u_{2}\). That is, \(u_{2}\notin\alpha_{2}(\bullet p_{0})\) because \(\bullet p_{0}\subseteq U_{0}\). Thus, it is true that \(u_{2}\in\bullet\alpha_{2}(p_{0})\setminus\alpha_{2}(\bullet p_{0})\) and, therefore, that \(p_{0}\in\vec{i}(\alpha_{2})\) (see Definition 7). Since \(\alpha_{1}(\vec{i}(\alpha_{2}))\subseteq P_{1}^{+}\cup P_{1}^{-}\) (by Definition 17) and \(p_{0}\in\vec{i}(\alpha_{2})\) (by the above), we have that \(\alpha_{1}(p_{0})=p_{1}\in P_{1}^{+}\cup P_{1}^{-}\). A similar approach can be used to show that \(q_{1}\in\vec{o}(\beta_{1})\) implies \(q_{1}\in P_{1}^{+}\cup P_{1}^{-}\). So, \(\vec{i}(\beta_{1})\cup\vec{o}(\beta_{1})\subseteq P_{1}^{+}\cup P_{1}^{-}\). Having proved that \(\beta_{1}\) is a computon morphism, we now assume that \(\gamma_{1}:\lambda_{1}\rightarrow\lambda_{4}\) and \(\gamma_{2}:\lambda_{2}\rightarrow\lambda_{4}\) are computon morphisms with \(\gamma_{1}\circ\alpha_{1}=\gamma_{2}\circ\alpha_{2}\), in order to show that there is a unique computon morphism \(\gamma_{3}:\lambda_{3}\rightarrow\lambda_{4}\) such that the corresponding diagram commutes. As it is obvious that the \(\Sigma\)-component of \(\gamma_{3}\) is an inclusion function because \(\beta_{i}\) and \(\gamma_{i}\) are (for \(i=1,2\)) and because \(\Sigma_{3}=\Sigma_{1}\cup\Sigma_{2}\), we just prove that \(\vec{i}(\gamma_{3})\cup\vec{o}(\gamma_{3})\subseteq P_{3}^{+}\cup P_{3}^{-}\). Let \(p_{3}\in\vec{i}(\gamma_{3})\) so \(\bullet\gamma_{3}(p_{3})\setminus\gamma_{3}(\bullet p_{3})\neq\emptyset\). As \(P_{3}=P_{1}+_{P_{0}}P_{2}\), we observe that \(p_{3}=\beta_{j}(p_{j})\) for some \(p_{j}\in P_{j}\) and \(j=1,2\). With this in mind, we perform the following operations: \[\emptyset\neq\bullet\gamma_{3}(p_{3})\setminus\gamma_{3}(\bullet p_{3}) =\bullet\gamma_{3}(\beta_{j}(p_{j}))\setminus\gamma_{3}(\bullet\beta_{j}(p_{j}))\] \[=\bullet\gamma_{j}(p_{j})\setminus\gamma_{3}(\bullet\beta_{j}(p_{j}))\text{ because }\gamma_{3}\circ\beta_{j}=\gamma_{j}\] \[\subseteq\bullet\gamma_{j}(p_{j})\setminus\gamma_{3}(\beta_{j}(\bullet p_{j}))\text{ because }\beta_{j}(\bullet p_{j})\subseteq\bullet\beta_{j}(p_{j})\] \[=\bullet\gamma_{j}(p_{j})\setminus\gamma_{j}(\bullet p_{j})\text{ because }\gamma_{3}\circ\beta_{j}=\gamma_{j}\] By the above, we deduce that \(p_{j}\in\vec{i}(\gamma_{j})\) and, consequently, that \(p_{j}\in P_{j}^{+}\cup P_{j}^{-}\) (because \(\gamma_{j}\) is a computon morphism with \(\vec{i}(\gamma_{j})\subseteq P_{j}^{+}\cup P_{j}^{-}\) -- see Definition 7). Using the facts \(p_{3}=\beta_{j}(p_{j})\in\vec{i}(\gamma_{3})\) and \(p_{j}\in P_{j}^{+}\cup P_{j}^{-}\), we further deduce that \(\beta_{j}^{-1}(p_{3})\subseteq P_{j}^{+}\cup P_{j}^{-}\). Hence, \(p_{3}\in P_{3}^{+}\cup P_{3}^{-}\) by Proposition 2. The proof that \(q_{3}\in\vec{o}(\gamma_{3})\implies q_{3}\in P_{3}^{+}\cup P_{3}^{-}\) is completely analogous.

Evidently, \(\mathbf{Set}^{\mathbf{Comp}}\) does not have all pushouts because a span must meet the requirements imposed by Definition 17 to enable square completion.
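Since Definition 16 reduces pushouts of computons to componentwise pushouts of finite sets, the following Python sketch (illustrative only; the set elements and the union-find encoding are assumptions of ours) shows what that componentwise step amounts to: the pushout \(A+_{C}B\) along injections \(g:C\to A\) and \(h:C\to B\) is the disjoint union of \(A\) and \(B\) with \(g(x)\) and \(h(x)\) identified for every \(x\in C\).

```python
# Illustrative sketch only: the pushout A +_C B of finite sets along
# injections g : C -> A and h : C -> B, computed as a quotient of the
# tagged disjoint union of A and B (the componentwise step of Definition 16).

def pushout(A, B, C, g, h):
    # union-find over the tagged disjoint union of A and B
    parent = {("A", a): ("A", a) for a in A}
    parent.update({("B", b): ("B", b) for b in B})

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    for x in C:                      # identify g(x) with h(x)
        parent[find(("A", g[x]))] = find(("B", h[x]))

    classes = {}
    for x in parent:                 # group elements by representative
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

# two "port sets" glued along a single shared control port (hypothetical names)
A = {"q_out", "d_out"}               # e.g. ports of a left operand
B = {"q_in", "d_in"}                 # e.g. ports of a right operand
C = {"*"}                            # a one-port apex, as in the unit computon
print(pushout(A, B, C, g={"*": "q_out"}, h={"*": "q_in"}))
# -> three equivalence classes: q_out and q_in merged; d_out alone; d_in alone
```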
Nevertheless, when there is such a span whose legs are connected computons, the respective pushout operation results in another connected computon (see Proposition 9).

**Proposition 9**.: Let \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) be a pushable span of computon morphisms. If \(\lambda_{1}\) and \(\lambda_{2}\) are connected computons, then the pushout of \(\alpha_{1}\) and \(\alpha_{2}\) yields a connected computon.

Proof.: Let \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) be a pushable span of computon morphisms and assume that \(\lambda_{1}\) and \(\lambda_{2}\) are connected computons. By Proposition 8, \((\beta_{1}:\lambda_{1}\rightarrow\lambda_{3},\lambda_{3},\beta_{2}:\lambda_{2}\rightarrow\lambda_{3})\) can be constructed from \(\alpha_{1}\) and \(\alpha_{2}\). To prove that \(\lambda_{3}\) is a connected computon, it suffices to show that \(p_{3}\xrightarrow{*}q_{3}\) holds for an arbitrary e-inport \(p_{3}\in P_{3}^{+}\) and an arbitrary e-outport \(q_{3}\in P_{3}^{-}\). By Proposition 2 and by the fact that \(P_{3}=P_{1}+_{P_{0}}P_{2}\), we know that there is some \(p_{i}\in P_{i}^{+}\) and some \(q_{j}\in P_{j}^{-}\) where \(\beta_{i}(p_{i})=p_{3}\) and \(\beta_{j}(q_{j})=q_{3}\) such that \(i,j\in\{1,2\}\).

* If \(i=j\), then \(p_{i}\xrightarrow{*}q_{j}\) trivially holds because \(\lambda_{1}\) and \(\lambda_{2}\) are connected computons. By the commutativity property, \(\beta_{i}(p_{i})\xrightarrow{*}\beta_{j}(q_{j})\) also holds.
* If \(i\neq j\), then \(\beta_{i}(p_{i})\xrightarrow{*}\beta_{j}(q_{j})\) follows from the connectivity of \(\lambda_{1}\) and \(\lambda_{2}\) and from the fact that there is always a port \(p_{0}\in\{p\in P_{0}\mid c_{0}(p)=0\}\) where \(\beta_{1}(\alpha_{1}(p_{0}))=\beta_{2}(\alpha_{2}(p_{0}))\).

As \(p_{3}\xrightarrow{*}q_{3}\) is true in both cases, we conclude that \(\lambda_{3}\) is a connected computon.

**Definition 18** (Sequential Computon).: Let \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) be a span of computon morphisms. We say that the pushout of \(\alpha_{1}\) and \(\alpha_{2}\) yields a _sequential computon_ if and only if (i) \(\lambda_{0}\) is a trivial computon with \(P_{0}=\vec{i}(\alpha_{1})\cap\vec{o}(\alpha_{2})\), (ii) \(\lambda_{1}\) and \(\lambda_{2}\) are connected computons, (iii) \(\alpha_{1}(\vec{o}(\alpha_{2}))\subseteq P_{1}^{-}\) and (iv) \(\alpha_{2}(\vec{i}(\alpha_{1}))\subseteq P_{2}^{+}\).

**Proposition 10**.: If a span \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) of computon morphisms produces a sequential computon, then it is pushable.

Proof.: If \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) is a span of computon morphisms that produces a sequential computon (see Definition 18), \(p_{0}\in P_{0}\implies p_{0}\in\vec{i}(\alpha_{1})\cap\vec{o}(\alpha_{2})\) so \(\alpha_{1}(p_{0})\in P_{1}^{-}\) and \(\alpha_{2}(p_{0})\in P_{2}^{+}\). That is, \(\alpha_{1}(p_{0})\bullet=\emptyset=\bullet\alpha_{2}(p_{0})\). As \(p_{0}\bullet=\emptyset=\bullet p_{0}\) (because \(\lambda_{0}\) is a trivial computon), it follows that \(\alpha_{1}(p_{0})\bullet\setminus\alpha_{1}(p_{0}\bullet)=\emptyset=\bullet\alpha_{2}(p_{0})\setminus\alpha_{2}(\bullet p_{0})\), meaning that \(p_{0}\not\in\vec{o}(\alpha_{1})\cup\vec{i}(\alpha_{2})\) and, hence, \(\vec{o}(\alpha_{1})\cup\vec{i}(\alpha_{2})=\emptyset\).
For \(j\in\{1,2\}\), we observe that \(P_{j}^{+}\neq\emptyset\neq P_{j}^{-}\) (by Definition 1) and that \(\lambda_{j}\) is not a trivial computon because it is a connected computon (see Proposition 1). Thus, \(\alpha_{1}(\vec{i}(\alpha_{2}))\cup\alpha_{1}(\vec{o}(\alpha_{2}))\subseteq P_{1}^{-}\subset P_{1}^{+}\cup P_{1}^{-}\) and \(\alpha_{2}(\vec{i}(\alpha_{1}))\cup\alpha_{2}(\vec{o}(\alpha_{1}))\subseteq P_{2}^{+}\subset P_{2}^{+}\cup P_{2}^{-}\).

**Corollary 1**.: If a span \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) of computon morphisms produces a sequential computon, then \(\vec{o}(\alpha_{1})\cup\vec{i}(\alpha_{2})=\emptyset\).

Proof.: The proof follows from Proposition 10.

**Remark 3**.: Given a span \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) of computon morphisms satisfying the conditions of Definition 18, we call \(\lambda_{0}\) the apex computon, \(\lambda_{1}\) the left operand and \(\lambda_{2}\) the right operand.

When an apex computon is embedded into all the e-outports of the left operand and into all the e-inports of the right one, we say that the computon operands are totally sequentiable; otherwise, we say that they are partially sequentiable (see Definition 19). By Proposition 11, total sequentiality implies partial sequentiality.

**Definition 19** (Total and Partial Sequential Composition).: Let \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) be a pushable span of computon morphisms whose pushout forms a sequential computon. We say that \(\lambda_{1}\) is totally sequentiable with \(\lambda_{2}\), written \(\lambda_{1}\unrhd\lambda_{2}\), if \(\alpha_{1}(\vec{o}(\alpha_{2}))=P_{1}^{-}\) and \(\alpha_{2}(\vec{i}(\alpha_{1}))=P_{2}^{+}\). If \(\alpha_{1}(\vec{o}(\alpha_{2}))\subset P_{1}^{-}\) or \(\alpha_{2}(\vec{i}(\alpha_{1}))\subset P_{2}^{+}\), then \(\lambda_{1}\) is partially sequentiable with \(\lambda_{2}\), written \(\lambda_{1}\rhd\lambda_{2}\).

**Proposition 11**.: If there is a span \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) of computon morphisms where \(\lambda_{1}\unrhd\lambda_{2}\), there is a span of the form \(\lambda_{1}\leftarrow\Lambda\rightarrow\lambda_{2}\) where \(\lambda_{1}\rhd\lambda_{2}\).

Proof.: Let \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) be a span of computon morphisms where \(\lambda_{1}\unrhd\lambda_{2}\). Since \(\lambda_{0}\) is required to have at least one ec-inoutport (see Definition 1), we know that there exists at least one computon morphism \(\alpha_{0}:\Lambda\rightarrow\lambda_{0}\) where \(\alpha_{0}(p)\in P_{0}\iff\alpha_{0}(p)\in\vec{i}(\alpha_{1})\cap\vec{o}(\alpha_{2})\) (see Definitions 18 and 19). Consequently, a span \(\lambda_{1}\xleftarrow{\alpha_{1}\circ\alpha_{0}}\Lambda\xrightarrow{\alpha_{2}\circ\alpha_{0}}\lambda_{2}\) exists. As \(\alpha_{0}(p)\in\vec{o}(\alpha_{2})\) and \(\alpha_{1}(\vec{o}(\alpha_{2}))=P_{1}^{-}\) (because \(\lambda_{1}\unrhd\lambda_{2}\)), we have that \(\alpha_{1}(\alpha_{0}(p))\in P_{1}^{-}\). The fact that \(\lambda_{1}\) is a connected computon implies that there exists some computation unit \(u_{1}\in\bullet\alpha_{1}(\alpha_{0}(p))\) so that \(\bullet\alpha_{1}(\alpha_{0}(p))\neq\emptyset\) (see Proposition 1).
As \(\bullet p=\emptyset\) because \(\Lambda\) is a trivial computon, \(\bullet\alpha_{1}(\alpha_{0}(p))\setminus\alpha_{1}(\alpha_{0}(\bullet p))\neq\emptyset\) holds, meaning that \(p\in\vec{i}(\alpha_{1}\circ\alpha_{0})\) (see Definition 7). A similar reasoning can be used to deduce \(\alpha_{2}(\alpha_{0}(p))\in P_{2}^{+}\) and \(p\in\vec{o}(\alpha_{2}\circ\alpha_{0})\). The facts \(\alpha_{1}(\alpha_{0}(p))\in P_{1}^{-}\) and \(p\in\vec{o}(\alpha_{2}\circ\alpha_{0})\) together imply that \(\alpha_{1}(\alpha_{0}(\vec{o}(\alpha_{2}\circ\alpha_{0})))\subseteq P_{1}^{-}\). Similarly, \(\alpha_{2}(\alpha_{0}(p))\in P_{2}^{+}\) and \(p\in\vec{i}(\alpha_{1}\circ\alpha_{0})\) imply that \(\alpha_{2}(\alpha_{0}(\vec{i}(\alpha_{1}\circ\alpha_{0})))\subseteq P_{2}^{+}\). Consequently, as \(\vec{i}(\alpha_{2}\circ\alpha_{0})=\emptyset=\vec{o}(\alpha_{1}\circ\alpha_{0})\) because \(\bullet\alpha_{2}(\alpha_{0}(p))=\alpha_{1}(\alpha_{0}(p))\bullet=\bullet p=p\bullet=\emptyset\), we have that the span \(\lambda_{1}\xleftarrow{\alpha_{1}\circ\alpha_{0}}\Lambda\xrightarrow{\alpha_{2}\circ\alpha_{0}}\lambda_{2}\) is pushable (see Definition 17). Now:

1. If \(|P_{1}^{-}|=1\) then \(\alpha_{1}(\alpha_{0}(\vec{o}(\alpha_{2}\circ\alpha_{0})))=P_{1}^{-}\) because \(|P|=1\). As every set is a subset of itself, \(\alpha_{1}(\alpha_{0}(\vec{o}(\alpha_{2}\circ\alpha_{0})))\subset P_{1}^{-}\).
2. If \(|P_{1}^{-}|>1\) then \(\alpha_{1}(\alpha_{0}(\vec{o}(\alpha_{2}\circ\alpha_{0})))\subset P_{1}^{-}\) because \(\alpha_{1}\circ\alpha_{0}\) is monic and \(|P|=1\).
3. Analogously, if \(|P_{2}^{+}|\geq 1\), then \(\alpha_{2}(\alpha_{0}(\vec{i}(\alpha_{1}\circ\alpha_{0})))\subset P_{2}^{+}\).

The pushout of \(\alpha_{1}\circ\alpha_{0}\) and \(\alpha_{2}\circ\alpha_{0}\) can always be computed because the span they form is pushable (see Proposition 8). As it satisfies the requirements for partial sequentiality too (see (1)-(3) and Definition 19), we conclude that it forms a sequential computon with \(\lambda_{1}\rhd\lambda_{2}\).

Any computon operand can be put in any order within a sequential computon, since control e-inports and control e-outports always possess the same colour (i.e., zero). This property, combined with the fact that a computon always has at least one ec-inport and at least one ec-outport, allows us to compose any two connected computons sequentially regardless of the data they require or produce (see Theorem 1). Composing two connected computons sequentially results in another connected computon (see Proposition 12).

**Theorem 1**.: Let \(\lambda_{1}\) and \(\lambda_{2}\) be two computons. Then, \(\lambda_{1}\rhd\lambda_{2}\iff\lambda_{1}\) and \(\lambda_{2}\) are connected computons.

Proof.: (\(\implies\)) This part of the proof follows directly from Definition 18. (\(\Longleftarrow\)) Assuming that \(\lambda_{1}\) and \(\lambda_{2}\) are connected computons, we first prove that there exists a span \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) of computon morphisms whose pushout forms a sequential computon. For this, we choose \(\lambda_{0}\) to be \(\Lambda\) which is a trivial computon with a unique port \(p\in P^{+}\cap P^{-}\) and \(c(p)=0\) (see Definition 11). Below we construct computon morphisms \(\alpha_{1}:\Lambda\rightarrow\lambda_{1}\) and \(\alpha_{2}:\Lambda\rightarrow\lambda_{2}\) by only considering port injection because \(E=F=U=\emptyset\) and \(\Sigma=\{0\}\).
Since any computon has at least one ec-inport and at least one ec-outport, we have that \(Q_{1}^{-}\neq\emptyset\neq Q_{2}^{+}\). If we trivially define \(\alpha_{1}(p)=p_{1}\in Q_{1}^{-}\) and \(\alpha_{2}(p)=p_{2}\in Q_{2}^{+}\), we have that \(p\in\vec{i}(\alpha_{1})\cap\vec{o}(\alpha_{2})\) because \(\lambda_{1}\) and \(\lambda_{2}\) are connected computons (see Proposition 1). As \(P=\{p\}\), \(P=\vec{i}(\alpha_{1})\cap\vec{o}(\alpha_{2})\) so that \(\alpha_{1}(\vec{o}(\alpha_{2}))\subseteq Q_{1}^{-}\subseteq P_{1}^{-}\) and \(\alpha_{2}(\vec{i}(\alpha_{1}))\subseteq Q_{2}^{+}\subseteq P_{2}^{+}\). The equation \(\bullet\alpha_{2}(p)=\alpha_{1}(p)\bullet=\bullet p=p\bullet=\emptyset\) allows us to determine that \(\lambda_{1}\xleftarrow{\alpha_{1}}\lambda_{0}\xrightarrow{\alpha_{2}}\lambda_{2}\) is pushable. Since \(\Lambda\) is a trivial computon, we deduce that the pushout of \(\alpha_{1}\) and \(\alpha_{2}\) forms a sequential computon. If \(\alpha_{1}(\vec{o}(\alpha_{2}))\subset Q_{1}^{-}\) then \(\alpha_{1}(\vec{o}(\alpha_{2}))\subset P_{1}^{-}\) because \(Q_{1}^{-}\subseteq P_{1}^{-}\). Similarly, \(\alpha_{2}(\vec{i}(\alpha_{1}))\subset Q_{2}^{+}\) implies \(\alpha_{2}(\vec{i}(\alpha_{1}))\subset P_{2}^{+}\) because \(Q_{2}^{+}\subseteq P_{2}^{+}\). Thus, \(\lambda_{1}\rhd\lambda_{2}\) holds in both cases. Now, when \(\alpha_{1}(\vec{o}(\alpha_{2}))=Q_{1}^{-}\), we have two possibilities: \(Q_{1}^{-}\subset P_{1}^{-}\) or \(Q_{1}^{-}=P_{1}^{-}\). If \(Q_{1}^{-}\subset P_{1}^{-}\), then \(\alpha_{1}(\vec{o}(\alpha_{2}))\subset P_{1}^{-}\) so that \(\lambda_{1}\rhd\lambda_{2}\) holds. A similar approach can be used to prove that \(\lambda_{1}\rhd\lambda_{2}\) is true when both \(\alpha_{2}(\vec{i}(\alpha_{1}))=Q_{2}^{+}\) and \(Q_{2}^{+}\subset P_{2}^{+}\) are true. The only scenario in which \(\lambda_{1}\unrhd\lambda_{2}\) holds is when \(\lambda_{1}\) possesses only one ec-outport with no ed-outports and \(\lambda_{2}\) has only one ec-inport with no ed-inports. More precisely, \(\lambda_{1}\unrhd\lambda_{2}\) holds when \(\alpha_{1}(\vec{o}(\alpha_{2}))=Q_{1}^{-}=P_{1}^{-}\) and \(\alpha_{2}(\vec{i}(\alpha_{1}))=Q_{2}^{+}=P_{2}^{+}\). As Proposition 11 states that \(\lambda_{1}\unrhd\lambda_{2}\implies\lambda_{1}\rhd\lambda_{2}\), we conclude that for every pair \((\lambda_{1},\lambda_{2})\) of connected computons, \(\lambda_{1}\rhd\lambda_{2}\) holds.

**Proposition 12**.: A sequential computon is a connected computon.

Proof.: The proof follows directly from Proposition 9 and Definition 18.

Theorem 1 is important for our theory since it entails that any two connected computons can always be composed sequentially. Although an apex computon always exists, it is important to note that it does not need to correspond to the entire common part between the e-outports of the left operand and the e-inports of the right operand. By common, we mean ports sharing the same colour. Figure 6 depicts a scenario of this sort. Figure 6 shows that, when a computon \(\lambda_{1}\) is partially composed with a computon \(\lambda_{2}\), there is an implicit effect in which all the e-outports of \(\lambda_{1}\) that are not in the image of \(\alpha_{1}\) become e-outports in the sequential computon (let us call it \(\lambda_{3}\)). Similarly, all the e-inports of \(\lambda_{2}\) that are not in the image of \(\alpha_{2}\) become e-inports in \(\lambda_{3}\).
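The generative effect just described can be seen concretely in the following Python sketch (illustrative only; it realises the pushout of Definition 18 by simply renaming the glued ports rather than by the categorical construction, and all port, unit and edge names are hypothetical): ports identified through the apex become i-ports of the composite, while unglued e-inports and e-outports of the operands remain external.

```python
# Illustrative sketch only: partial sequential composition lambda1 |> lambda2,
# realised by renaming each chosen e-outport of lambda1 to the e-inport of
# lambda2 it is glued to (the apex ports of Definition 18). A computon is a
# dict with colours "c", in-edges "F" (port, unit) and out-edges "E" (unit, port).

def compose_seq(lam1, lam2, glue):
    """glue maps e-outports of lam1 to e-inports of lam2 of the same colour."""
    rename = lambda p: glue.get(p, ("L", p))     # only applied to lam1's ports
    ports = {rename(p): col for p, col in lam1["c"].items()}
    ports.update(lam2["c"])                      # lam2's port names are kept
    units = {("L", u) for u in lam1["U"]} | {("R", u) for u in lam2["U"]}
    F = {("L", f): (rename(p), ("L", u)) for f, (p, u) in lam1["F"].items()}
    F.update({("R", f): (p, ("R", u)) for f, (p, u) in lam2["F"].items()})
    E = {("L", e): (("L", u), rename(p)) for e, (u, p) in lam1["E"].items()}
    E.update({("R", e): (("R", u), p) for e, (u, p) in lam2["E"].items()})
    return {"U": units, "c": ports, "F": F, "E": E}

lam1 = {"U": {"u"}, "c": {"q1": 0, "a": 0, "b": 1},
        "F": {"f": ("q1", "u")}, "E": {"e1": ("u", "a"), "e2": ("u", "b")}}
lam2 = {"U": {"v"}, "c": {"x": 0, "y": 1, "q2": 0},
        "F": {"g1": ("x", "v"), "g2": ("y", "v")}, "E": {"h": ("v", "q2")}}

seq = compose_seq(lam1, lam2, glue={"a": "x"})   # glue one control port only
targets = {p for (_, p) in seq["E"].values()}    # Im(t)
sources = {p for (p, _) in seq["F"].values()}    # Im(s)
assert "x" in targets and "x" in sources         # glued port became an i-port
assert ("L", "b") in targets - sources           # unglued e-outport stays external
assert "y" in sources - targets                  # unglued e-inport stays external
```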
Naturally, this generative effect does not occur in the case of total sequential composition since the images of the computon morphisms involved would cover every e-outport of the left operand and every e-inport of the right one. Instead, each \(p_{1}\in P_{1}^{-}\) and each \(p_{2}\in P_{2}^{+}\) would be mapped to an i-port of \(\lambda_{3}\). No matter whether \(\lambda_{1}\unrhd\lambda_{2}\) or \(\lambda_{1}\rhd\lambda_{2}\), it is true that \((\forall p_{1}\in P_{1}^{+})(\exists p_{3}\in P_{3}^{+})[\beta_{1}(p_{1})=p_{ 3}]\) and \((\forall p_{2}\in P_{2}^{-})(\exists q_{3}\in P_{3}^{-})[\beta_{2}(p_{2})=q_{ 3}]\) (see Proposition 13). **Proposition 13**.: If \((\beta_{1}:\lambda_{1}\rightarrow\lambda_{3},\lambda_{3},\beta_{2}:\lambda_{2 }\rightarrow\lambda_{3})\) is the pushout of computon morphisms \(\alpha_{1}:\lambda_{0}\rightarrow\lambda_{1}\) and \(\alpha_{2}:\lambda_{0}\rightarrow\lambda_{2}\) with \(\lambda_{1}\unrhd\lambda_{2}\), then \(\beta_{1}(P_{1}^{+})=P_{3}^{+}\) and \(\beta_{2}(P_{2}^{-})=P_{3}^{-}\). Proof.: Considering the commutative diagram presented in Definition 16 and assuming that \(\lambda_{1}\unrhd\lambda_{2}\), below we just show that \(\beta_{1}(P_{1}^{+})=P_{3}^{+}\) since the proof of \(\beta_{2}(P_{2}^{-})=P_{3}^{-}\) is completely analogous. Assume for contrapositive that \(p_{3}\notin P_{3}^{+}\) so there is some \(e_{3}\in E_{3}\) where \(t_{3}(e_{3})=p_{3}\). The fact \(E_{3}=E_{1}+_{E_{0}}E_{2}\) entails that there are three possibilities: (i) there exclusively is some \(e_{1}\in E_{1}\) where \(\beta_{1}(e_{1})=e_{3}\), (ii) there exclusively is some \(e_{2}\in E_{2}\) where \(\beta_{2}(e_{2})=e_{3}\) or (iii) there are \(e_{4}\in E_{1}\) and \(e_{5}\in E_{2}\) such that \(\beta_{1}(e_{4})=e_{3}=\beta_{2}(e_{5})\). The third scenario never holds since \(\lambda_{0}\) is trivial by the fact \(\lambda_{1}\unrhd\lambda_{2}\). So, we just prove for (i) and (ii). For (i), \(\beta_{1}(t_{1}(e_{1}))=t_{3}(\beta_{1}(e_{1}))=t_{3}(e_{3})=p_{3}\) implies that there is some \(p_{1}\in P_{1}\) for which \(t_{1}(e_{1})=p_{1}\) and \(\beta_{1}(p_{1})=p_{3}\). Consequently, by Definition 2, \(p_{1}\notin P_{1}^{+}\) so \(p_{3}\notin\beta_{1}(P_{1}^{+})\). If (ii) holds, \(\beta_{2}(t_{2}(e_{2}))=t_{3}(\beta_{2}(e_{2}))=t_{3}(e_{3})=p_{3}\) implies that there is some \(p_{2}\in P_{2}\) for which \(t_{2}(e_{2})=p_{2}\) and \(\beta_{2}(p_{2})=p_{3}\). If there is some \(p_{1}\in P_{1}\) where \(\beta_{1}(p_{1})=p_{3}=\beta_{2}(p_{2})\), there is some \(p_{0}\in P_{0}\) where \(\alpha_{1}(p_{0})=p_{1}\) and \(\alpha_{2}(p_{0})=p_{2}\). As \((\nexists e_{1}\in E_{1})[\beta_{1}(e_{1})=e_{3}=\beta_{2}(e_{2})]\) and \(\sigma_{2}\) is surjective, \(p_{0}\in\vec{i}(\alpha_{2})\) which contradicts the fact \(\vec{i}(\alpha_{2})=\emptyset\) (see Corollary 1). So, there is no \(p_{1}\in P_{1}\) where \(\beta_{1}(p_{1})=p_{3}=\beta_{2}(p_{2})\). That is, \(p_{3}\notin\beta_{1}(P_{1}^{+})\). Proving \(p_{3}\notin P_{3}^{+}\implies p_{3}\notin\beta_{1}(P_{1}^{+})\) in the above cases entails that \(\beta_{1}(P_{1}^{+})\subseteq P_{3}^{+}\). We now prove that \(P_{3}^{+}\subseteq\beta_{1}(P_{1}^{+})\) also holds. 
If we let \(q_{3}\in P_{3}^{+}\), by Proposition 2 and by the fact \(P_{3}=P_{1}+_{P_{0}}P_{2}\), we have three options: (1) there exclusively is some \(q_{1}\in P_{1}^{+}\) such that \(\beta_{1}(q_{1})=q_{3}\), (2) there exclusively is some \(q_{2}\in P_{2}^{+}\) such that \(\beta_{2}(q_{2})=q_{3}\) or (3) there are \(q_{4}\in P_{1}^{+}\) and \(q_{5}\in P_{2}^{+}\) such that \(\beta_{1}(q_{4})=q_{3}=\beta_{2}(q_{5})\). If (1) is true, then \(q_{3}\in\beta_{1}(P_{1}^{+})\) follows directly. We now show that (2) and (3) do not hold. Supposing (2) is true, \(q_{2}\in\alpha_{2}(\vec{i}(\alpha_{1}))\) because \(\lambda_{1}\unrhd\lambda_{2}\) implies \(\alpha_{2}(\vec{i}(\alpha_{1}))=P_{2}^{+}\). Therefore, \((\exists q_{0}\in P_{0}\cap\vec{i}(\alpha_{1}))(\exists q_{1}\in P_{1})[\alpha_{1}(q_{0})=q_{1}\ \land\ \alpha_{2}(q_{0})=q_{2}]\). As commutativity contradicts (2), there is no \(q_{2}\in P_{2}^{+}\) such that \(\beta_{2}(q_{2})=q_{3}\in P_{3}^{+}\). That is, \(q_{3}\notin\beta_{2}(P_{2}^{+})\). To disprove (3), we deduce by commutativity the existence of some \(q\in P_{0}\) where \(\alpha_{1}(q)=q_{4}\) and \(\alpha_{2}(q)=q_{5}\). Since \(q_{4}\in P_{1}^{+}\) and \(q_{5}\in P_{2}^{+}\), Proposition 2 says that \(q\in P_{0}^{+}\). The fact that \(\lambda_{2}\) is a connected computon and that \(\alpha_{2}(q)=q_{5}\in P_{2}^{+}\) entail that \(q\in\vec{o}(\alpha_{2})\). Because \(\lambda_{1}\unrhd\lambda_{2}\), it is true that \(\alpha_{1}(\vec{o}(\alpha_{2}))=P_{1}^{-}\) and, consequently, that \(\alpha_{1}(q)=q_{4}\in P_{1}^{-}\). But \(\lambda_{1}\) is also a connected computon, so \(q_{4}\in P_{1}^{+}\cap P_{1}^{-}\) cannot hold (see Proposition 1). As this contradicts our initial assumption, (3) cannot hold either. Hence, only (1) is possible, so \(q_{3}\in\beta_{1}(P_{1}^{+})\), proving that \(P_{3}^{+}\subseteq\beta_{1}(P_{1}^{+})\) and, therefore, that \(\beta_{1}(P_{1}^{+})=P_{3}^{+}\).

Although Figure 6 shows an example of partial sequential composition, the same computon operands can be used to perform total sequential composition. This is because, in this case, there exists an apex computon that can be inserted into all the e-outports of \(\lambda_{1}\) and into all the e-inports of \(\lambda_{2}\). Such an apex does not always exist so partiality does not imply totality and, thus, the reverse of Proposition 11 does not hold. Proposition 11 combined with Theorem 1 states that if any two connected computons can be composed totally, they can also be composed partially.

### Parallel Computons

Parallel composition is an operation that combines coproduct and pushout constructions for defining a parallel computon in \(\mathbf{Set}^{\mathbf{Comp}}\). As per Proposition 14, the coproduct of two computons can always be computed in this category.

**Proposition 14**.: The coproduct \(\lambda_{1}+\lambda_{2}\) of computons \(\lambda_{1}\) and \(\lambda_{2}\) exists in \(\mathbf{Set}^{\mathbf{Comp}}\).

Proof.: The coproduct \(\lambda_{3}\) of a computon \(\lambda_{1}\) and a computon \(\lambda_{2}\), written \(\lambda_{1}+\lambda_{2}\), is obtained by computing the following in \(\mathbf{Set}\): \(P_{3}=P_{1}+P_{2}\), \(U_{3}=U_{1}+U_{2}\), \(E_{3}=E_{1}+E_{2}\), \(F_{3}=F_{1}+F_{2}\) and \(\Sigma_{3}=\Sigma_{1}+_{\Sigma_{1}\cap\Sigma_{2}}\Sigma_{2}\).
Particularly, the pushout operation to obtain \(\Sigma_{3}\) is valid since the span \(\Sigma_{1}\hookleftarrow\Sigma_{1}\cap\Sigma_{2}\hookrightarrow\Sigma_{2}\) of inclusion functions always exists in \(\mathbf{Set}\) (because \(\Sigma_{1},\Sigma_{2}\subset\mathbb{N}\)). Thus, \(\Sigma_{3}=\Sigma_{1}+_{\Sigma_{1}\cap\Sigma_{2}}\Sigma_{2}=\Sigma_{1}\cup\Sigma_{2}\). As \(c_{3}\) is canonically the mapping \(P_{1}+P_{2}\rightarrow\Sigma_{1}\cup\Sigma_{2}\) and \((\forall i\in\{1,2\})(\forall p\in P_{i})(\exists x\in\Sigma_{i})[c_{i}(p)=x]\), it follows that \(c_{3}\) is surjective. All the functions of \(\lambda_{3}\), including \(c_{3}\), are defined in the obvious way to make the corresponding squares commute. The existence of each component of \(\beta_{j}:\lambda_{j}\to\lambda_{1}+\lambda_{2}\) (for \(j=1,2\)) follows directly from the fact that \(\mathbf{Set}\) has all coproducts and all pushouts. Particularly, the \(\Sigma\)-component of \(\beta_{j}\) is an inclusion map because \(\Sigma_{j}\subseteq\Sigma_{1}\cup\Sigma_{2}\). Furthermore, \(\vec{i}(\beta_{j})\cup\vec{o}(\beta_{j})=\emptyset\subseteq P_{j}^{+}\cup P_{j}^{-}\) because \(U_{3}\) is computed as the disjoint union of \(U_{1}\) and \(U_{2}\). Coproduct and pushout satisfy the universal property in \(\mathbf{Set}\) so coproduct in \(\mathbf{Set}^{\mathbf{Comp}}\) also satisfies it. This means that, if there is a computon \(\lambda_{4}\) with morphisms \(\gamma_{1}:\lambda_{1}\to\lambda_{4}\) and \(\gamma_{2}:\lambda_{2}\to\lambda_{4}\), there is a unique morphism \(\gamma_{3}:\lambda_{3}\to\lambda_{4}\) such that \(\gamma_{3}\circ\beta_{1}=\gamma_{1}\) and \(\gamma_{3}\circ\beta_{2}=\gamma_{2}\). To preserve commutativity, an \(A\)-component of \(\gamma_{3}\) is given as follows: \[\forall a_{3}\in A,\gamma_{3}(a_{3})=\begin{cases}\gamma_{1}(a_{1})&\text{if $a_{3}=\beta_{1}(a_{1})$ for some $a_{1}\in A_{1}$}\\ \gamma_{2}(a_{2})&\text{if $a_{3}=\beta_{2}(a_{2})$ for some $a_{2}\in A_{2}$}\end{cases}\] By the above and by the fact that the \(\Sigma\)-components of \(\gamma_{1}\) and \(\gamma_{2}\) are both inclusion functions, it is easy to see that the \(\Sigma\)-component of \(\gamma_{3}\) is also an inclusion. We now just have to prove \(\vec{i}(\gamma_{3})\cup\vec{o}(\gamma_{3})\subseteq P_{3}^{+}\cup P_{3}^{-}\). Below we provide the proof of \(\vec{i}(\gamma_{3})\subseteq P_{3}^{+}\cup P_{3}^{-}\) since the other is completely analogous. Let \(p_{3}\in\vec{i}(\gamma_{3})\) so \(\bullet\gamma_{3}(p_{3})\setminus\gamma_{3}(\bullet p_{3})\neq\emptyset\). As \(P_{3}=P_{1}+P_{2}\), we observe that \(p_{3}=\beta_{k}(p_{k})\) for some \(p_{k}\in P_{k}\) (\(k=1,2\)). Using a similar reasoning as the proof of Proposition 8, we deduce \(p_{k}\in\vec{i}(\gamma_{k})\) which implies \(p_{k}\in P_{k}^{+}\cup P_{k}^{-}\) because \(\gamma_{k}\) is a computon morphism with \(\vec{i}(\gamma_{k})\subseteq P_{k}^{+}\cup P_{k}^{-}\) (see Definition 7). As \(\vec{i}(\beta_{k})\cup\vec{o}(\beta_{k})=\emptyset\), \(P_{k}\cap\vec{i}(\beta_{k})=\emptyset=P_{k}\cap\vec{o}(\beta_{k})\) and, consequently, \(\beta_{k}^{-1}(P_{3}^{+})=P_{k}^{+}\) and \(\beta_{k}^{-1}(P_{3}^{-})=P_{k}^{-}\) (see Proposition 3). That is, \(p_{k}\in P_{k}^{+}\cup P_{k}^{-}\iff p_{k}\in\beta_{k}^{-1}(P_{3}^{+})\cup\beta_{k}^{-1}(P_{3}^{-})\). Using the fact \(p_{3}=\beta_{k}(p_{k})\), we conclude that \(p_{3}\in P_{3}^{+}\cup P_{3}^{-}\).
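For completeness, the coproduct of Proposition 14 can be pictured as a tagged disjoint union whose colour set is simply the union \(\Sigma_{1}\cup\Sigma_{2}\). The Python sketch below (illustrative only; the dictionary encoding and the example operands are assumptions of ours) computes it for a fork and a join computon, the juxtaposition that will reappear as the common object in Definition 20.

```python
# Illustrative sketch only: the coproduct lambda1 + lambda2 of Proposition 14,
# i.e. the tagged disjoint union of all components, with the colour sets united.

def coproduct(lam1, lam2):
    def tag(side, lam):
        return {
            "U": {(side, u) for u in lam["U"]},
            "c": {(side, p): col for p, col in lam["c"].items()},
            "F": {(side, f): ((side, p), (side, u)) for f, (p, u) in lam["F"].items()},
            "E": {(side, e): ((side, u), (side, p)) for e, (u, p) in lam["E"].items()},
        }
    left, right = tag("L", lam1), tag("R", lam2)
    return {
        "U": left["U"] | right["U"],
        "c": {**left["c"], **right["c"]},
        "F": {**left["F"], **right["F"]},
        "E": {**left["E"], **right["E"]},
        "Sigma": set(lam1["c"].values()) | set(lam2["c"].values()),
    }

fork = {"U": {"u"}, "c": {"i": 0, "o1": 0, "o2": 0},
        "F": {"f": ("i", "u")}, "E": {"e1": ("u", "o1"), "e2": ("u", "o2")}}
join = {"U": {"v"}, "c": {"i1": 0, "i2": 0, "o": 0},
        "F": {"g1": ("i1", "v"), "g2": ("i2", "v")}, "E": {"h": ("v", "o")}}

# a fork-plus-join juxtaposition, as used when forming a parallel computon
both = coproduct(fork, join)
assert len(both["U"]) == 2 and len(both["c"]) == 6 and both["Sigma"] == {0}
```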
Structurally, a parallel computon consists of a fork computon, two connected computons (i.e., the operands) and one join computon. The role of the fork and join computons is to split and synchronise control to/from the operands, respectively. Definition 20 formalises this construction. **Definition 20** (Parallel Computon).: Let \(\lambda_{i}\) be a computon and \(\alpha_{j}\) be a computon morphism for \(i=0,\ldots,16\) and \(j=1,\ldots,26\). We say that the pushout of \(\alpha_{23}\) and \(\alpha_{24}\) forms a parallel computon if the following diagram commutes in \(\mathbf{Set}^{\mathbf{Comp}}\) (diagram not reproduced here: it relates the computons \(\lambda_{0},\ldots,\lambda_{16}\) and the coproduct \(\lambda_{2}+\lambda_{10}\) through the morphisms \(\alpha_{1},\ldots,\alpha_{26}\); the composite of its morphisms is shown in Figure 7), provided that:
1. \(\lambda_{i}\cong\Lambda\) for all \(i\in\{0,1,8,9\}\),
2. \(\lambda_{3}\) and \(\lambda_{11}\) are connected computons,
3. \(\lambda_{4}\) and \(\lambda_{10}\) are fork computons,
4. \(\lambda_{2}\) and \(\lambda_{12}\) are join computons,
5. \(\lambda_{5}\) is a sequential computon with \(\lambda_{3}\rhd\lambda_{2}\),
6. \(\lambda_{6}\) is a sequential computon with \(\lambda_{4}\rhd\lambda_{3}\),
7. \(\lambda_{13}\) is a sequential computon with \(\lambda_{10}\rhd\lambda_{11}\),
8. \(\lambda_{14}\) is a sequential computon with \(\lambda_{11}\rhd\lambda_{12}\),
9. \(\vec{o}(\alpha_{23})\cap\vec{o}(\alpha_{24})=\emptyset\) and
10. \(\vec{i}(\alpha_{23})\cap\vec{i}(\alpha_{24})=\emptyset\).
**Notation 4**.: Considering the construction described in Definition 20, we write \(\lambda_{3}\mid\lambda_{11}\) for the parallel computon \(\lambda_{16}\). Here, \(\lambda_{3}\) and \(\lambda_{11}\) are called the computon operands. A glance at the diagram presented in Definition 20 reveals that there are four pushout operations producing a sequential computon each (as per Definition 18). If a pushout does not follow the restrictions imposed by such a definition (i.e., it freely behaves as in Definition 16), then it just "merges" two computons via some common object. The apex computon represents the common part between the computons being merged. In the commutative diagram of Definition 20, the squares marked with \(M\) are pushouts of this sort, whereas the squares marked with \(S\) represent pushouts constructing a sequential computon. Figure 7 depicts the commutative diagram resulting from composing the morphisms of the above diagram. 
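Since each square of the diagram is, carrier set by carrier set, an ordinary pushout in \(\mathbf{Set}\), a small set-level example may help build intuition for the "merging" performed by the squares marked \(M\). The sketch below (our own illustration, not the paper's formalism) glues two finite sets along a span by identifying the two images of every apex element.

```python
# Pushout of a span  B <-f- A -g-> C  in Set: take the disjoint union of B and
# C and identify f(a) with g(a) for every a in the apex A (union-find gluing).
def pushout(A, B, C, f, g):
    elements = [("B", b) for b in B] + [("C", c) for c in C]
    parent = {e: e for e in elements}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a in A:
        parent[find(("B", f[a]))] = find(("C", g[a]))   # glue f(a) ~ g(a)

    classes = {}
    for e in elements:
        classes.setdefault(find(e), set()).add(e)
    return list(classes.values())   # each equivalence class is one element

# toy example: glue two port sets along a one-element apex (cf. a unit computon)
P = pushout(A={"a"}, B={"p1", "p2"}, C={"q1", "q2"},
            f={"a": "p2"}, g={"a": "q1"})
print(len(P))   # 3: p2 and q1 have been identified, the other ports survive
```

The squares marked \(S\) use the same underlying construction, subject to the extra restrictions of Definition 18.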
The simplification in Figure 7 is useful to understand the building blocks that are needed to construct a parallel computon, namely four trivial computons, a fork computon, a join computon and two connected computons. Specifically, the simplified diagram shows that \(\lambda_{0},\lambda_{1},\lambda_{8}\) and \(\lambda_{9}\) are isomorphic to the unit computon, i.e., they are trivial computons with only one ec-inoutput each (see Definition 11). The symbols \(\lambda_{3}\) and \(\lambda_{11}\) represent connected computons (i.e., the operands) being put in parallel via a fork and a join computon. As every fork computon is trivially isomorphic to any other fork computon, we have that \(\lambda_{4}\cong\lambda_{10}\). The same is true for the join computons \(\lambda_{2}\) and \(\lambda_{12}\). The coproduct \(\lambda_{2}+\lambda_{10}\) is then a juxtaposition of a fork computon and a join computon, which represents the common part between \(\lambda_{7}\) and \(\lambda_{15}\). This coproduct serves as a common object for the pushout of \(\alpha_{23}\) and \(\alpha_{24}\), i.e., for constructing the parallel computon \(\lambda_{16}\) (also written \(\lambda_{3}\mid\lambda_{11}\) -- see Notation 4). Figure 8 provides a complete, self-descriptive example for constructing a parallel computon from the connected computons used in our example of sequential composition (see Figure 6). Figure 7: Commutative diagram resulting from composing the morphisms of the diagram of Definition 20. For increased clarity, some composite morphisms are omitted, e.g., \(\alpha_{26}\circ\alpha_{19}\circ\alpha_{15}\circ\alpha_{11}:\lambda_{8} \rightarrow\lambda_{16}\). A glance at Figure 8 reveals that \(\lambda_{1}|\lambda_{2}\) is constructed from pushouts that rely on unit computons as apices. Consequently, only an ec-inport \(p_{1}\in Q_{1}^{+}\), an ec-outport \(p_{2}\in Q_{1}^{-}\), an ec-inport \(q_{1}\in Q_{2}^{+}\) and an ec-outport \(q_{2}\in Q_{2}^{-}\) become i-ports in \(\lambda_{1}|\lambda_{2}\). The rest of the e-inports and e-outports of the operands become e-inports and e-outports in \(\lambda_{1}|\lambda_{2}\), respectively. This structural implication is derived from the fact that fork and join computons have control ports only; so, unlike sequential composition, \(\lambda_{1}\) and \(\lambda_{2}\) do not exchange data when composed in parallel. To ensure a consistent construction of the parallel computon \(\lambda_{1}|\lambda_{2}\), Condition 9 of Definition 20 intuitively says that \(p_{1}\) and \(q_{1}\) cannot be mapped to the same ec-outport of the fork computon. A similar constraint is imposed by Condition 10, which states that \(p_{2}\) and \(q_{2}\) cannot be mapped to the same ec-inport of the join computon. Another difference with respect to sequential composition is that the order of the operands does not matter. So, even if \(\lambda_{1}\) and \(\lambda_{2}\) are interchanged in the construction of Definition 20, we would have the same result as before, i.e., \((\lambda_{1}|\lambda_{2})\cong(\lambda_{2}|\lambda_{1})\). Although commutativity differs in sequential and parallel composition, the result in both operations is always a connected computon (see Proposition 15). Also, like sequential composition, any two computons can be put in parallel regardless of the data they require or produce (see Theorem 2). **Proposition 15**.: A parallel computon is a connected computon. 
Proof.: Let \(\lambda_{3}|\lambda_{11}\) be the parallel computon constructed from the commutative diagram presented in Definition 20. By Proposition 7, we have that \(\lambda_{2}\), \(\lambda_{4}\), \(\lambda_{10}\) and \(\lambda_{12}\) are connected computons (see Conditions 3 and 4). As \(\lambda_{3}\) and \(\lambda_{11}\) are also connected (see Condition 2), Proposition 9 entails that \(\lambda_{7}\) and \(\lambda_{15}\) are connected too. Even though \(\lambda_{2}+\lambda_{10}\) is not a connected computon, we use Proposition 9 again to deduce that \(\lambda_{3}|\lambda_{11}\) is. **Theorem 2**.: \(\lambda_{1}\) and \(\lambda_{2}\) are connected computons \(\iff\) the computons \(\lambda_{1}|\lambda_{2}\) and \(\lambda_{2}|\lambda_{1}\) exist. Proof.: (\(\impliedby\) ) This part of the proof follows directly from Definition 20. (\(\implies\) ) Let \(\lambda_{3}\) and \(\lambda_{11}\) be two connected computons, \(\lambda_{4}\) and \(\lambda_{10}\) be two fork computons, \(\lambda_{2}\) and \(\lambda_{12}\) be two join computons, and \(\lambda_{i}\cong\Lambda\) for all \(i\in\{0,1,8,9\}\). We first construct computon morphisms \(\alpha_{1}:\lambda_{0}\to\lambda_{2}\), \(\alpha_{2}:\lambda_{0}\to\lambda_{3}\), \(\alpha_{3}:\lambda_{1}\to\lambda_{3}\), \(\alpha_{4}:\lambda_{1}\to\lambda_{4}\), \(\alpha_{11}:\lambda_{8}\to\lambda_{10}\), \(\alpha_{12}:\lambda_{8}\to\lambda_{11}\), \(\alpha_{13}:\lambda_{9}\to\lambda_{11}\) and \(\alpha_{14}:\lambda_{9}\to\lambda_{12}\). As their domain is a unit computon, each of these morphisms has the same simple shape: the only morphism components that are not empty functions are those mapping ports and colours, respectively. As the set of colours of \(\Lambda\) is \(\{0\}\) and \(0\) is in the set of colours of every computon, the \(\Sigma\)-component can be defined in the obvious way to yield an inclusion function. Now, if \(p_{i}\in P_{i}\), the \(P\)-component of each morphism is given as follows: \(\alpha_{1}(p_{0})\in Q_{2}^{+}\), \(\alpha_{2}(p_{0})\in Q_{3}^{-}\), \(\alpha_{3}(p_{1})\in Q_{3}^{+}\), \(\alpha_{4}(p_{1})\in Q_{4}^{-}\), \(\alpha_{11}(p_{8})\in Q_{10}^{-}\), \(\alpha_{12}(p_{8})\in Q_{11}^{+}\), \(\alpha_{13}(p_{9})\in Q_{11}^{-}\) and \(\alpha_{14}(p_{9})\in Q_{12}^{+}\). Since \(\lambda_{10}\) and \(\lambda_{11}\) are connected computons and \(\lambda_{8}\) is trivial (see Proposition 1), it follows that \(p_{8}\in\vec{i}(\alpha_{11})\cap\vec{o}(\alpha_{12})\) and, consequently, that \(P_{8}=\vec{i}(\alpha_{11})\cap\vec{o}(\alpha_{12})\) because \(P_{8}=\{p_{8}\}\). The facts \(p_{8}\in\vec{i}(\alpha_{11})\cap\vec{o}(\alpha_{12})\), \(\alpha_{11}(p_{8})\in Q_{10}^{-}\) and \(\alpha_{12}(p_{8})\in Q_{11}^{+}\) allow us to further deduce \(\alpha_{11}(\vec{o}(\alpha_{12}))\subseteq Q_{10}^{-}\subseteq P_{10}^{-}\) and \(\alpha_{12}(\vec{i}(\alpha_{11}))\subseteq Q_{11}^{+}\subseteq P_{11}^{+}\). In particular, \(\alpha_{11}(\vec{o}(\alpha_{12}))\subset P_{10}^{-}\) because \(\alpha_{11}\) is monic, \(|P_{8}|=1\) and \(|P_{10}^{-}|=|Q_{10}^{-}|=2\) (see Remark 1 and Definitions 11 and 13). Since \(\alpha_{11}(p_{8})\in Q_{10}^{-}\) and \(\alpha_{12}(p_{8})\in Q_{11}^{+}\), \(\alpha_{11}(p_{8})\bullet=\emptyset=\bullet\alpha_{12}(p_{8})\) so \(p_{8}\notin\vec{o}(\alpha_{11})\cup\vec{i}(\alpha_{12})\). 
The fact \(P_{8}=\{p_{8}\}\) allows us to derive \(\vec{o}(\alpha_{11})\cup\vec{i}(\alpha_{12})=\emptyset\) so that \(\alpha_{11}(\vec{i}(\alpha_{12}))=\emptyset\subset P_{10}^{+}\cup P_{10}^{-}\) and \(\alpha_{12}(\vec{o}(\alpha_{11}))=\emptyset\subset P_{11}^{+}\cup P_{11}^ {-}\). Above we showed that \(\alpha_{11}(\vec{o}(\alpha_{12}))\subset P_{10}^{-}\subset P_{10}^{+}\cup P _{10}^{-}\) and that \(\alpha_{12}(\vec{i}(\alpha_{11}))\subseteq P_{11}^{+}\subset P_{11}^{+}\cup P_{11}^ {-}\). Therefore, \(\lambda_{10}\stackrel{{\alpha_{11}}}{{\longleftarrow}}\lambda_{8} \stackrel{{\alpha_{12}}}{{\longrightarrow}}\lambda_{11}\) is pushable (see Definition 17). By Proposition 8 and Definition 18, the pushout \((\alpha_{15}:\lambda_{10}\to\lambda_{13},\lambda_{13},\alpha_{16}:\lambda_{11} \to\lambda_{13})\) of \(\alpha_{11}\) and \(\alpha_{12}\) can be constructed to form a sequential computon with \(\lambda_{10}\rhd\lambda_{11}\). A similar reasoning can be used to deduce the existence of \((\alpha_{6}:\lambda_{3}\to\lambda_{5},\lambda_{5},\alpha_{5}:\lambda_{2}\to \lambda_{5})\) from \(\lambda_{3}\stackrel{{\alpha_{2}}}{{\longleftarrow}}\lambda_{0} \stackrel{{\alpha_{1}}}{{\longrightarrow}}\lambda_{2}\), \((\alpha_{8}:\lambda_{4}\to\lambda_{6},\lambda_{6},\alpha_{7}:\lambda_{3}\to \lambda_{6})\) from \(\lambda_{4}\stackrel{{\alpha_{4}}}{{\longleftarrow}}\lambda_{1} \stackrel{{\alpha_{3}}}{{\longrightarrow}}\lambda_{3}\) and \((\alpha_{17}:\lambda_{11}\to\lambda_{14},\lambda_{14},\alpha_{18}:\lambda_{12}\to \lambda_{14})\) from \(\lambda_{11}\stackrel{{\alpha_{13}}}{{\longleftarrow}}\lambda_{9} \stackrel{{\alpha_{14}}}{{\longrightarrow}}\lambda_{12}\) such that \(\lambda_{3}\rhd\lambda_{2}\), \(\lambda_{4}\rhd\lambda_{3}\) and \(\lambda_{11}\rhd\lambda_{12}\). Now, as \(\alpha_{13}(p_{9})\in Q_{11}^{-}\) and \(\alpha_{14}(p_{9})\in Q_{12}^{+}\), it is routine to show that \(\lambda_{13}\stackrel{{\alpha_{16}}}{{\longleftarrow}}\lambda_{11} \stackrel{{\alpha_{17}}}{{\longrightarrow}}\lambda_{14}\) is pushable too. So, the pushout \((\alpha_{19}:\lambda_{13}\to\lambda_{15},\lambda_{15},\alpha_{20}:\lambda_{14} \to\lambda_{15})\) of \(\alpha_{16}\) and \(\alpha_{17}\) can be constructed. A similar approach can be used to prove that the span \(\lambda_{6}\stackrel{{\alpha_{7}}}{{\longleftarrow}}\lambda_{3} \stackrel{{\alpha_{6}}}{{\longrightarrow}}\lambda_{5}\) yields \((\alpha_{10}:\lambda_{6}\rightarrow\lambda_{7},\lambda_{7},\alpha_{9}:\lambda_{5} \rightarrow\lambda_{7})\). Evidently, these two pushouts do not produce sequential computons because \(\lambda_{3}\) and \(\lambda_{11}\) are not trivial computons. The fact that \(\lambda_{2}\) and \(\lambda_{12}\) are both join computons entails that \(\lambda_{2}\cong\lambda_{12}\); consequently, there is a computon isomorphism \(\alpha_{27}:\lambda_{2}\rightarrow\lambda_{12}\). By composing morphisms, we obtain the composites \(\alpha_{19}\circ\alpha_{15}:\lambda_{10}\rightarrow\lambda_{15}\) and \(\alpha_{20}\circ\alpha_{18}\circ\alpha_{27}:\lambda_{2}\rightarrow\lambda_{15}\). As the coproduct \(\lambda_{2}+\lambda_{10}\) exists (as per Proposition 14), we know that there also are computon morphisms \(\alpha_{21}:\lambda_{2}\rightarrow\lambda_{2}+\lambda_{10}\) and \(\alpha_{22}:\lambda_{10}\rightarrow\lambda_{2}+\lambda_{10}\). 
The existence of \(\alpha_{19}\circ\alpha_{15}\) and \(\alpha_{20}\circ\alpha_{18}\circ\alpha_{27}\) allows us to use the universal property of coproducts to deduce that there is a unique computon morphism \(\alpha_{24}:\lambda_{2}+\lambda_{10}\rightarrow\lambda_{15}\). A similar reasoning can be used to deduce a unique \(\alpha_{23}:\lambda_{2}+\lambda_{10}\rightarrow\lambda_{7}\). Again, it is routine to show that the span \(\lambda_{15}\xleftarrow{\alpha_{24}}\lambda_{2}+\lambda_{10}\xrightarrow{ \alpha_{23}}\lambda_{7}\) is pushable. So, using Proposition 8 we deduce that the pushout \((\alpha_{25}:\lambda_{7}\rightarrow\lambda_{3}|\lambda_{11},\lambda_{3}| \lambda_{11},\alpha_{26}:\lambda_{15}\rightarrow\lambda_{3}|\lambda_{11})\) of \(\alpha_{23}\) and \(\alpha_{24}\) can be constructed. This pushout does not form a sequential computon because \(\lambda_{2}+\lambda_{10}\) is not a trivial computon. Since \(\lambda_{3}\) is a connected computon and \(\alpha_{3}(p_{1})\in Q_{3}^{+}\), \(p_{1}\in\vec{o}(\alpha_{3})\) so \(\alpha_{3}(p_{1})\bullet\neq\emptyset\). By commutativity, \(\alpha_{7}(\alpha_{3}(p_{1}))=\alpha_{8}(\alpha_{4}(p_{1}))\) so \(\alpha_{8}(\alpha_{4}(p_{1}))\bullet\neq\emptyset\) and, thereby, \(\alpha_{10}(\alpha_{8}(\alpha_{4}(p_{1})))\bullet\neq\emptyset\). As \(\lambda_{4}\cong\lambda_{10}\), there exists an isomorphism \(\alpha_{28}:\lambda_{4}\rightarrow\lambda_{10}\) where \(\alpha_{23}\circ\alpha_{22}\circ\alpha_{28}\circ\alpha_{4}=\alpha_{10}\circ \alpha_{8}\circ\alpha_{4}\) so \(\alpha_{23}(\alpha_{22}(\alpha_{28}(\alpha_{4}(p_{1}))))\bullet\neq\emptyset\). Since \(p_{1}\bullet=\emptyset\) and \(\vec{o}(\alpha_{4})=\vec{o}(\alpha_{28})=\vec{o}(\alpha_{22})=\emptyset\), it follows that \(\alpha_{22}(\alpha_{28}(\alpha_{4}(p_{1})))\bullet=\emptyset\). Hence, \(\alpha_{22}(\alpha_{28}(\alpha_{4}(p_{1})))\in\vec{o}(\alpha_{23})\). The fact \(\lambda_{1}\cong\lambda_{8}\) allows us to deduce the existence of a unique isomorphism \(\alpha_{29}:\lambda_{1}\rightarrow\lambda_{8}\) where \(\alpha_{11}\circ\alpha_{29}\neq\alpha_{28}\circ\alpha_{4}\), i.e., \(\alpha_{11}(\alpha_{29}(p_{1}))=\alpha_{11}(p_{8})\neq\alpha_{28}(\alpha_{4}(p_ {1}))\). Using again \(p_{1}\bullet=\emptyset\) and \(\vec{o}(\alpha_{4})=\vec{o}(\alpha_{28})=\emptyset\), we have \(\alpha_{28}(\alpha_{4}(p_{1}))\bullet=\emptyset\) and, consequently, \(\alpha_{15}(\alpha_{28}(\alpha_{4}(p_{1})))\bullet=\emptyset\). By the above and since \(\alpha_{16}(\alpha_{12}(p_{8}))\neq\alpha_{15}(\alpha_{28}(\alpha_{4}(p_{1})))\), \(\alpha_{19}(\alpha_{15}(\alpha_{28}(\alpha_{4}(p_{1}))))\bullet=\emptyset\). Using commutativity, we obtain \(\alpha_{24}\circ\alpha_{22}\circ\alpha_{28}\circ\alpha_{4}=\alpha_{19}\circ \alpha_{15}\circ\alpha_{28}\circ\alpha_{4}\) and, consequently, \(\alpha_{24}(\alpha_{22}(\alpha_{28}(\alpha_{4}(p_{1}))))\bullet=\emptyset\). As \(\alpha_{22}(\alpha_{28}(\alpha_{4}(p_{1})))\bullet=\emptyset\), \(\alpha_{22}(\alpha_{28}(\alpha_{4}(p_{1})))\notin\vec{o}(\alpha_{24})\). A similar reasoning can show \(\alpha_{22}(\alpha_{11}(\alpha_{29}(p_{1})))\in\vec{o}(\alpha_{24})\) and \(\alpha_{22}(\alpha_{11}(\alpha_{29}(p_{1})))\notin\vec{o}(\alpha_{23})\). 
As \(\alpha_{22}(\alpha_{11}(\alpha_{29}(p_{1})))\neq\alpha_{22}(\alpha_{28}(\alpha _{4}(p_{1})))\) (because \(\alpha_{11}\circ\alpha_{29}\neq\alpha_{28}\circ\alpha_{4}\)) and every fork computon has exactly two ec-outports, \(\vec{\omega}(\alpha_{23})\cap\vec{\omega}(\alpha_{24})=\emptyset\). The proof for \(\vec{i}(\alpha_{23})\cap\vec{i}(\alpha_{24})=\emptyset\) follows analogously. Since our construction satisfies all the conditions of Definition 20, the proof of existence of \(\lambda_{3}|\lambda_{11}\) is complete. To prove that \(\lambda_{11}|\lambda_{3}\) also exists, we just need to repeat all the steps of this proof by writing \(\lambda_{3}\) for \(\lambda_{11}\) and \(\lambda_{11}\) for \(\lambda_{3}\). Doing this is completely valid since, by Definition 1, every connected computon has at least one ec-inport and least one ec-outport. ## 7 Related Work In this section, we present the related work of our proposal, namely related compositional approaches and component models that separate concerns. Prosave [18] is a design language built on top of the ProCom component model, which was inspired on [19] to allow the definition of nested structures of interconnected components. Like computons, Procom components are passive units of computation with explicit separation of control ports and data ports. Despite of this similarity, Procom is not compositional since it does not provide algebraic operators to perform control-based composition, but just informal programming constructs for connecting ports either directly or indirectly. Indirect connection is done through so-called connectors which establish control or data flow interaction between components via message passing. As the model is not compositional, Procom composites do not offer a clear separation of concerns like their internal components. They rather have data ports only where both control flow and data flow terminate. SCADE [20] is a similar component model which integrates an imperative language (i.e., Esterel [21]) and a functional language (i.e., Lustre [22]) to define control flow and data flow, respectively. Particularly, so-called Safe-State Machines (SSMs) model the discrete control part of a system, whereas Lustre blocks serve to continuously process data. Like Prosave, SCADE does not provide formal operators for defining control-based composite blocks, but just programming constructs to non-compositionally assemble a system. In the same line of work, [9, 23, 24] describe a component model that provides two orthogonal dimensions to manage control flow and data flow separately. The model encapsulates control since it offers composition operators to define sequential, parallel or branchial composites in a hierarchical, bottom-up manner. Unfortunately, the semantics of the model is semi-formal [4] so it is not possible to precisely determine whether the model is fully compositional or not. Also, components do not have separate ports for data and control, but just control ports. Consequently, the data dimension is implicitly defined in the underlying composition mechanism whose goal is to build complex workflows from simpler ones. Workflow Nets (WF-nets) [25, 26] provide support for modelling workflow processes in the form of control-driven computations. As they offer well-founded semantics, WF-nets formalise the notion of workflow graphs which are traditionally specified through industry-oriented languages such as UML diagrams, Event-driven Process Chains or the Business Process Modelling Notation (BPMN). 
WF-nets do not separate control from data and do not provide operators for explicitly and compositionally defining parallel or sequential composites. The issue of the separation of concerns is solved by RWFN-nets [27] which unify extended WF-nets and so-called resource nets for separating the process and resource perspectives of a workflow. Although the model provides a clear separation of concerns, it is difficult to see the relationship between data flow and control flow. In fact, the nets for both perspectives are disconnected from each other and synchronisation between them is required. Also, there is not a clear distinction between input and output data, and composition is not algebraically defined. Therefore, like WF-nets, RWFN-nets are not compositional. Other Petri net based approaches, for workflow construction, that non-compositionally separate control flow and data flow are the functor model [28], extended-time nets [29], the FunState model [30] and dual flow nets [7]. Existing compositional approaches built upon Petri net foundations rely on the notion of open places which collectively form an interface to the external world. These specially designated places are particularly used by open Petri nets (ONets) [31] to construct complex behaviours from simpler ones. In this framework, ONet composition is realised by gluing the output places of one net with the input places of another. As this composition mechanism is characterised as a pushout in a categorical setting [32], ONets are compositional. An ONet morphism resembles a computon morphism in the sense that input and output places can be preserved upon transformation (see Proposition 2). Nevertheless, like Definition 16, a pushout operation just serves for merging two ONets via a common object so that there are no specific operators for explicitly defining sequential or parallel composites (i.e., ONets do not encapsulate control flow). Petri box calculus [33], Open WF-nets [34], Whole-grain Petri nets [16], Petri nets with interface [35], nets with boundaries [36] and Petri net components [37] also rely on the notion of open places as interfaces. Like ONets, all these Petri-net-based approaches neither encapsulate control flow nor separate data from control. Although they do not separate concerns, Whole-grain Petri nets deserve a more detailed analysis since, unlike their similars and like computons, they abolish the traditional notion of multisets of places, typically expressed as the free monoid \(\mathbb{N}[P]\). Accordingly, they also work upon a similar categorical scheme to \(\mathbf{Comp}\), in order to define concrete instances of Whole-grain nets (cf. [15, 16]). The difference is that \(\mathbf{Comp}\) has objects that enable computons to have a clear distinction between control and data ports. Another difference is that our theory identifies particular classes of computon objects that can be used as building blocks to define more complex computons through sequencing or parallelising operations. Although primitive computons are isomorphic to Whole-grain corollas, Whole-grain Petri nets do not distinguish between different types of corollas (e.g., join or fork corollas). Within the realm of related compositional models, we also find string diagrams [38, 39] which offer well-founded syntax to graphically represent morphisms of symmetric monoidal categories. A string diagram is made up boxes connected through wires, where boxes represent processes and wires express inputs or outputs for those processes. 
As this model is rooted in category theory, string diagrams can be composed sequentially via the \(\circ\) operator or in parallel via the tensor product. Since sequential composition is done by totally matching outputs with inputs (or domain with codomain) and there are not specifically designed wires for representing control, it follows that, unlike computons, not every string diagram can be composed sequentially with one another (cf. Theorem 1). Consequently, there is no distinction between control flow and data flow. A glance at Figures 6 and 8 reveals that the structure of a composite computon is like a membrane in which other computons reside and that can be part of another membrane/composite. An edge connected to/from a composite e-port is akin to a fiber. This fiber can traverse other membranes as long as the e-port it is connected to/from does not become an i-port. This analogy resembles the structural organisation of a P-system [40] in which membranes are delimiting compartments where multisets of objects evolve according to bio-inspired rules. Unfortunately, a P-System is not compositional since there are no formal operators to compositionally define membranes. Also, the model does not separate data and control. In fact, such dimensions are implicitly defined in the evolution of internal objects. ## 8 Conclusions and Future Directions In this paper, we presented a constructive model of computation in which computons are first-class semantic entities which structurally possess a number of computation units that can be connected to/from two types of ports: control ports and data ports. Such entities are objects in a functor category, denoted \(\mathbf{Set}^{\mathbf{Comp}}\), where two major classes of objects reside. The first class is that of trivial computons which have just ports and no computation units. The second class pertains to primitive computons which are fully connected entities in the sense they have a unique computation unit to which all ports are attached. These two classes serve as building blocks to define complex computons via category-theoretic operations. We particularly presented operations to inductively form either sequential or parallel composite computons. As the model is compositional, composites exhibit the same properties as their constituents, i.e., they have the same structure with a clear separation of control flow and data flow. Generally speaking, both control flow and data flow are inextricably present in any classical computation, so it is crucial to separately reason about them for verification, maintainability and optimisation purposes. For example, by leveraging the fact that the behaviour of a computon can be expressed as a token game, it is possible to use standard Petri net tools or relevant graph-based analysis techniques to separately verify computing properties, such as reachability of control flow and data flow. Taking advantage of graph-based techniques can also enable an optimal implementation in which functional computons exchange data decentrally while composites coordinate control flow hierarchically [9]. Although we decided not to include explicit structures for data processing (e.g., map-reduce or filter constructs), because data flow is ultimately governed by control flow, we acknowledge that introducing them is important to increase the expressivity of composite computons. However, doing this in a compositional manner requires further investigation. 
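To make the token-game remark in the Conclusions slightly more concrete, the following sketch (our own illustration, not tooling referenced in this paper) runs a naive reachability search over the markings of a tiny 1-safe net; dedicated Petri net tools implement far more efficient versions of the same check.

```python
# Naive breadth-first reachability over markings of a small 1-safe net.
# A marking is a frozenset of marked places; a transition (pre, post) is
# enabled when every place in `pre` is marked, and firing it moves the tokens.
from collections import deque

def reachable_markings(initial, transitions):
    seen = {frozenset(initial)}
    queue = deque(seen)
    while queue:
        marking = queue.popleft()
        for pre, post in transitions:
            if pre <= marking:                              # enabled?
                nxt = frozenset((marking - pre) | post)     # fire
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

# toy control flow: start --fork--> {a, b} --join--> done
transitions = [
    (frozenset({"start"}), frozenset({"a", "b"})),
    (frozenset({"a", "b"}), frozenset({"done"})),
]
markings = reachable_markings({"start"}, transitions)
print(frozenset({"done"}) in markings)   # True: the final marking is reachable
```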
Enabling compositionality is also important to induce modularity which is a well-known feature for reusing computations at scale. Modularity does not imply compositionality because modules can be constructed in many different ways (not necessarily algebraically). When an algebraic composition mechanism is used to realise this feature, computation properties are preserved across all composition levels. In our proposal, the separation of control flow and data flow is one of such properties. Thus, as computons only interact through their respective e-ports, composite computons can be perceived as modular black-boxes that encapsulate some precise control-flow structure. At the moment, we only consider sequential and parallel control flow. In the future, we would like to prove the universality of our model by providing category-theoretic constructions for defining iterative and branchial computons. Branching and iteration are merely processes for choosing and repeating sequential computations, so we conjecture that the yet-to-be-defined categorical constructions could be built upon sequential pushouts (see Definition 18). In Sections 6.1 and 6.2, we proved that any pair of connected computons in \(\mathbf{Set}^{\mathbf{Comp}}\) can always be composed sequentially (see Theorem 1) or in parallel (see Theorem 2) regardless of the data they require or produce. This is because, intuitively, a computon has at least an ec-output that can always be matched with the ec-inport of another one. Matching all the e-outports of one computon with all the e-inports of another one gives rise to total sequential composition which, to the best of our knowledge, is the _de facto_ way of sequencing computations nowadays (cf. [41]). In this paper, we argue that sequencing is a particular form of merging because the former can be expressed in terms of the latter. Particularly, in our proposal, merging corresponds to a pushout operation in \(\mathbf{Set}^{\mathbf{Comp}}\) (see Definition 16), while sequencing is characterised as a pushout with restrictions in the same category (see Definition 18). As sequencing cannot only be done totally but also partially (see Definition 19), our sequencing mechanism is more general than those prevailing in the existing literature. Partial composition entails that non-matching e-ports are preserved across every composition level (see Figures 6 and 8). If computons are seen as relations from e-inports to e-outports, our composition mechanism provides the basis to redefine the current notion of composition of relations which states that the composite \(S\circ R\) of \(R\subseteq X\times Y\) and \(S\subseteq Y\times Z\) is given by \(\{(x,z)\mid\exists y[R(x,y)\;\wedge\;S(y,z)]\}\). Since \((S\circ R)\) is a subset of \(X\times Z\), it is evident that some relations in \(X\times Y\) and in \(Y\times Z\) are lost. By resorting to the foundations laid in this paper, a preservative definition emerges: \((S\circ R)\cup[(X\times Y)\setminus(S\circ R)]\cup[(Y\times Z)\setminus(S \circ R)]\). Thus, rather than being a subset of \(X\times Z\), a composite relation would be a subset of \((X\cup Y)\times(Y\cup Z)\). In the future, we would like to further investigate this preservative notion. Defining computons as preorders in a categorical setting can be achieved by borrowing ideas from resource theories [39]. We hypothesize that there are symmetric monoidal categories in which computons are morphisms and ports are objects. 
Defining categories of this sort can be helpful to study the operational semantics of composite computons through the arrow of time. Particularly, _v-categories_ and _v-profunctors_ can provide theoretical underpinnings for formally answering questions about the execution of computons. ## Appendix A Table 1 presents a mapping from Petri net syntax to computon syntax, which is useful to discuss the operational semantics of computons. A glance at this table reveals that, in general, places with no incoming arrows correspond to e-inports, whereas places with no outgoing arrows correspond to e-outports. This reflects the fact that e-inports and e-outports receive and send information from/to the external world.
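The observation above amounts to a simple computation on a net's arc structure. The sketch below (illustrative only; representing a net by the pre- and post-sets of its transitions is our own choice) classifies places into the e-inport-like and e-outport-like roles described here.

```python
# Places with no incoming arc behave like e-inports; places with no outgoing
# arc behave like e-outports (they only receive/send information externally).
def classify_places(places, transitions):
    """transitions: iterable of (pre_places, post_places) pairs."""
    has_incoming, has_outgoing = set(), set()
    for pre, post in transitions:
        has_outgoing |= set(pre)    # an arc place -> transition leaves the place
        has_incoming |= set(post)   # an arc transition -> place enters the place
    e_inports = {p for p in places if p not in has_incoming}
    e_outports = {p for p in places if p not in has_outgoing}
    return e_inports, e_outports

# toy net: p1 -> t1 -> p2 -> t2 -> p3
places = {"p1", "p2", "p3"}
transitions = [({"p1"}, {"p2"}), ({"p2"}, {"p3"})]
print(classify_places(places, transitions))   # ({'p1'}, {'p3'})
```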
2301.13806
Cavity-enhanced excitation of a quantum dot in the picosecond regime
A major challenge in generating single photons with a single emitter is to excite the emitter while avoiding laser leakage into the collection path. Ideally, any scheme to suppress this leakage should not result in a loss in efficiency of the single-photon source. Here, we investigate a scheme in which a single emitter, a semiconductor quantum dot, is embedded in a microcavity. The scheme exploits the splitting of the cavity mode into two orthogonally-polarised modes: one mode is used for excitation, the other for collection. By linking experiment to theory, we show that the best population inversion is achieved with a laser pulse detuned from the quantum emitter. The Rabi oscillations have an unusual dependence on pulse power. Our theory describes them quantitatively, allowing us to determine the absolute photon creation probability. For the optimal laser detuning, the population inversion is 98\%. The Rabi oscillations depend on the sign of the laser-pulse detuning. We show that this arises from the non-trivial effect of phonons on the exciton dynamics. The exciton-phonon interaction is included in the theory and gives excellent agreement with all the experimental results.
Alisa Javadi, Natasha Tomm, Nadia O. Antoniadis, Alistair J. Brash, Rüdiger Schott, Sascha R. Valentin, Andreas D. Wieck, Arne Ludwig, Richard J. Warburton
2023-01-31T17:47:57Z
http://arxiv.org/abs/2301.13806v1
# Cavity-enhanced excitation of a quantum dot in the picosecond regime ###### Abstract A major challenge in generating single photons with a single emitter is to excite the emitter while avoiding laser leakage into the collection path. Ideally, any scheme to suppress this leakage should not result in a loss in efficiency of the single-photon source. Here, we investigate a scheme in which a single emitter, a semiconductor quantum dot, is embedded in a microcavity. The scheme exploits the splitting of the cavity mode into two orthogonally-polarised modes: one mode is used for excitation, the other for collection. By linking experiment to theory, we show that the best population inversion is achieved with a laser pulse detuned from the quantum emitter. The Rabi oscillations have an unusual dependence on pulse power. Our theory describes them quantitatively allowing us to determine the absolute photon creation probability. For the optimal laser detuning, the population inversion is 98%. The Rabi oscillations depend on the sign of the laser-pulse detuning. We show that this arises from the non-trivial effect of phonons on the exciton dynamics. The exciton-phonon interaction is included in the theory and gives excellent agreement with all the experimental results. ## I Introduction Quantum emitters efficiently interfaced with optical cavities represent primary components in photonic quantum technologies. They are used for generating quantum states of light such as single photons and entangled states. The generation efficiency requirement is strict, with many proposals requiring efficiencies higher than 90% [1; 2]. Generating photonic quantum states requires coherent control over the quantum emitter, which is often carried out using fast laser pulses. The main challenges are to ensure that the laser pulse results in occupation of the upper level with near-unity probability and that laser light does not enter the collection mode. Another major complication for solid-state quantum emitters is the interaction with the environmental degrees of freedom, in particular the acoustic phonons [3; 4]. Several approaches have been developed to separate the excitation pulse from the generated photons. One method excites the emitter via non-cavity modes with a propagation direction perpendicular to the cavity axis, a scheme which is often used to generate photons from atoms and ions [5; 6]. In the solid-state domain, non-resonant excitation schemes, such as a phonon-assisted mechanism [7; 8; 9; 10], allow spectral filtering of the laser pulse. However, the essential spectral filtering unavoidably reduces the efficiency of the source. Additionally, phonon-assisted schemes require large pulse areas [8]. The pulse area can be reduced to the minimum, \(\pi\), by exciting the quantum emitter resonantly. In such schemes, the collection and excitation modes have a different spatial [11] or polarization degree-of-freedom [12]. It is challenging to avoid losses. For instance, the cross-polarized scheme can result in the loss of 50% of the generated photons. In many cases, the cavity mode splits into two modes with orthogonal polarization, a consequence of weak birefringence either in the mirrors or in the solid-state host. This mode structure offers a solution to the excitation-collection challenge. One cavity-mode, resonant with the quantum emitter, is used for collection; the other cavity-mode is used for excitation [13]. 
Furthermore, if the quantum emitter has a circularly-polarized optical dipole-moment, a cross-polarized detection scheme does not compromise the efficiency of the source [13]. The scheme was originally developed for quantum dots (QDs) in semiconductor micropillars, for which the cavity mode-splitting is induced by an elliptical pillar cross-section [13]. It was subsequently employed in a QD-in-open-cavity device [14]. In this case, the mode-splitting arises from birefringence in the semiconductor heterostructure, and it can be tuned via the electro-optic [15] and photo-elastic effects [16]. Here, we probe both experimentally and theoretically the cavity-based excitation-collection scheme using a QD coupled to a one-sided open-microcavity, Fig. 1a. The experiment explores the dependence on both laser detuning and pulse power. The theoretical model describes the effect of the cavity on the excitation pulse. It also includes the exciton-phonon interaction. The model describes the experimental results precisely allowing us to quantify the photon creation probability, to understand the unusual Rabi oscillations, and to predict the behaviour as a function of cavity-mode splitting. Cavity-mediated excitation of a two-level system We consider initially pulsed excitation of a two-level system (TLS) in the ideal limit (absence of decay and dephasing processes), where the pulse duration is significantly shorter than the lifetime of the emitter. In the case of a transform-limited pulse, a resonant pulse drives the TLS around the Bloch-sphere, inverting its population from the ground state \(|g\rangle\) to the excited state \(|e\rangle\), black dots in Fig. 1b. On increasing the pulse area, the population rotates coherently around the Bloch-sphere, resulting in the well-known Rabi-oscillations, gray line in Fig. 1c. Another well-known case is the dynamics of a TLS under excitation with a strongly chirped pulse, rapid adiabatic passage [17; 18]. The TLS interacts with different frequency components present in the pulse at different instants in time, resulting in a different trajectory on the Bloch-sphere, blue dots Fig. 1b. Starting at the south pole of the Bloch sphere, the state tends to gravitate to the north pole, making the excited state population insensitive to the pulse area, Fig. 1c. In the case of a cavity-mediated excitation with a detuned pulse, a Gaussian-shaped pulse is convoluted with the time-response of the cavity itself. Effectively, the cavity acts as a dispersive filter, altering the spectral profile of the pulse such that it can no longer be described by a Gaussian profile in the frequency domain. Figure 1d shows the spectral configuration of the original laser pulse and the cavity modes. The TLS is resonant with the higher-frequency cavity mode (H-polarized), and the laser pulse is launched via the lower-frequency (V-polarized) cavity mode. The red curves in Fig. 1b,c show the evolution of the TLS as a function of the input pulse area. A full population in the excited state can be obtained, but depletion of the population is incomplete at higher pulse areas. At high pulse areas, the excited state population converges to a constant value lying between 0 and 100% dependending on the detuning. This behaviour at high pulse areas is quite different to the other two excitation mechanisms. ## III Experimental results We use a QD in an open microcavity [14] to study the cavity-mediated excitation scheme. 
The cavity hosts two modes with orthogonal polarization and a mode-splitting of 50 GHz. The laser pulses have a temporal width (intensity full-width at half maximum) between 3.6 and 5.0 ps. We use the polarization of the excitation laser to select the excitation cavity mode. Figure 1d shows the case where the QD is on resonance with the higher-frequency (H-polarized) mode and the laser is launched through the lower-frequency (V-polarized) mode, the "blue" collection case. Figure 2a shows the calculated intra-cavity field in this configuration. As evident from the spectrum, the intra-cavity field has a strong peak at frequencies below the resonance of the QD. Figure 2b shows the spectrum in the inverted case, the "red" collection case: the QD is on resonance with the lower-frequency (V-polarized) mode and the laser is launched through the higher-frequency (H-polarized) mode. We present first experimental results on the "blue" collection case. Figure 2c shows the normalized single-photon rate as a function of the input laser power. The laser is detuned from the QD by \(\Delta\omega_{\mathrm{L}}/(2\pi)=88\) GHz. We observe the expected oscillatory behaviour along with strong damping as the laser power is increased. The damping is a result of the interaction between the QD exciton, the red-detuned components of the intra-cavity field, and the phonons in the environment of the QD. In particular, the damping can be depicted as the process in which the QD decays by emitting a photon on resonance with the intra-cavity peak and a phonon, depicted in the inset of Fig. 2c. The process is enhanced by the large amplitude of the intra-cavity field, and hence we observe stronger decays for increasing laser powers. Processes of this nature have already been observed in pump-probe Figure 1: Excitation mechanisms. (a) Schematic picture of a quantum emitter coupled to a one-sided cavity. The cavity’s fundamental mode is split into two non-degenerate H- and V-polarized modes. (b) Bloch-sphere representation of the TLS state when interacting with a resonant Gaussian pulse (black dots), a chirped pulse (blue dots), and a cavity-filtered pulse (red dots), as a function of the pulse area. (c) The excited state population, \(\rho_{ee}\), versus pulse area for excitation with a Gaussian pulse (gray line), a chirped pulse (blue line), and a cavity-filtered pulse (red line). (d) Spectral configuration for the cavity-mediated excitation. The quantum emitter is resonantly coupled to the higher-frequency H-polarized cavity mode (blue), and the V-polarized laser pulse interacts with the quantum emitter via the lower-frequency V-polarized cavity mode (red). experiments [19]. The theory (Sec. IV) captures all these details, shown by the solid lines in Fig. 2c. A crucial metric is the population inversion, the probability of creating an exciton in the QD following pulsed excitation. The probability of creating a photon in the collection cavity-mode is \(\eta_{c}=\beta_{c}\pi_{e}\) where \(\pi_{e}\) stands for the population inversion, and \(\beta_{c}\) the probability that an exciton creates a photon in the collection cavity-mode. \(\beta_{c}\) (and the other factors which determine the exact measured photon flux [14]) remains constant as a function of power. Hence, the convincing match of the theory to the experimental results allows us to deduce that we achieve a maximum population inversion of \(\pi_{e}=96\%\) in this experiment. The theory also allows us to quantify the exact role of phonons - the dashed line in Fig. 
2c shows the theory with the exciton-phonon interaction turned off. Phonons limit only slightly the population inversion (\(99\%\to 96\%\)) at small powers but have a significant effect at large powers. We turn to the "red" collection case, a scheme mirrored in frequency with respect to the "blue" case. Figure 2d shows the normalized single-photon rate as a function of the input laser power with \(\Delta\omega_{\mathrm{L}}/(2\pi)=-82\,\mathrm{GHz}\). Interestingly, the strong damping disappears in this case. This is due to the fact that a phonon-mediated process is suppressed: the peak of the intra-cavity field lies at a higher frequency than the QD resonance, such that phonon-mediated depopulation of the excited state would require absorption of a phonon which is suppressed at \(4.2\,\mathrm{K}\), the temperature of the experiment, due to the low thermal population of the phonon bath. The symmetry breaking in the system's evolution, "red" with respect to "blue", is strong evidence for the role of phonons in the system dynamics [20; 21]. As for the "blue" case, the theory reproduces the experimental results very convincingly. We now investigate the dependence on the laser detuning. Zero detuning corresponds to the case when the laser and the QD are in resonance. Figures 2e and f show the measured normalized photon rates for different laser detunings (\(\Delta\omega_{\mathrm{L}}/(2\pi)\)) for the "blue" and "red" collection cases, respectively. For the "blue" collection case (Fig. 2e), one observes the fingerprint of Rabi rotations along with phonon-induced damping. The peak population inversion is higher than \(90\%\) for \(\Delta\omega_{\mathrm{L}}/(2\pi)\) between \(40\,\mathrm{GHz}\) and \(100\,\mathrm{GHz}\). At resonance and for negative detunings, \(\Delta\omega_{\mathrm{L}}\leq 0\), the peak population inversion decreases drastically. For the "red" collection case (Fig. 2f), the oscillatory behaviour is less pronounced and only present when the laser is red-detuned, \(\Delta\omega_{\mathrm{L}}\leq 0\). Interestingly, the population inversion can still be close to unity for \(\Delta\omega_{\mathrm{L}}\geq 0\). This can be described by a phonon-assisted excitation of the QD: in this scenario, the intra-cavity field lies at higher frequencies than the QD transition such that a laser photon can be converted to a QD-exciton and a phonon [7; 21]. Figures 2g and 2h show the highest measured photon rates as a function of the excitation frequency for the Figure 2: Normalized photon rate as a function of laser detuning and choice of collection cavity. (a) Schematic of the “blue” collection case in which the QD is resonant with the higher-frequency cavity-mode and the lower-frequency cavity acts as the excitation cavity. (b) Schematic of the “red” collection case in which the role of the two cavity-modes is reversed. (c) Normalized photon rate (measured signal normalized to the known losses of the system) as a function of the square-root of the input power for the “blue” case with \(\Delta\omega_{\mathrm{L}}/(2\pi)=88\,\mathrm{GHz}\). The solid black line is the result for the population inversion in the model including phonons; the dotted line is the result of the model in the absence of phonons. The inset depicts the energy levels of the QD and the peak photon-energy of the intra-cavity field (IC). (d) Same as (c), but for the “red” case. (e) and (f) Normalized photon rate for “blue” and “red” cases, respectively. The data sets are offset by one unit for better visualization. 
(g) and (h) Measured peak signal and calculated peak population inversion as a function of laser detuning. The model uses a fixed \(t_{p}=3.6\,\mathrm{ps}\). "red" and "blue" schemes (data in Figures 2e and 2f, respectively). In both cases, the maximum is achieved for a detuned pulse. ## IV Theoretical treatment We describe the interaction between a TLS coupled to the collection cavity and a driving electric field with the standard Hamiltonian: \[\begin{split}\hat{H}=&\hbar\,\Delta\omega_{c}\, \hat{a}_{c}^{\dagger}\hat{a}_{c}+\hbar g\left(\hat{a}_{c}^{\dagger}\hat{\sigma} _{-}+\hat{a}_{c}\hat{\sigma}_{+}\right)\\ &+\hbar\left(\overline{E(t)}\cdot\overline{\mu}\right)\left(e^{ i\omega_{0}t}\hat{\sigma}_{+}+e^{-i\omega_{0}t}\hat{\sigma}_{-}\right).\end{split} \tag{1}\] The Hamiltonian is in the rotating frame of the TLS. \(\omega_{0}/(2\pi)\) is the resonance frequency of the TLS, \(\Delta\omega_{c}/(2\pi)\) is the frequency detuning between the collection cavity-mode and the TLS (\(\Delta\omega_{c}=\omega_{c}-\omega_{0}\)), and \(\hat{a}_{c}^{\dagger}\) is the photon creation operator for the collection cavity. \(\overline{E(t)}\) is the intra-cavity field driving the TLS. The leakage through the top mirror can be modeled using the Lindblad operator \(\mathcal{\hat{L}}=\sqrt{\kappa}\hat{a}_{c}\). The laser interacts with the TLS via the second cavity mode, the excitation mode, with resonance frequency \(\omega_{e}/(2\pi)\). The intra-cavity field is a convolution of the input pulse and the impulse response of the cavity mode. The impulse response of a cavity is \(h(\tau)=e^{-\frac{\kappa}{2}\tau}\cos(\omega_{e}\tau)\Theta(\tau)\), where \(\kappa/(2\pi)\) is the linewidth of the cavity mode. Assuming an input field \(E_{0}(t)=(\pi t_{p})^{-1}\text{sech}(t/t_{p})\text{cos}(\omega_{L}t)\), typical of a mode-locked laser, the intra-cavity field is: \[\begin{split}\overline{E(t)}=&\frac{\kappa e^{i \omega_{L}t}\text{sech}(t/t_{p})}{2\pi j_{m}}\\ &\times{}_{2}F_{1}\left(1,1,1+j_{m}/2,\text{sech}(t/t_{p})/2 \right)+\text{c.c.},\end{split} \tag{2}\] where \(j_{m}=1-(i\Delta\omega_{\text{EL}}-\kappa/2)t_{p}\), \(\omega_{L}/(2\pi)\) is the centre frequency of the laser, \(t_{p}\) the input pulse width, \(\Delta\omega_{\text{EL}}/(2\pi)\) the detuning between the laser and the excitation cavity mode, and \({}_{2}F_{1}\) Gauss's hypergeometric function. We plug Eq. 2 into Eq. 1 and apply the standard rotating-wave approximation. Phonons play a significant role in the dynamics of a QD, as the instantaneous Rabi frequency can be as high as several terahertz, at which the exciton-phonon coupling is strongest. Assuming weak coupling of the exciton to the environment, one can include the effect of phonons on the TLS dynamics using the Bloch-Redfield master equation [22, 23, 24]. We use the Python package Qutip [25, 26] to set up and solve the equations of motion based on the Hamiltonian in Eq. 1. Finally, the photon creation probability in the collection mode (\(\eta_{c}\)) is calculated as \(\beta_{c}\pi_{e}\) with \(\pi_{e}=\int\kappa\left\langle\hat{a}_{c}^{\dagger}\hat{a}_{c}\right\rangle dt\) and \(\beta_{c}=\frac{F_{p}(\omega_{c})}{F_{p}(\omega_{c})+F_{p}(\omega_{e})+1}\), the probability of the QD exciton creating a photon in the collection cavity mode. We note that \(\int\kappa\left\langle\hat{a}_{c}^{\dagger}\hat{a}_{c}\right\rangle dt\) is the number of photons generated by the excitation pulse and can in principle exceed one (via multiple excitations of the TLS by one pulse). 
However, in the regime explored here (pulse duration much less than radiative decay time), \(\int\kappa\left\langle\hat{a}_{c}^{\dagger}\hat{a}_{c}\right\rangle dt\) follows the population of the TLS upper-state very closely and we chose to describe it with "population inversion". To extract numerical results, we use the exciton-phonon coupling described in Ref. [27]. We assume a spherically symmetric wavefunction for both electrons and holes (\(\psi\propto e^{-r^{2}/r_{0}^{2}}\)). Our model matches the experimental data very well for an electron radius of \(5.9\,\mathrm{nm}\) and a hole radius of \(3.6\,\mathrm{nm}\). We also use \(\kappa/(2\pi)=25\,\mathrm{GHz}\) and a mode-splitting of \(50\,\mathrm{GHz}\) extracted from earlier measurements [14]. We take a pulse width between \(3.6\,\mathrm{ps}\) and \(5.0\,\mathrm{ps}\). Figure 3: Simulated population inversion of a QD excited by a cavity-filtered light pulse. \(\eta_{c}\) is plotted as a function of input laser detuning from the QD (\(\Delta\omega_{\text{L}}\)) and the excitation pulse amplitude. For this simulation, \(\kappa/(2\pi)=25\,\mathrm{GHz}\), the splitting between the two orthogonal cavity modes is \(50\,\mathrm{GHz}\), and the input pulse width is \(t_{p}=4.2\,\mathrm{ps}\). (a) The “blue” collection case in the absence of phonons. The “red” collection case is equivalent in the absence of phonons but with symmetrically reflected laser frequencies. (b), (c) The calculated values with the same parameters in the presence of phonons: (b) “blue” collection case, where \(\pi_{e}\) reaches \(96.1\%\); and (c) the “red” collection case, where \(\pi_{e}\) reaches \(97.8\%\). We use our model to map out \(\pi_{e}\) as a function of the laser detuning. We use the same parameters from modeling the data in Fig. 2. Figure 3a shows the behaviour of an ideal (phonon-less) QD in the "blue" collection case. Near-unity population inversion is observed over a range of laser detunings. For a phonon-less QD coupled to the red-detuned cavity mode ("red" collection case), this plot would be mirrored with respect to \(\Delta\omega_{\mathrm{L}}=0\). Figures 3b and c show the population inversion including the interaction with phonons. The effect of phonons is visible in the striking difference between these two plots and Fig. 3a. Notably, despite the asymmetric behaviour, "blue" with respect to "red" collection modes, these results clearly show that near-unity population inversion can be obtained in both cases. These results also clearly demonstrate that the maximum population inversion is not obtained at a strict resonant condition when exploiting the cavity-mediated excitation scheme. The success of the theory allows us to predict the behaviour on changing the mode-splitting (\(\Delta\omega_{e}/(2\pi)\)) over a large range. It is now important to calculate \(\eta_{\mathrm{c}}\) as \(\beta_{\mathrm{c}}\) depends on \(\Delta\omega_{e}/(2\pi)\): at small \(\Delta\omega_{e}/(2\pi)\), the two cavity modes overlap, reducing the \(\beta_{\mathrm{c}}\). Figure 4 shows the maximum attainable \(\eta_{\mathrm{c}}\) for a range of laser detunings. The plot confirms that the maximum efficiency for a finite \(\Delta\omega_{e}\) is achieved away from the strict resonance condition (i.e. \(\Delta\omega_{L}\neq 0\) and \(\mathrm{sign}(\Delta\omega_{e})=-\mathrm{sign}(\Delta\omega_{L})\)). The optimum laser frequency approaches the resonance of the QD as the mode-splitting increases. 
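The closed form in Eq. (2) is the convolution of the sech input pulse with the cavity response, and the filtering it describes can also be checked numerically. The sketch below (our own illustration, not the QuTiP model used in this work) convolves a slowly varying sech envelope, detuned from the excitation mode, with the cavity's exponential envelope response; all numerical values are illustrative and normalisation and field-enhancement factors are omitted.

```python
# Intra-cavity envelope as (input pulse) * (cavity response), in the frame
# rotating at the excitation-cavity frequency.  Illustrative parameters only.
import numpy as np

kappa = 2 * np.pi * 25e9   # cavity linewidth (rad/s), value quoted in the text
t_p = 2.4e-12              # sech width parameter (~4.2 ps intensity FWHM)
d_EL = -2 * np.pi * 38e9   # laser detuning from the excitation mode (assumed)

dt = 2e-14
t = np.arange(-30e-12, 120e-12, dt)
tau = np.arange(0.0, 80e-12, dt)

pulse = (1 / np.cosh(t / t_p)) * np.exp(1j * d_EL * t)   # input envelope
h = np.exp(-0.5 * kappa * tau)                           # causal cavity response

intracavity = np.convolve(pulse, h)[: t.size] * dt       # evaluated on the t grid

# |pulse| is symmetric in time, whereas |intracavity| acquires an asymmetric
# ring-down tail of duration ~2/kappa (about 13 ps here): the cavity acts as a
# dispersive filter reshaping the field that drives the quantum dot.
```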
Figure 4: \(\eta_{\mathrm{c}}\) as a function of the mode-splitting and the laser detuning. \(\eta_{\mathrm{c}}\) is above 95% for most of the mode-splittings. Note that the bright quadrant \(\Delta\omega_{e}>0,\Delta\omega_{L}>0\) is a result of phonon-mediated excitation of the QD.
While near-unity efficiency is possible over a large range of mode-splittings, the photon creation efficiency shows a minimum of 50% around \(\Delta\omega_{e}=0\), as in this case the QD couples to both cavity modes. Finally, it is worth considering the input power needed for cavity-mediated excitation of a QD. For the parameters in Fig. 3, the intra-cavity input pulse area for optimal efficiency is \(5.4\,\pi\). This is less than the pulse area required using the phonon-assisted excitation mechanism. Furthermore, the intra-cavity field is enhanced by the high finesse of the cavity, \(E_{\mathrm{c}}=\sqrt{2F/\pi}E_{\mathrm{in}}\) (not included in Eq. 2). In our experiment, we used a one-sided cavity with a finesse of 500, hence giving us an enhancement of 18 in the amplitude of the field, reducing the excitation pulse area to just \(0.3\,\pi\). ## V Conclusions We consider the excitation of a QD-cavity system with a laser pulse that drives the QD via a cavity mode. We show that the cavity acts as a dispersive filter, modifying the spectral configuration of the laser pulse, and that excitation of the QD proceeds via an indirect route on the Bloch sphere. Nevertheless, we demonstrate a population inversion of a QD-in-a-cavity system as high as 98%. Both the "red" and "blue" collection cases result in equivalent population inversions at the optimal parameters. In the "red" case, the excitation mechanism resembles a rapid adiabatic passage scheme in that there is a near-unity plateau for increasing laser powers. This behaviour could be exploited to generate single photons with low sensitivity to fluctuations in the excitation power by setting the laser power to lie within the plateau. In both "red" and "blue" cases, the population inversion is maximum for a laser-pulse detuned with respect to the QD, illustrating the importance of a complete model of the QD-cavity system and its phonon environment to optimize the performance. Our results demonstrate that cavity excitation of a QD can deliver the high single-photon efficiencies required for optical quantum technologies. The methods developed in this work can readily be applied to a host of different emitter and cavity systems, supporting future advances in solid-state quantum light sources. We comment on a potential drawback for the cavity-based excitation mechanism, namely that the long-lived intra-cavity field may lead to double-excitation events [28; 29]. In this case, the large-bandwidth first photon is filtered away by the cavity; the resultant time jitter on the second photon compromises the indistinguishability of the photons. This problem can be mitigated by choosing \(\kappa\gg\Gamma\), where \(\Gamma\) is the lifetime of the TLS in the cavity, a limit appropriate to the experiments performed here. ## VI Acknowledgments We acknowledge financial support from Swiss National Science Foundation project 200020_204069, NCCR QSIT and Horizon-2020 FET-Open Project QLUSTER. A.J. acknowledges support from the European Union's Horizon 2020 Research and Innovation Programme under Marie Sklodowska-Curie grant agreement No. 840453 (HiFig), and Research Fund of the University of Basel. AJB gratefully acknowledges support from the EPSRC (UK) Quantum Technology Fellowship EP/W027909/1. S.R.V., R.S., A.L. and A.D.W. gratefully acknowledge support from DFH/UFA CDFA05-06, DFG TRR160, DFG project 383065199 and BMBF Q.Link.X.
2306.00163
Wavefront error of PHI/HRT on Solar Orbiter at various heliocentric distances
We use wavefront sensing to characterise the image quality of data products from the High Resolution Telescope (HRT) of the Polarimetric and Helioseismic Imager (SO/PHI) during the second remote sensing window of the Solar Orbiter (SO) nominal mission phase. Our ultimate aims are to reconstruct the HRT data by deconvolving with the HRT point spread function (PSF) and to correct for the effects of optical aberrations on the data. We use a pair of focused--defocused images to compute the wavefront error and derive the PSF of HRT by means of a phase diversity (PD) analysis. The wavefront error of HRT depends on the orbital distance of SO to the Sun. At distances $>0.5$\,au, the wavefront error is small, and stems dominantly from the inherent optical properties of HRT. At distances $<0.5$\,au, the thermo-optical effect of the Heat Rejection Entrance Window (HREW) becomes noticeable. We develop an interpolation scheme for the wavefront error that depends on the thermal variation of the HREW with the distance of SO to the Sun. We also introduce a new level of image reconstruction, termed `aberration correction', which is designed to reduce the noise caused by image deconvolution while removing the aberrations caused by the HREW. The computed PSF via phase diversity significantly reduces the degradation caused by the HREW in the near-perihelion HRT data. In addition, the aberration correction increases the noise by a factor of only $1.45$ compared to the factor of $3$ increase that results from the usual PD reconstructions.
F. Kahil, A. Gandorfer, J. Hirzberger, D. Calchetti, J. Sinjan, G. Valori, S. K. Solanki, M. Van Noort, K. Albert, N. Albelo Jorge, A. Alvarez-Herrero, T. Appourchaux, L. R. Bellot Rubio, J. Blanco Rodríguez, A. Feller, B. Fiethe, D. Germerott, L. Gizon, L. Guerrero, P. Gutierrez-Marques, M. Kolleck, A. Korpi-Lagg, H. Michalik, A. Moreno Vacas, D. Orozco Suárez, I. Pérez-Grande, E. Sanchis Kilders, J. Schou, U. Schühle, J. Staub, H. Strecker, J. C. del Toro Iniesta, R. Volkmer, J. Woch
2023-05-31T20:15:21Z
http://arxiv.org/abs/2306.00163v1
# Wavefront error of PHI/HRT on Solar Orbiter at various heliocentric distances ###### Abstract Context: Aims:We use wavefront sensing to characterise the image quality of the the High Resolution Telescope (HRT) of the Polarimetric and Helioseismic Imager (SO/PHI) data products during the second remote sensing window of the Solar Orbiter's (SO) nominal mission phase. The ultimate aims are to reconstruct the HRT data by deconvolving with the HRT point spread function (PSF) as well as to correct just for the effects of optical aberrations on the data. Methods:We use a pair of focused-defocused images to compute the wavefront error and derive the PSF of HRT by means of a phase diversity (PD) analysis. Results:The wavefront error of HRT depends on the orbital distance of SO to the Sun. At distances \(>0.5\) au, the wavefront error is small, and stems dominantly from the inherent optical properties of HRT. At distances \(<0.5\) au the thermo-optical effect of the Heat Rejection Entrance Window (HREW) becomes noticeable. We develop an interpolation scheme for the wavefront error which depends on the thermal variation of the HREW with the distance of SO to the Sun. We also introduce a new level of image reconstruction, termed "aberration correction", which aims to reduce the noise caused by image deconvolution, while removing the aberrations caused by the HREW. Conclusions:The computed PSF via phase diversity reduces significantly the degradation caused by the HREW in the near-perihelion HRT data. In addition, the aberration correction increases the noise by a factor of only 1.45 compared to the factor of 3 increase which results from the usual PD reconstructions. Conclusions: ## 1 Introduction Solar Orbiter (SO, Muller et al.2020) entered its low-orbital nominal mission phase (NMP, Zouganelis et al.2020) in late November 2021. During the first orbit of the NMP most of the observations by the remote sensing instruments were carried out in three remote sensing windows (RSWs) spanning the period of 01 March 2022 until 06 April 2022. The closest approach of SO to the Sun during these RSWs, of 0.32 au, was reached on 26 March 2022. Among the remote sensing instruments onboard, here we consider the Polarimetric and Helioseismic Imager (SO/PHI, Solanki et al.2020), which provides measurements of the magnetic field, either of the full solar disc, or at higher resolution of a small portion of the solar surface. The latter observations are carried out with the High Resolution Telescope of SO/PHI (HRT, see Gandorfer et al.2018), which is a two-mirror system with a decentered Ritchey-Chretien configuration. The entrance aperture of the telescope has a diameter of 140 mm. With an effective focal length of 4125 mm in the focal plane, the angular sampling corresponding to a working wavelength of \(\lambda=6173\) A is \(0.5^{\circ}\). This angular sampling equals about 100 km on the solar surface at the closest perihelion of SO at 0.28 au. Changes in the very high image quality achieved by HRT are driven mainly by the thermal environment. In particular, the Heat Rejection Entrance Window (HREW, Solanki et al.2020) acts as a passive thermal element in the heat-shield assembly of the spacecraft, and exhibits a large temperature variation along the highly elliptic orbit of Solar Orbiter. The HREW is designed such that the temperature gradient across the glass plates of the window is radially symmetric. 
Thus, the produced thermal lensing effect (the dependence of the refractive index of the glass on the temperature) introduces only a defocus term which can be compensated for by the HRT Refocus Mechanism (HRM, see also Solanki et al.2020). The amplitude of this gradient is estimated to produce a defocus up to 4 \(\lambda\) at perihelion (where the glass temperature reaches about 200 degC in the center of the window with a 20 degC radial gradient towards the outer edges). These conditions, however, are not perfectly fulfilled in flight. Therefore, it is expected that, at close solar proximity, the wavefront error (WFE) is compromised by higher order residual optical aberrations, which the HRM is incapable of removing. Then again, the HRM enables acquiring a pair of focused and defocused images of the solar scene, which could be used, by means of phase diversity analysis (PD, Paxman and Crippen, 1990; Lofdahl and Scharmer, 1994; Paxman et al., 1994), to determine the optical degradations of HRT due to deformations of the HREW. PD is a powerful technique which can be employed to capture the low-to-medium order telescope aberrations (Gonsalves 1983, 1985). These are usually reflected in the total wavefront error at the exit pupil of a telescope, here HRT, or in the point spread function (PSF) in the corresponding image plane (Goodman 1996). Internal optical aberrations such as coma or astigmatism are inherent to the HRT, and originate from imperfections in the complex optical system, mainly thermo-elastic despace errors in the two mirror system (Wilson 1999). Residual defocus, spherical aberration or trefoil terms are not expected to originate from the SO/PHI optics and are produced by thermal gradients on the HREW. During the Near-Earth Commissioning Phase (NECP, 0.8 au) and in the second remote sensing checkout window of the Cruise Phase (CP, 0.5 au) the optical aberrations of HRT were characterised. The results are published in Kahil et al. (2022). They found a common wavefront error over the field of view (FOV) of HRT and that the WFE is larger during CP when SO is closer to the Sun. From March to April 2022, SO reached, for the first time, solar distances below 0.5 au. We aim in this work to evaluate the image quality of HRT data products taken during the first two remote sensing windows of the NMP. In Section 2 we present the PD data and our approach for fitting the WFE. In Sections 3 and 4 we describe the methods we use to reconstruct the near-perihelion data with the available PD measurements of HRT and show our results. These are discussed along with conclusions in Section 5. ## 2 Phase Diversity analysis We adopt the PD algorithm presented by Lofdahl & Scharmer (1994). The procedure for fitting the wavefront error (and all corresponding references) is described in Kahil et al. (2022). We use Noll's expansion scheme of orthonormal Zernike polynomials (Noll, 1976) to characterise the wavefront error.
Figure 1: The HRT PD image pair of 22 March 2022 at 0.334 au. The focused image is shown to the left, while the defocused image (by half a wavelength with respect to the focused image) is shown to the right. We draw in yellow a blow-up of the same region in both images.
Figure 2: Results of the PD analysis. The wavefront error (upper panel, in units of wavelength) and Modulation Transfer Function (MTF, lower panel) in four sub-regions, of \(750\times 750\) pixels each, of the entire FOV of HRT (\(2048\times 2048\) pixels).
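As an aside, the forward model that such a PD fit parameterises can be sketched in a few lines of numpy: a wavefront error expanded in a handful of Noll Zernike terms over a circular pupil, the PSF and MTF derived from it, and a Wiener-filter deconvolution of the kind used for the restorations discussed below. This is a minimal sketch rather than the SO/PHI pipeline, and all numerical values in it are illustrative placeholders.

```python
# Minimal numpy sketch (not the SO/PHI pipeline) of the forward model that the
# PD fit parameterises: a wavefront built from a few Noll Zernike terms on a
# circular pupil, the resulting PSF/MTF, and a Wiener-filter restoration of the
# kind used for the reconstructions discussed in the text. All numerical values
# (Zernike coefficients in waves, regularisation constant) are placeholders.
import numpy as np

n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
rho, theta = np.sqrt(X**2 + Y**2), np.arctan2(Y, X)
pupil = (rho <= 1.0).astype(float)

# Noll Zernike terms singled out in the text (unit-RMS normalisation):
# defocus Z4, first-order trefoil Z9/Z10, spherical aberration Z11
Z = {
    4: np.sqrt(3) * (2 * rho**2 - 1),
    9: np.sqrt(8) * rho**3 * np.sin(3 * theta),
    10: np.sqrt(8) * rho**3 * np.cos(3 * theta),
    11: np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),
}
coeffs = {4: 0.15, 9: 0.10, 10: -0.05, 11: 0.08}   # waves, placeholder values
wfe = sum(c * Z[j] for j, c in coeffs.items()) * pupil
print("RMS WFE [lambda]:", wfe[pupil > 0].std())

# Generalised pupil function -> PSF (origin at pixel (0, 0)) -> OTF and MTF
P = pupil * np.exp(2j * np.pi * wfe)
psf = np.abs(np.fft.fft2(P)) ** 2
psf /= psf.sum()
otf = np.fft.fft2(psf)
mtf = np.abs(otf)   # modulation transfer function, cf. Fig. 2

def wiener_restore(image, otf, k=1e-3):
    """Deconvolve an image with the PSF described by `otf` (Wiener filter)."""
    filt = np.conj(otf) / (np.abs(otf) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))

# Toy demonstration: blur a synthetic scene with this PSF and restore it
scene = 1.0 + 0.05 * np.sin(64 * np.pi * X)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * otf))
restored = wiener_restore(blurred, otf)
print("RMS contrast blurred vs restored:",
      blurred.std() / blurred.mean(), restored.std() / restored.mean())
```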
The PD pair is acquired during the second remote-sensing window (RSW2) on 22 March 2022 and at a distance of 0.334 au. The image pair is taken in the continuum of the SO/PHI spectral line. The artificial defocus introduced to the focused images was chosen to be half of the SO/PHI wavelength (\(\lambda/2\)). The PD image pair is shown in Figure 1. Before fitting the WFE, we align, using cross-correlation, the defocused image to sub-pixel accuracy to the focused image. We therefore disregard the first three aberrations (piston, tip, tilt) and start the WFE fitting from the fourth Zernike polynomial (defocus, \(Z4\)). We run the PD algorithm on four sub-regions of the FOV, each of an equal size of \(750\times 750\) pixels. The optimal number of the employed Zernike polynomials which returns a valid WFE is \(Z=23\) (from \(Z4\) to \(Z26\)). Any number larger than \(Z=23\) results in an over-fitted WFE and an over-reconstructed scene due to the noise amplification. The dependence of the WFE fitting and restoration results on the employed number of Zernike polynomials is discussed in Hirzberger et al. (2011). We show the results of the WFE fitting in Figure 2. As expected, the spatial variation of the WFE across the FOV is small and the images can be assumed to be isoplanatic, in agreement with earlier results of Kahil et al. (2022). To retrieve the set of Zernike coefficients to be used for characterising the WFE of HRT during the RSW2 and for comparison with earlier results, we use the averaged Zernike terms over the four sub-regions of Figure 2. The Zernike coefficients are shown in the bar plot of Figure 3. Overplotted is the distribution of the Zernike coefficients retrieved during earlier PD measurements in NECP on 20 April 2021 (0.82 au) and during CP on 20 February 2021 (0.52 au). For these orbits, we employed only 10 Zernike polynomials. This number was chosen because the total root mean square (RMS) of the WFE saturates for \(Z>10\). We summarize the results of the PD analysis and compare them to earlier results in Table 1. As expected, the Zernike terms that mostly increase in amplitude during the RSW2 are the defocus (Z4), the first order trefoil (Z9, Z10) and spherical aberration (Z11). This is a result of the deviation of the actual temperature distribution across the HREW from a paraboloid shape.
\begin{table} \begin{tabular}{c c c c} \hline Date & Distance [au] & \(Z\) & RMS WFE [\(\lambda\)] \\ \hline 20\(-\)04\(-\)2020 & 0.82 & 10 & 1/10 \\ \hline 20\(-\)02\(-\)2021 & 0.523 & 10 & 1/7 \\ \hline 22\(-\)03\(-\)2022 & 0.334 & 23 & 1/2.27 \\ \hline \end{tabular} \end{table} Table 1: Summary of phase diversity data. _First column_: The acquisition dates of the PD image pair. _Second column_: The Heliocentric distance of SO. _Third column_: The optimal number of Zernike polynomials in the PD fitting routine. _Fourth column_: The total RMS wavefront error.
Figure 3: The Zernike coefficients distribution as deduced from the analysis of the PD datasets for each orbit. The dashed blue lines indicate the RMS WFE values (\(\pm\lambda/14\)) which correspond to a diffraction limited performance.
Figure 4: The feed-through containing the HREW during SO/PHI ground testing. Two out of the three mounting interfaces to the heat-shield support panel (in the lab setup replaced by the Aluminium plate) can be seen. Note that the images of the HRT (and thus our wavefront plots) are rotated by 90\({}^{\circ}\) with respect to the laboratory frame.
These deviations give rise to spherical aberration, and to trefoil due to azimuthal inhomogeneities. The contribution of the trefoil terms is apparent in the trigonal shape of the wavefront error in Figure 2. This shape is related to the mount of the HREW in the heat-shield of SO and attributed to a trigonal temperature inhomogeneity on the HREW. The inhomogeneity is caused by heat conduction through the mount points of the HREW within the feed-through of the spacecraft heat-shield (which were designed to minimise thermal coupling), and of the hot feed-through to the heat-shield support panel, which sees the strongest temperature difference (see Figure 4). Figure 5 shows a sub-region of the focused image of the PD pair acquired on 22 March 2022 at a solar distance of 0.334 au. The trigonal pattern on top of the solar scene can be observed (left panel). We construct the PSF of HRT from the calculated best-fit Zernike polynomials and apply image restoration through deconvolution with the Wiener filter. The restoration (right panel) successfully removes the trigonal pattern caused by the HREW. To test if the PSF deduced from the PD analysis of the pair taken during RSW2 represents the true aberrations emanating from the HREW of HRT, we degrade a synthesised continuum image obtained using MURaM (Vogler, 2005) with the calculated HRT PSF. The degradation is applied to the theoretical image, which is then rebinned to the pixel size of HRT after adding Gaussian noise (upper panel of Figure 6). The degraded image (lower panel of Figure 6) displays the same trigonal pattern as seen in the degraded HRT data of the RSW2 (left panel of Figure 5). This indicates that the wavefront fitting algorithm returns a reasonable set of Zernike coefficients that describe the true aberrations produced by the HREW of HRT.
Figure 5: Reconstruction of the RSW2 data. The central region (\(800\times 800\) pixels) of the focused image of the PD pair of 22 March 2022 at 0.334 au (left panel). The reconstructed region with the PSF calculated by PD (right panel).
Figure 6: MURaM simulation of a sunspot. The original synthesised continuum map (upper panel), rebinned to the pixel size of HRT at 0.334 au, which is equal to 121 km. The rebinned and degraded map (lower panel) with the PSF calculated from the PD analysis of the PD image pair of 22 March 2022.
## 3 PSF interpolation The calculated PSF was estimated at a solar distance of 0.334 au. Since the aberrations introduced by the HREW are expected to increase with decreasing solar distance of SO (see Figure 3), the HRT data obtained at different orbital positions cannot be restored using the same calculated PSF. This will result in an over-reconstruction for data recorded when SO was further from the Sun than 0.334 au. Therefore, we develop an interpolation scheme to approximate the PSF for such distances where no PD image pairs were acquired. We assume that the instrument does not change with orbital position and only the temperature inhomogeneities on the HREW vary quadratically with solar distance. This behaviour is apparent in the plot of the total WFE compensated by the HRM along the orbit shown in Figure 7. For constructing the interpolated PSF at a given distance below 0.5 au (where artifacts due to the HREW are significant), we use the 23 Zernike coefficients deduced from the analysis of the RSW2 PD data at 0.334 au (orange bars in Figure 3) and interpolate with SO-Sun distance only the terms which are mostly affected by the temperature of the HREW.
These are: defocus (\(Z4\)), first order trefoil components (\(Z9\), \(Z10\)) and spherical aberration (\(Z11\)). We use a quadratic function to model the variation of these aberrations with distance. The choice of a quadratic function is motivated by the following arguments: (1) The temperature across the HREW goes quadratically with distance, and the WFE is temperature dependent; (2) the HRM compensates for the defocus in HRT following a quadratic dependence on the heliocentric distance (see Figure 7); (3) a quadratic function is the highest order that could be fit unambiguously to three data points. The results of the fits are shown in Figure 8.
Figure 7: The variation of the HRM best focus position in coarse and fine focus (in units of \(\lambda\)) along the orbit of SO from 05 November 2021 to 22 March 2022. The curves correspond to the best-fit quadratic function to both types of focus. The y-scaling is chosen such that the HRM best focus position at the absolute maximum of the quadratic fit is equal to zero.
Figure 8: The variation of the four Zernike coefficients (defocus, spherical aberration, and the two components of 1st order trefoil) in units of \(\lambda\) with distance of SO to the Sun (in au). The red curves are the quadratic fits to the data points. The absolute value of the X-Trefoil aberration is plotted in order to use the same scaling on the y-axis.
The interpolated values are plotted for the range of 0.28 au (the closest perihelion of SO) to 1 au. We note that we are only interested in reconstructing data from distances below 0.5 au, where the HREW causes a significant reduction of the image quality. The HRT data taken at distances larger than 0.5 au are corrected with the PSF calculated during the cruise phase at 0.5 au (see Section 4.2). We show in Figure 9 a sub-region of a dataset taken on 17 March 2022 in the SO/PHI continuum. The dataset was acquired at a solar distance of 0.379 au. The restoration is done with both the PSF calculated at 0.334 au and the PSF interpolated to a distance of 0.379 au. The aberrations due to the HREW appear to be greatly reduced in both images, but without PSF interpolation, images appear to be over-reconstructed. We quantify this effect by calculating the normalized RMS contrast of the continuum intensity of quiet-Sun regions in the RSW2 observations before restoration, after restoration with the PSF determined at 0.334 au and after restoration with the interpolated PSF. The contrast values are plotted in Figure 10. For comparison, we also show the contrast values of reconstructed scenes from earlier orbits. These correspond to the focused images of the PD datasets, and they are marked as crosses in Figure 10. The PD datasets are acquired by HRT shortly after refocusing with the HRM, so that the obtained contrasts of 9.3% to 9.7% in the corresponding reconstructed solar scenes are considered to be optimal. During the perihelion approach the overall WFE is deteriorating rapidly, as depicted in Figure 7. Since re-focusing with the HRM was not done between 16 March 2022 and 22 March 2022 (see Figure 7), we witness a significant decrease in contrast in the corresponding initial images (lower green dots).
Figure 9: Restoration with the interpolated PSF. The continuum image of a sub-region of one dataset from 17 March 2022 at 0.379 au (upper panel). The restored region with the PSF calculated at 0.334 au (middle panel). The restored region with the PSF interpolated to a distance of 0.379 au (lower panel).
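The interpolation step itself reduces to fitting a quadratic in heliocentric distance to each of the selected Zernike coefficients, using the three PD epochs of Table 1, and evaluating the fit at the distance of the dataset to be restored. A minimal sketch follows; the coefficient values are illustrative placeholders rather than the measured values of Figure 8, and only the distances are taken from Table 1.

```python
# Minimal sketch of the interpolation scheme: fit a quadratic in heliocentric
# distance to one Zernike coefficient measured at the three PD epochs (Table 1)
# and evaluate it at an arbitrary distance. The coefficient values below are
# illustrative placeholders, not the measured ones shown in Figure 8.
import numpy as np

distances = np.array([0.82, 0.523, 0.334])    # au, from Table 1
z4_measured = np.array([0.02, 0.06, 0.17])    # waves, placeholder values

# Three points determine the quadratic exactly (argument (3) in the text)
fit = np.polyfit(distances, z4_measured, deg=2)

def z4_interpolated(d_au):
    """Interpolated defocus coefficient at heliocentric distance d_au."""
    return np.polyval(fit, d_au)

print(z4_interpolated(0.379))   # e.g. the 17 March 2022 dataset at 0.379 au
```

Because only three epochs are available, the quadratic is determined exactly by the data points, consistent with argument (3) above.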
Deconvolution with the interpolated PSF (upper green dots) does increase the contrast significantly, but the contrast values in these datapoints do not fully reach the optimum value, which is only re-established once the system has been brought to optimum focus again by the HRM (on 22 March 2022). Using the nearest PSF estimated on 22 March 2022 for all data points (magenta dots) shows clearly that - with the natural exception of the March 22 point itself - all data points suffer from over-reconstruction, since the applied PSF corresponds to the worst WFE, while the data have been taken with a more relaxed instrument.
Figure 10: The variation of the initial (lower data points) and reconstructed (upper data points) contrast values in the SO/PHI continuum along the orbit of SO. The dots correspond to science data while the crosses correspond to the PD datasets. Each data point of the RSW2 data (green and magenta dots) is the average of all daily observations on 17, 19, 20, 21 and 22 March 2022. The dashed horizontal lines represent the averaged contrasts of all the points/crosses of the corresponding colour.
## 4 Aberration correction ### RSW2 data The restoration of the full Stokes images with the interpolated PSF results in amplified noise levels compared to non-restored data. This is an expected drawback of deconvolution, which cannot be avoided (see discussion in Kahil et al. 2022). Usual PD reconstructions are known to increase the noise in the restored data by a factor of three with respect to the original degraded data (see for example Martinez Pillet et al. 2011). Therefore, the polarisation signals of small-scale magnetic structures in the restored data may lie within the noise level, which is not convenient for studying such structures. To reduce the noise level, while removing the aberrations caused by the HREW, we convolve the restored images with the
We calculate the amount of noise reduction in the following sub-sections. We illustrate in Figure 13 the one dimensional (1D) power spectrum of a quiet-Sun region taken on 17 March 2022 at 00:20:09 UT. The three curves correspond to the same region in the original, restored and aberration-corrected versions. The plot shows that the aberration correction lowers the signal towards higher frequencies, which reduces the noise in the aberration corrected data. It also lowers the signal at intermediate frequencies, which results in lower image contrast: the RMS contrast of quiet-sun regions decreases, on average, from 9% to 7% due to the aberration correction (the initial contrast is equal to 3.4%). To quantify the amount of noise reduction, we measure the distribution of Stokes \(V\) signals in the continuum in quiet-Sun areas. We also follow the method in Liu et al. (2012) to compute the noise in the LOS magnetic field maps. The results are shown in Figures 14 and 15 where the noise is measured for all datasets of the RSW2 observations. Averaged over all days of the RSW2 observations, the noise in Stokes \(V\) increases by a factor of only 1.45 in the aberration corrected data compared to the original data. This factor is equal to 3 upon full restoration of the data. ### RSW1 data During the first remote sensing window of SO (RSW1), the HRT datasets were acquired at heliocentric distances ranging from 0.547 au to 0.489 au. Therefore, the degradation in these data by the HREW is smaller than for the RSW2 data described earlier. However, for consistency with the released RSW2 data, we also apply the aberration correction. No PD measurements were acquired during RSW1 so we use the PSF estimated during the cruise phase at a similar distance from the Sun of 0.5 au. This PSF had a smaller RMS WFE and lower number of Zernike polynomials (see Table 1), so that the aberration correction results in increased noise levels of only a factor of 1.2 wrt non-restored RSW1 data (see Figures 16 and 18). We show in Figures 17 and 19 an example of one HRT dataset taken on 03 March 2022 at a distance of 0.547 au. We show the restored image and the aberration-corrected one. ## 5 Discussion and conclusions In Sections 2 and 3 we have calculated, by means of phase diversity analysis, the wavefront error of HRT when close to the Sun during the second remote sensing window of the nominal phase mission. At such distances, the breaking of the rotational symmetry of the temperature gradients across the HREW introduces aberrations which cannot be corrected for by the HRT refocus mechanism, but are well captured by PD calibration data. By inspecting the variation of the Zernike terms describing the WFE with solar distance, we could isolate the coefficients which are introduced by this effect (defocus, trefoil and spherical aberration). The fact that we still see a defocus term, even after refocusing by the HRM, can be explained by the action of the autofocus system, which aims to optimize the RMS contrast in the image by minimising the overall wavefront error. But, since the shape of the wavefront is not parabolic, there is a trade-off between the rotationally symmetric terms, mainly defocus and spherical aberration, such that after the "refocusing" we see remaining contributions of both of these terms. With the set of PD data available so far, we build an interpolation model for the calculated Zernike coefficients in order to construct a PSF that depends on the heliocentric distance. 
This PSF is employed to reconstruct the corresponding HRT datasets close to perihelion (\(<0.5\) au), when no PD measurements are available. We showed that the deconvolution with an interpolated PSF yields very good reconstruction of the original scene, as long as the instrument has been brought to best focus by the HRM and the residual WFE has been minimized. Refocusing by the HRM on a regular basis especially during perihelion approach, where the temperature changes rapidly is highly recommended in order to minimize the overall WFE prior to PSF deconvolution. Under these preconditions, the smooth variation of the residual WFE terms allows good reconstruction of the solar scene, without strong amplification of noise, which would happen when the initial contrast gets too low due to insufficient focusing. In addition, we have proposed, in Section 4, a solution to avoid the noise amplification which results from the reconstruction with the interpolated PSF. The resulting data products are termed "aberration corrected" with a noise level that is on average 1.45 larger than the noise in the non-reconstructed RSW2 data. For RSW1 data where the degradation effect by the window is smaller, the noise of the aberration corrected data increases by a factor of 1.2. This moderate increase of the noise level makes the aberration-corrected data ideal for applying inversion methods, whereas studies based on the intensity images might benefit from the full interpolated PSF restoration. For future near-perihelion observations, HRT will acquire daily PD measurements in order to study in more detail the variation of the aberrations introduced by the HREW with solar distance, and with different pointings of the spacecraft. In addition to these measurements, potential long-term effects, which are not a direct function of distance alone cannot be ruled out at the current time and they need to be further investigated in a future study. Furthermore, acquiring multiple defocused datasets may increase the accuracy of the wavefront error retrieval by the PD algorithm, as shown by Bailen et al. (2023). ###### Acknowledgements. Solar Orbiter is a space mission of international collaboration between ESA and NASA, operated by ESA. We are grateful to the ESA SOC and MOC teams for their support. The German contribution to SO/PHI is funded by the BMWi through DLR and by MPG central funds. The Spanish contribution is funded by FEEDER/AEI/MCU (RTI2108-09686-C5), a "Center of Excellence Severo Ochoa" award to IAA-CSIC (SEV-2017-0709), and a Ramon y Cajal fellowship awarded to DOS. The French contribution is funded by CNES.
2309.14787
Virtual Linking Bids for Market Clearing with Non-Merchant Storage
In the context of energy market clearing, non-merchant assets are assets that do not submit bids but whose operational constraints are included. Integrating energy storage systems as non-merchant assets can maximize social welfare. However, the disconnection between market intervals poses challenges for market properties, and this has not been well-studied yet. We contribute to the literature on market-clearing with non-merchant storage by proposing a market-clearing procedure that preserves desirable market properties, even under uncertainty. This approach is based on a novel representation of storage systems in which the energy available is discretized to reflect the different prices at which the storage system was charged. These prices are included as virtual bids in subsequent market clearings, establishing a link between different market intervals. We show that market clearing with virtual linking bids has the advantage of guaranteeing cost recovery for market participants and can outperform traditional methods in terms of social welfare.
Eléa Prat, Jonas Bodulv Broge, Richard Lusby
2023-09-26T09:34:29Z
http://arxiv.org/abs/2309.14787v2
# Virtual Linking Bids for Market Clearing with Non-Merchant Storage ###### Abstract In the context of energy market clearing, non-merchant assets are assets that do not submit bids but whose operational constraints are included. Integrating energy storage systems as non-merchant assets can maximize social welfare. However, the disconnection between market intervals poses challenges for market properties, that are not well-considered yet. We contribute to the literature on market-clearing with non-merchant storage by proposing a market-clearing procedure that preserves desirable market properties, even under uncertainty. This approach is based on a novel representation of the storage system in which the energy available is discretized to reflect the different prices at which the storage system was charged. These prices are included as virtual bids in the market clearing, establishing a link between different market intervals. We show that market clearing with virtual linking bids outperforms traditional methods in terms of cost recovery for the market participants and discuss the impacts on social welfare. energy market design, non-merchant storage, passive storage ## I Introduction In order to enable the safe operation of future energy systems with a high share of intermittent and stochastic renewable sources of energy production, the share of large-scale energy storage is expected to increase significantly in the coming years [1]. A major challenge that comes with this evolution is how to best integrate storage in energy markets. This is emphasized by a recent order by the Federal Energy Regulatory Commission in the United States, urging system operators to implement changes in order to facilitate market participation of electric storage systems [2]. As a step towards addressing this challenge, [3] describes three different ways in which storage systems can be included in the market clearing. In the first and second options, storage systems participate similarly to conventional generators and loads, submitting price and quantity bids. We refer to this setup as _merchant storage_. In the third option, the storage systems' operational constraints are included but they do not need to submit bids, which we call _non-merchant storage_. Market clearing with non-merchant storage can achieve the most economically efficient outcomes and the highest social welfare [3], as opposed to market clearing with merchant storage [4, 5]. However, this setup requires more modifications to the current energy markets, which might explain why it has not been considered in detail yet. The main issue to address when clearing a market with non-merchant storage is how to represent the time-linking effect of the storage system. Indeed, while storage systems potentially connect an infinite number of time periods, the market-clearing window is finite. In the literature on non-merchant storage, a very common assumption is that the storage is initially empty and the state of energy at the end of the market-clearing horizon is free1[6, 7, 8]. This leads to myopic decision-making regarding the state of energy of the storage at the end of the market interval. These assumptions are not always stated [9], showing that this problem is disregarded. Footnote 1: This is equivalent to setting it to zero in the absence of negative prices. This time-linking property also has to be considered in the pricing problem. 
By default, the market does not remember the price at which the storage was charged and thus considers any energy available at the beginning of a market horizon as free. Therefore, the resulting market price might be lower than the price the storage system paid to charge. In [10] we proposed a method to reestablish the connection between the different market intervals and retrieve prices that send the proper signal to the storage system. However, it is only valid under the assumption of perfect information, which does not correspond to a realistic setup. The problem of connection between current and future time periods with uncertain realization has also been identified in the case of remuneration of the storage system with financial storage rights [11]. This situation has also been observed in the case of demand response in [12], which could be seen as a virtual type of storage in the case of load shifting [13], and is therefore also covered by this paper. In real-time markets, which include ramping products that can span over several time periods, a common approach is to use a multi-interval market clearing, where only the decisions over the first time periods, termed the _decision horizon_, are implemented, while the decisions over the rest of the horizon are advisory [14, 15, 16, 17]. It can help to make future-aware decisions regarding the state of energy at the end of the decision horizon. However, even in this framework, the final decision might have an impact if the window is not long enough. Moreover, it also suffers from issues with pricing, even in a deterministic setup, as shown in [16]. The work in [16] proposes two methods to reintroduce a link to previous intervals but does not prevent the need for non-transparent uplift payments. In [14], an argument for non-uniform pricing is made, but the prices they obtain are highly dependent on the forecasts used. Because of these limitations, we focus on single-interval market clearing with uniform pricing. Moreover, this is the usual approach for day-ahead auctions, while multi-interval market-clearing models are used for real-time markets [17]. Since single-interval markets are still widely in use, it is important to study the integration of non-merchant storage into those. We move the consideration of future intervals to the choice of end-of-horizon storage parameters, which are to be decided in a separate problem2. In this setup, a modeling approach to reflect the transfer of energy between time periods has been introduced in [18]. However, the authors do not discuss the valuation of the charged energy in subsequent market intervals. The problem of valuation has been studied in [12] for demand response, in terms of deviation from a scenario with no flexibility, which cannot always be applied in the case of a storage system. Footnote 2: This other problem is out-of-scope for this paper. We aim to help address the unanswered question: How to best design a market clearing that includes non-merchant storage and fully exploit the potential of storage systems to increase social welfare? Towards this, we propose a novel market-clearing procedure with non-merchant storage that ensures cost recovery for all the market participants, in particular for the storage system, and even when considering uncertainty. The main idea is to remember the prices at which the storage system charged and automatically create virtual linking bids to reflect these values in the next market intervals. 
First, in Section II, we present in more detail the challenges of market clearing with non-merchant storage, with the help of an illustrative example. We then introduce market clearing with virtual linking bids in Section III and show promising results in terms of market properties in Section IV. We conclude in Section V. ## II Challenges of Market Clearing with Non-Merchant Storage ### _Initial Model and Assumptions_ In order to get a better understanding of the problem, we model a stylized version of the storage system using a couple of assumptions. First, we consider that there is only one, non-merchant, storage system in the market. We also assume that there are no losses when charging and discharging, meaning that the storage system is perfectly efficient, and has no leakage over time. We consider a storage system without charging and discharging limits and with no minimum on the state of energy. We thereby focus on the time-linking aspect of the storage system and limit the subtleties that would be introduced by considering each of these aspects. Indeed, the essence of the method presented in this paper would be the same, but small modifications would have to be introduced to deal with these different aspects, which would complicate the understanding of the basics if included here. We furthermore assume that minimum levels for loads and generators are 0, in order to avoid non-linearities. We refer to _split market clearing_ as the process of clearing the market interval-by-interval, as opposed to an _ideal market clearing_, where all market intervals would be cleared at once. Here, we consider one market interval. The set \(\mathcal{T}\) gathers all the time periods \(t\) of the given market interval, where each time period has a duration of \(\Delta t\) (in hours). For a day-ahead market for instance, \(\mathcal{T}\) would typically correspond to one day and \(\Delta t\) would be one hour. Under these assumptions, the market clearing with non-merchant storage for a given market interval can be modeled as follows: \[\max_{\mathbf{x}} \Delta t\sum_{t\in\mathcal{T}}\left(\sum_{l\in\mathcal{L}}U_{lt} d_{lt}-\sum_{g\in\mathcal{G}}C_{gt}p_{gt}\right)\] (1a) s.t. \[\sum_{l\in\mathcal{L}}d_{lt}+p_{t}^{\mathrm{C}}-\sum_{g\in \mathcal{G}}p_{gt}=0, \forall t\in\mathcal{T} \tag{1b}\] \[0\leq p_{gt}\leq\overline{P}_{gt}, \forall g\in\mathcal{G},t\in\mathcal{T}\] (1c) \[0\leq d_{lt}\leq\overline{D}_{lt}, \forall l\in\mathcal{L},t\in\mathcal{T}\] (1d) \[0\leq e_{t}\leq\overline{E}, \forall t\in\mathcal{T}\] (1e) \[e_{t}=e_{t-1}+p_{t}^{\mathrm{C}}\Delta t, \forall t\in\mathcal{T}\setminus\{1\}\] (1f) \[e_{1}=E^{\mathrm{init}}+p_{1}^{\mathrm{C}}\Delta t. \tag{1g}\] Here, and in the following, \(\mathbf{x}\) is a vector gathering all the decision variables of the model at hand. The variables are the quantity accepted for load \(l\in\mathcal{L}\), \(d_{lt}\), the quantity accepted for generator \(g\in\mathcal{G}\), \(p_{gt}\), and for the storage system, the state of energy \(e_{t}\) and the amount charged \(p_{t}^{\mathrm{C}}\). The latter can be negative, indicating a discharge. The objective function (1a) is to maximize the difference between load utilities \(U_{lt}\) and generation costs \(C_{gt}\) for accepted offers. The maximum generation \(\overline{P}_{gt}\) and load \(\overline{D}_{lt}\) are enforced by constraints (1c) and (1d) respectively. Constraint (1e) sets the maximum state of energy \(\overline{E}\). 
Constraints (1f) and (1g) update the storage level, starting from the initial level \(E^{\mathrm{init}}\). The market price at \(t\) is given by \(\lambda_{t}\), the dual variable of (1b). We make further assumptions regarding the market operation, namely that there is perfect competition and that market participants bid their true costs. To evaluate the efficiency of the market, one of the results to consider is the social welfare, \(SW\), which is calculated as the sum of surpluses of loads and generators and the payments to the storage system. With (1b), it reduces to: \[SW=\Delta t\sum_{t\in\mathcal{T}}\left(\sum_{l\in\mathcal{L}}U_{lt}d_{lt}^{*}- \sum_{g\in\mathcal{G}}C_{gt}p_{gt}^{*}\right), \tag{2}\] where the superscript \({}^{*}\) indicates that the optimal value of the considered variables is used. ### _Approaches to Avoid Myopic Decisions_ As mentioned in [3] and [10], the model in (1) is not sufficient to avoid myopic decisions regarding the state of energy at the end of the market interval. Indeed, with this model the storage level will most likely return to zero, and there will not be stored energy available in the first time period of the next market interval. In [3], several options to avoid myopic decisions are listed. The first one is to impose a final state of energy \(E^{\mathrm{end}}\) at the end of the market interval, which is determined by considering information about future market intervals. This is done by adding to (1) the constraint \[e_{\mathrm{T}}=E^{\mathrm{end}}, \tag{3}\] where \(t=\mathrm{T}\) corresponds to the last time period of the market interval. Another option is to steer the level to the desired value by adding a penalty term in the objective function, with a cost \(S^{\mathrm{end}}\). In (1), the objective function (1a) becomes \[\max_{\mathbf{x}}\quad\Delta t\sum_{\mathbf{i}\in\mathcal{T}}\left(\sum_{l\in \mathcal{C}}U_{lt}d_{lt}-\sum_{g\in\mathcal{G}}C_{gt}p_{gt}\right)-S^{\mathrm{ end}}e_{\mathrm{T}}. \tag{4}\] ### _Limits on an Illustrative Example_ We consider a storage system that has a capacity \(\overline{E}=2.5\) MWh and which is initially empty. We clear the market for two market intervals (MI) of one time period of one hour each. One load and two generators participate in the market. The related parameters are listed in Table I. The code for this example, as well as all the other examples presented in this paper, is available online at [https://github.com/eleaprat/MC_non_merchant_stg](https://github.com/eleaprat/MC_non_merchant_stg). We first solve (1) with (3), where \(E^{\mathrm{end}}=1\) MWh for the first MI and \(E^{\mathrm{end}}=0\) MWh for the second MI 3. The resulting dispatch and market prices are shown in Table II. Footnote 3: These are found to maximize the total social welfare when clearing the two MIs together. We refer the interested reader to [10] for more information. We can see that on MI 2, there is a price multiplicity, where any price between 2 and 9E/MWh is valid. It can be a problem if the final price chosen is below 5E/MWh because the storage system would then not recover its charging cost. On MI 2, the information that the storage system charged at 5E/MWh is not accessible. As a consequence, the price is chosen without accounting for it. The total social welfare is 27E. On the other hand, we can use the penalty term by solving (1) with (4). 
Knowing about the potential price multiplicity on MI 2, we set the penalty \(S^{\mathrm{end}}=2\)E/MWh on the first MI, to make sure the storage system will then recover its costs if this lower price gets selected on MI 2, and \(S^{\mathrm{end}}=0\) on MI 2. The resulting dispatch and market prices are shown in Table II. In this case, the storage system does not charge at all, and as a consequence, the social welfare is reduced to 23E. Due to this limitation, in the rest of the paper "split market clearing" refers to solving (1) with (3). ## III Market Clearing with Virtual Linking Bids In this section, we modify the market-clearing model with final storage level, to ensure that the cost at which the storage system charged in one market interval is accounted for in the following market intervals. To do so, we introduce a new representation of the non-merchant storage system. ### _Inter- and Intra-Storage_ In order to save the value at which the storage charges in view of subsequent intervals, we introduce the concepts of net charge and net discharge over a market interval. Those are indicated by the difference between the initial and the final state of energy. If it is positive, it corresponds to a net charge and if negative, it corresponds to a net discharge. We can then consider separately the exchanges of energy within the market interval, which we associate to an _intra-storage_, and the exchanges of energy with past or future intervals, which correspond to _inter-storage_. We can conceptually split the storage, where the quantity charged previously is equivalent to a generator, bidding with the saved price, and the capacity for intra-storage corresponds to the available capacity at the beginning of the market interval. This is illustrated in Figure 1. In the case of net charge, this quantity is saved along with the corresponding charging price, which is then used as a virtual bid in future market intervals. In this way, we ensure that the storage will get paid at least what it paid for charging. Examples of net charge and net discharge are shown in Figure 2. In the example of net charge from Figure 2a, we see that over the market interval, the quantity in the inter-storage is not used. At the end of the market interval, the quantity net charged is added to the intra-storage. In the example of net discharge from Figure 2b, we can see how intra- and inter- storages can be used independently. Similarly, the level at the end of the market interval determines the intra-storage for the next interval. ### _Model for Market Clearing with Virtual Linking Bids_ We now introduce a model for market clearing with virtual linking bids (VLB), based on the model in (1) and including Fig. 1: Storage system at the beginning of a market interval, with the previously charged quantities in red and the associated price in parenthesis. the representation of the storage system with intra- and inter-storage components. \[\max_{\mathbf{x}} \Delta t\sum_{t\in\mathcal{T}}\left(\sum_{l\in\mathcal{L}}U_{lt}d_{lt }-\sum_{g\in\mathcal{G}}C_{gt}p_{gt}-\sum_{v\in\mathcal{V}}S_{v}p_{vt}^{\mathrm{ D,e}}\right)\] (5a) s.t. 
\[\sum_{l\in\mathcal{L}}d_{lt}+p_{t}^{\mathrm{C,a}}-\sum_{g\in \mathcal{G}}p_{gt}-\sum_{v\in\mathcal{V}}p_{vt}^{\mathrm{D,e}}=0,\qquad\forall t\in \mathcal{T} \tag{5b}\] \[(\mathrm{1c})-(\mathrm{1d})\] \[e_{t}^{\mathrm{a}}=e_{t-1}^{\mathrm{a}}+p_{t}^{\mathrm{C,a}} \Delta t, \forall t\in\mathcal{T}\setminus\{1\}\] (5c) \[e_{1}^{\mathrm{a}}=p_{1}^{\mathrm{C,a}}\Delta t,\] (5d) \[e_{t}^{\mathrm{T}}\geq 0,\] (5e) \[e_{vt}^{\mathrm{a}}=e_{v,t-1}^{\mathrm{c}}-p_{vt}^{\mathrm{D,e} }\Delta t, \forall v\in\mathcal{V},\,t\in\mathcal{T}\setminus\{1\}\] (5f) \[e_{v,1}^{\mathrm{e}}=E_{v}^{\mathrm{init}}-p_{v,1}^{\mathrm{D,e} }\Delta t, \forall v\in\mathcal{V}\] (5g) \[0\leq e_{t}^{\mathrm{a}}+\sum_{v\in\mathcal{V}}e_{vt}^{\mathrm{ e}}\leq\overline{E}, \forall t\in\mathcal{T}\] (5h) \[e_{vt}^{\mathrm{a}},\,p_{vt}^{\mathrm{D,e}}\geq 0, \forall v\in\mathcal{V},\,\forall t\in\mathcal{T}\] (5i) \[e_{t}^{\mathrm{a}}+\sum_{v\in\mathcal{V}}e_{vt}^{\mathrm{e}}\geq E ^{\mathrm{end}}. \tag{5j}\] Here again, \(\mathbf{x}\) is a vector gathering all the decision variables of the model. There are no changes regarding loads and generators and their decision variables. We have new variables for tracking the storage system. For the intra-storage system, \(e_{t}^{\mathrm{a}}\) gives the state of energy and \(p_{t}^{\mathrm{C,a}}\) the quantity charged (negative for a discharge). For the inter-storage system, we introduce \(v\in\mathcal{V}\), which corresponds to the different values saved in the storage system, similarly to what was shown in Figure 1. The corresponding variables are \(e_{vt}^{\mathrm{e}}\), the state of energy for value \(v\) and \(p_{vt}^{\mathrm{D,e}}\), the quantity discharged from inter-storage with value \(v\). The objective function (5a) is modified to include the artificial bids from the inter-storage, with prices \(S_{v}\). Here and in the balance constraint (5b), the inter-storage appears similarly to conventional generators. Constraints (5c) and (5d) give the update of the state of energy for the intra-storage. Note that the intra-storage is by definition empty at the beginning of the market interval. However, it can occasionally take negative values, which corresponds to temporarily using some of the capacity that is reserved by the inter-storage. This is necessary to ensure that the arbitrage opportunities within a given market interval are not limited due to the intra-storage being initially empty, while there is, in practice, some energy to discharge. To limit these operations to arbitrage within this market interval only, (5e) specifies that the final state of energy of the intra-storage cannot be negative. Constraints (5f) and (5g) give the update of the state of energy for each inter-storage. The initial energy available at each value is given by \(E_{v}^{\mathrm{init}}\). Constraint (5h) enforces the storage capacity. The states of energy and the quantity discharged from each inter-storage are positive with (5i). Constraint (5j) is the updated version of constraint (3). With the greater or equal sign, the artificial bid in the objective function guides the final level of the storage. This formulation thus combines some of the aspects of (3) and (4). There is no charge of the inter-storage during the market clearing. In the case of net charge, the inter-storage is modified outside of the market clearing, as shown in Section III-D. 
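To make the structure of model (5) concrete, the following is a minimal sketch, assuming cvxpy, of the clearing of a single market interval with one saved charging price acting as a virtual linking bid. The input data are illustrative placeholders rather than the values of Table I; the authors' own implementation is available in the repository linked above.

```python
# Minimal sketch (assuming cvxpy) of the market clearing with virtual linking
# bids in model (5), for one market interval and one saved charging price.
# All input data below are illustrative placeholders, not the paper's Table I.
import numpy as np
import cvxpy as cp

dt = 1.0                                    # hours per time period
T, L, G = 2, 1, 2                           # time periods, loads, generators
U = np.array([[20.0, 20.0]])                # load utilities (L x T), EUR/MWh
C = np.array([[5.0, 5.0], [12.0, 12.0]])    # generation costs (G x T), EUR/MWh
Dmax = np.array([[2.0, 2.0]])               # load bounds (MW)
Pmax = np.array([[1.5, 1.5], [3.0, 3.0]])   # generation bounds (MW)
Emax, Eend = 2.5, 0.0                       # storage capacity / end target (MWh)
S = np.array([5.0])                         # virtual linking bids S_v, EUR/MWh
Einit = np.array([1.0])                     # inter-storage energy at price S_v (MWh)
V = len(S)

d = cp.Variable((L, T), nonneg=True)        # accepted load
p = cp.Variable((G, T), nonneg=True)        # accepted generation
pC = cp.Variable(T)                         # intra-storage charge (signed)
pD = cp.Variable((V, T), nonneg=True)       # inter-storage discharge
ea = cp.Variable(T)                         # intra-storage state of energy
ee = cp.Variable((V, T), nonneg=True)       # inter-storage states of energy

balance = [cp.sum(d[:, t]) + pC[t] - cp.sum(p[:, t]) - cp.sum(pD[:, t]) == 0
           for t in range(T)]                                             # (5b)
cons = list(balance)
cons += [d <= Dmax, p <= Pmax]                                            # (1c)-(1d)
cons += [ea[0] == pC[0] * dt]                                             # (5d)
cons += [ea[t] == ea[t - 1] + pC[t] * dt for t in range(1, T)]            # (5c)
cons += [ea[T - 1] >= 0]                                                  # final intra-storage level
cons += [ee[:, 0] == Einit - pD[:, 0] * dt]                               # (5g)
cons += [ee[:, t] == ee[:, t - 1] - pD[:, t] * dt for t in range(1, T)]   # (5f)
total = ea + cp.sum(ee, axis=0)
cons += [total >= 0, total <= Emax]                                       # (5h)
cons += [ea[T - 1] + cp.sum(ee[:, T - 1]) >= Eend]                        # (5j)

welfare = dt * (cp.sum(cp.multiply(U, d)) - cp.sum(cp.multiply(C, p))
                - cp.sum(S @ pD))                                         # (5a)
prob = cp.Problem(cp.Maximize(welfare), cons)
prob.solve()

# Market prices from the duals of the balance constraints (sign convention
# depends on the solver/canonicalisation)
prices = [float(b.dual_value) for b in balance]
print("social welfare:", prob.value)
print("market prices:", prices)
```

The market prices are read from the dual variables of the balance constraints (5b), mirroring the role of \(\lambda_{t}\) as the dual variable of (1b) in model (1).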
### _Simultaneous Charge and Discharge_ Note that in the model in (5), \(p_{t}^{\mathrm{C,a}}\) and \(p_{vt}^{\mathrm{D,e}}\) can be positive at the same time. However, this separation of the storage system is only a modeling artifice and has no physical meaning. It is the sum of the two that corresponds to the instruction of charge or discharge for the storage system. Allowing the intra-storage system to take negative values generates a multiplicity of optimal solutions, for which the sum of intra- and inter-storage is the same. It is possible to select among all those solutions one for which there is no simultaneous charge and discharge, by minimizing the total quantity exchanged or by introducing binary variables to choose between charge and discharge. We formalize this result with a proposition. **Proposition 1**.: _For the market clearing in (5), and under the assumptions that the value of stored energy is strictly positive, i.e., \(S_{v}>0\), \(\forall v\in\mathcal{V}\), and that a feasible solution exists, there will always be an optimal solution where the intra-storage does not charge when the inter-storage discharges._ Proof.: First, we give an equivalent formulation of the model where the state of charge is replaced, using that \(e_{t}^{\mathrm{a}}=\sum_{i=1}^{t}p_{i}^{\mathrm{C,a}}\) and \(e_{vt}^{\mathrm{e}}=E_{v}^{\mathrm{init}}-\sum_{i=1}^{t}p_{vi}^{\mathrm{D,e}}\). We also notice that in (5i), \(e_{vt}^{\mathrm{e}}\geq 0\) can be equivalently replaced by \(e_{vT}^{\mathrm{e}}\geq 0\), since the update quantities \(p_{vt}^{\mathrm{D,e}}\) are non-negative. Lastly, we assume that the time periods of the market have a duration of one hour in order to lighten the notations with \(\Delta t=1\). The same proof can be made for any value of \(\Delta t\). We obtain the following: \[\max_{\mathbf{x}} \sum_{t\in\mathcal{T}}\left(\sum_{l\in\mathcal{L}}U_{lt}d_{lt}- \sum_{g\in\mathcal{G}}C_{gt}p_{gt}-\sum_{v\in\mathcal{V}}S_{v}p_{vt}^{\mathrm{ D,e}}\right)\] (6a) s.t. \[\sum_{l\in\mathcal{L}}d_{lt}+p_{t}^{\mathrm{C,a}}-\sum_{g\in \mathcal{G}}p_{gt}-\sum_{v\in\mathcal{V}}p_{vt}^{\mathrm{D,e}}=0,\qquad\forall t \in\mathcal{T} \tag{6b}\] \[0\leq p_{gt}\leq\overline{P}_{gt}, \forall g\in\mathcal{G},t\in\mathcal{T}\] (6c) \[0\leq d_{lt}\leq\overline{D}_{lt}, \forall l\in\mathcal{L},t\in\mathcal{T}\] (6d) \[\sum_{i=1}^{T}p_{i}^{\mathrm{C,a}}\geq 0, \tag{6e}\] Fig. 2: Examples of net charge and net discharge, for a fictional market interval of 3 time periods of one hour each. Quantities are in MW. \[0\leq\sum_{i=1}^{t}p_{i}^{\mathrm{C},\mathrm{a}}+\sum_{v\in\mathcal{V }}\left(E_{v}^{\mathrm{init}}-\sum_{i=1}^{t}p_{vi}^{\mathrm{D},\mathrm{e}}\right) \leq\overline{E},\forall t\in\mathcal{T} \tag{6f}\] \[p_{vt}^{\mathrm{D},\mathrm{e}}\geq 0,\qquad\qquad\qquad\qquad \forall v\in\mathcal{V},\,\forall t\in\mathcal{T}\] (6g) \[E_{v}^{\mathrm{init}}-\sum_{i=1}^{T}p_{vi}^{\mathrm{D},\mathrm{e }}\geq 0,\qquad\qquad\qquad\forall v\in\mathcal{V}\] (6h) \[\sum_{i=1}^{T}p_{i}^{\mathrm{C},\mathrm{a}}+\sum_{v\in\mathcal{V }}\left(E_{v}^{\mathrm{init}}-\sum_{i=1}^{T}p_{vi}^{\mathrm{D},\mathrm{e}} \right)\geq E^{\mathrm{end}}. \tag{6i}\] Let's consider an optimal solution to (6). We denote it with the superscript \({}^{*}\), for example, \(p_{t}^{\mathrm{C},\mathrm{a}*}\). Under the assumption that the feasible set is not empty, such a solution exists. If it is such that the intra-storage never charges when the inter-storage discharges, we are done. 
We consider the case where there is at least one time period \(\tau\in\mathcal{T}\) such that the intra-storage charges when the inter-storage discharges, meaning that \(p_{r}^{\mathrm{C},\mathrm{a}*}>0\) and \(\sum_{v\in\mathcal{V}}p_{v}^{\mathrm{D},\mathrm{e}*}>0\). Let's call \(q_{v}^{\tau}\) the quantity discharged from the inter storage with value \(v\) at \(\tau\), i.e. \(q_{v}^{\tau}=p_{vr}^{\mathrm{D},\mathrm{e}*}\). We also call \(q^{\tau}\) the total quantity discharged from inter storage at \(\tau\), \(q^{\tau}=\sum_{v\in\mathcal{V}}q_{v}^{\tau}\). We now identify another time period \(\kappa\in\mathcal{T}\) such that \(p_{\kappa}^{\mathrm{C},\mathrm{a}*}<0\). Existence of \(\kappa\)We first prove that \(\kappa\) exists. Let's suppose that it does not, i.e. we suppose that \(p_{t}^{\mathrm{C},\mathrm{a}*}\geq 0\), \(\forall t\in\mathcal{T}\). We have \(\sum_{i=1}^{T}p_{i}^{\mathrm{C},\mathrm{a}*}>0\) since \(p_{r}^{\mathrm{C},\mathrm{a}*}>0\). We build a new solution that we identify with the superscript \({}^{\prime}\). It is identical to the previous solution, except for \(p_{r}^{\mathrm{C},\mathrm{a}^{\prime}}=p_{r}^{\mathrm{C},\mathrm{a}*}-q^{\tau}\) and \(p_{vr}^{\mathrm{D},\mathrm{e}^{\prime}}=p_{vr}^{\mathrm{D},\mathrm{e}*}-q_{v}^ {\tau}\), \(\forall v\in\mathcal{V}\), with \(q^{\tau^{\prime}}=\min\{q^{\tau},\sum_{i=1}^{T}p_{i}^{\mathrm{C},\mathrm{a}*}\}\) and \(q_{v}^{\tau}\) are such that \(q^{\tau^{\prime}}=\sum_{v\in\mathcal{V}}q_{v}^{\tau^{\prime}}\) and \(0\leq q_{v}^{\tau}\leq q_{v}^{\tau}\), \(\forall v\in\mathcal{V}\). It is possible to find such \(q_{v}^{\tau}\), since \(\sum_{i=1}^{T}p_{i}^{\mathrm{C},\mathrm{a}*}>0\) and \(q^{\tau}>0\), so \(0<q^{\tau^{\prime}}\leq q^{\tau}\), meaning that \(0<\sum_{v\in\mathcal{V}}q_{v}^{\tau^{\prime}}\leq\sum_{v\in\mathcal{V}}q_{v}^ {\tau}\). We check that this new solution is feasible. For (6b), at \(t=\tau\), we now have \[\sum_{i\in\mathcal{L}}d_{tr}^{\prime}+p_{r}^{\mathrm{C},\mathrm{a }^{\prime}}-\sum_{g\in\mathcal{V}}p_{gr}^{\prime}-\sum_{v\in\mathcal{V}}p_{vr }^{\mathrm{D},\mathrm{e}^{\prime}}= \tag{7a}\] \[= \sum_{l\in\mathcal{L}}d_{tr}^{\prime}+p_{r}^{\mathrm{C},\mathrm{ a}*}-q^{\tau^{\prime}}-\sum_{g\in\mathcal{G}}p_{gr}^{\prime}-\sum_{v\in \mathcal{V}}(p_{vr}^{\mathrm{D},\mathrm{e}*}-q_{v}^{\tau^{\prime}})\] (7b) \[= \sum_{l\in\mathcal{L}}d_{tr}^{\prime}+p_{r}^{\mathrm{C},\mathrm{ a}*}-\sum_{g\in\mathcal{G}}p_{gr}^{\prime}-\sum_{v\in\mathcal{V}}p_{vr}^{\mathrm{D}, \mathrm{e}*}-q^{\tau^{\prime}}+\sum_{v\in\mathcal{V}}q_{v}^{\tau^{\prime}}\] (7c) \[= \sum_{l\in\mathcal{L}}d_{tr}^{\prime}+p_{r}^{\mathrm{C},\mathrm{ a}*}-\sum_{g\in\mathcal{G}}p_{gr}^{\prime}-\sum_{v\in\mathcal{V}}p_{vr}^{\mathrm{D}, \mathrm{e}*}, \tag{7d}\] so the constraint (6b) is satisfied. Constraints (6c) and (6d) still hold, since the solution is not modified for these variables. For constraint (6e), we have \[\sum_{i=1}^{T}p_{i}^{\mathrm{C},\mathrm{a}^{\prime}} =\sum_{i=1}^{\tau-1}p_{i}^{\mathrm{C},\mathrm{a}^{\prime}}+p_{r}^{ \mathrm{C},\mathrm{a}^{\prime}}+\sum_{i=\tau+1}^{T}p_{i}^{\mathrm{C},\mathrm{a} ^{\prime}} \tag{8a}\] \[= \sum_{i=1}^{\tau-1}p_{i}^{\mathrm{C},\mathrm{a}*}+p_{r}^{\mathrm{ C},\mathrm{a}*}-q^{\tau^{\prime}}+\sum_{i=\tau+1}^{T}p_{i}^{\mathrm{C},\mathrm{a}*}\] (8b) \[= \sum_{i=1}^{T}p_{i}^{\mathrm{C},\mathrm{a}*}-q^{\tau^{\prime}}, \tag{8c}\] which we know is positive, since \(q^{\tau^{\prime}}=\min\{q^{\tau},\sum_{i=1}^{T}p_{i}^{\mathrm{C},\mathrm{a}*}\}\). For constraint (6f), there are no changes for \(t<\tau\). 
For \(t\geq\tau\), we have \[\sum_{i=1}^{t}p_{i}^{\mathrm{C,a}^{\prime}}+\sum_{v\in\mathcal{V}}\left(E_{v}^{\mathrm{init}}-\sum_{i=1}^{t}p_{vi}^{\mathrm{D,e}^{\prime}}\right)= \tag{9a}\] \[=\sum_{i=1}^{\tau-1}p_{i}^{\mathrm{C,a}^{\prime}}+p_{\tau}^{\mathrm{C,a}^{\prime}}+\sum_{i=\tau+1}^{t}p_{i}^{\mathrm{C,a}^{\prime}}+\sum_{v\in\mathcal{V}}\left(E_{v}^{\mathrm{init}}-\sum_{i=1}^{\tau-1}p_{vi}^{\mathrm{D,e}^{\prime}}-p_{v\tau}^{\mathrm{D,e}^{\prime}}-\sum_{i=\tau+1}^{t}p_{vi}^{\mathrm{D,e}^{\prime}}\right) \tag{9b}\] \[=\sum_{i=1}^{\tau-1}p_{i}^{\mathrm{C,a}*}+p_{\tau}^{\mathrm{C,a}*}-q^{\tau^{\prime}}+\sum_{i=\tau+1}^{t}p_{i}^{\mathrm{C,a}*}+\sum_{v\in\mathcal{V}}\left(E_{v}^{\mathrm{init}}-\sum_{i=1}^{\tau-1}p_{vi}^{\mathrm{D,e}*}-p_{v\tau}^{\mathrm{D,e}*}+q_{v}^{\tau^{\prime}}-\sum_{i=\tau+1}^{t}p_{vi}^{\mathrm{D,e}*}\right) \tag{9c}\] \[=\sum_{i=1}^{t}p_{i}^{\mathrm{C,a}*}-q^{\tau^{\prime}}+\sum_{v\in\mathcal{V}}\left(E_{v}^{\mathrm{init}}-\sum_{i=1}^{t}p_{vi}^{\mathrm{D,e}*}\right)+\sum_{v\in\mathcal{V}}q_{v}^{\tau^{\prime}} \tag{9d}\] \[=\sum_{i=1}^{t}p_{i}^{\mathrm{C,a}*}+\sum_{v\in\mathcal{V}}\left(E_{v}^{\mathrm{init}}-\sum_{i=1}^{t}p_{vi}^{\mathrm{D,e}*}\right), \tag{9e}\] so the constraint still stands. For (6g), we only have to check at \(t=\tau\): we defined \(q_{v}^{\tau^{\prime}}\leq q_{v}^{\tau}\) to make sure that this constraint is satisfied. Constraint (6h) still holds, since we discharge less: we have \[E_{v}^{\mathrm{init}}-\sum_{i=1}^{T}p_{vi}^{\mathrm{D,e}^{\prime}}=E_{v}^{\mathrm{init}}-\sum_{i=1}^{T}p_{vi}^{\mathrm{D,e}*}+q_{v}^{\tau^{\prime}}\geq 0,\] since \(q_{v}^{\tau^{\prime}}\geq 0\) and the \({}^{*}\) solution satisfies (6h). Constraint (6i) also holds, since the total charged quantity and the total discharged quantity both decrease by \(q^{\tau^{\prime}}\), leaving its left-hand side unchanged. The new solution is therefore feasible, and the value of its objective function is that of the \({}^{*}\) solution plus \(\sum_{v\in\mathcal{V}}S_{v}q_{v}^{\tau^{\prime}}\), which is strictly positive since \(S_{v}>0\), \(\forall v\in\mathcal{V}\), and \(q^{\tau^{\prime}}>0\). It is greater than the value of the objective function for the \({}^{*}\) solution, which is optimal, indicating a contradiction.

_Note 1:_ This also excludes the case where \(|\mathcal{T}|=1\), meaning that it cannot be optimal to have \(p_{\tau}^{\mathrm{C,a}*}>0\) and \(\sum_{v\in\mathcal{V}}p_{v\tau}^{\mathrm{D,e}*}>0\) in this case.

_Note 2:_ \(\sum_{i=1}^{T}p_{i}^{\mathrm{C,a}*}>0\) also holds in the case of net charge, so there will not be simultaneous charge of the intra-storage and discharge of the inter-storage in that case.

_Building a new solution:_ We now know that there exists another time period \(\kappa\in\mathcal{T}\), \(\kappa\neq\tau\), such that \(p_{\kappa}^{\mathrm{C,a}*}<0\). We build a new solution to our problem, which we identify with the superscript \({}^{\prime}\). It is identical to the previous solution, except for \(t=\tau\) and \(t=\kappa\). We want to discharge the inter-storage less at \(t=\tau\) and discharge it more at \(t=\kappa\), and in turn charge the intra-storage less at \(t=\tau\) and discharge it less at \(t=\kappa\), in order to keep the same storage level when both are summed. For \(t=\tau\), \(p_{\tau}^{\mathrm{C,a}^{\prime}}=p_{\tau}^{\mathrm{C,a}*}-q^{\tau^{\prime}}\) and \(p_{v\tau}^{\mathrm{D,e}^{\prime}}=p_{v\tau}^{\mathrm{D,e}*}-q_{v}^{\tau^{\prime}}\), \(\forall v\in\mathcal{V}\), where \(q^{\tau^{\prime}}=\min\{q^{\tau},-p_{\kappa}^{\mathrm{C,a}*}\}\) and the \(q_{v}^{\tau^{\prime}}\) are such that \(q^{\tau^{\prime}}=\sum_{v\in\mathcal{V}}q_{v}^{\tau^{\prime}}\) and \(0\leq q_{v}^{\tau^{\prime}}\leq q_{v}^{\tau}\), \(\forall v\in\mathcal{V}\). It is possible to find such \(q_{v}^{\tau^{\prime}}\), since \(-p_{\kappa}^{\mathrm{C,a}*}>0\) and \(q^{\tau}>0\), so \(0<q^{\tau^{\prime}}\leq q^{\tau}\), meaning that \(0<\sum_{v\in\mathcal{V}}q_{v}^{\tau^{\prime}}\leq\sum_{v\in\mathcal{V}}q_{v}^{\tau}\).
For \(t=\kappa\), \(p_{\kappa}^{\mathrm{C,a}^{\prime}}=p_{\kappa}^{\mathrm{C,a}*}+q^{\tau^{\prime}}\) and \(p_{v\kappa}^{\mathrm{D,e}^{\prime}}=p_{v\kappa}^{\mathrm{D,e}*}+q_{v}^{\tau^{\prime}}\), \(\forall v\in\mathcal{V}\). Let's check that this solution is feasible. For (6b), at \(t=\tau\), we have the same as in (7), which is feasible. At \(t=\kappa\), \[\sum_{l\in\mathcal{L}}d_{l\kappa}^{\prime}+p_{\kappa}^{\mathrm{C,a}^{\prime}}-\sum_{g\in\mathcal{G}}p_{g\kappa}^{\prime}-\sum_{v\in\mathcal{V}}p_{v\kappa}^{\mathrm{D,e}^{\prime}} \tag{13a}\] \[=\sum_{l\in\mathcal{L}}d_{l\kappa}^{\prime}+p_{\kappa}^{\mathrm{C,a}*}+q^{\tau^{\prime}}-\sum_{g\in\mathcal{G}}p_{g\kappa}^{\prime}-\sum_{v\in\mathcal{V}}\left(p_{v\kappa}^{\mathrm{D,e}*}+q_{v}^{\tau^{\prime}}\right) \tag{13b}\] \[=\sum_{l\in\mathcal{L}}d_{l\kappa}^{\prime}+p_{\kappa}^{\mathrm{C,a}*}-\sum_{g\in\mathcal{G}}p_{g\kappa}^{*}-\sum_{v\in\mathcal{V}}p_{v\kappa}^{\mathrm{D,e}*}+q^{\tau^{\prime}}-\sum_{v\in\mathcal{V}}q_{v}^{\tau^{\prime}} \tag{13c}\] \[=\sum_{l\in\mathcal{L}}d_{l\kappa}^{\prime}+p_{\kappa}^{\mathrm{C,a}*}-\sum_{g\in\mathcal{G}}p_{g\kappa}^{*}-\sum_{v\in\mathcal{V}}p_{v\kappa}^{\mathrm{D,e}*}, \tag{13d}\] so the constraint (6b) is satisfied. Constraints (6c) and (6d) still hold, since the solution is not modified for these variables. Let's call \(t_{1}=\min\{\tau,\kappa\}\) and \(t_{2}=\max\{\tau,\kappa\}\). The total charged quantity in the intra-storage is unchanged: \[\sum_{i=1}^{T}p_{i}^{\mathrm{C,a}^{\prime}}=\sum_{i=1}^{t_{1}-1}p_{i}^{\mathrm{C,a}^{\prime}}+\sum_{i=t_{1}+1}^{t_{2}-1}p_{i}^{\mathrm{C,a}^{\prime}}+\sum_{i=t_{2}+1}^{T}p_{i}^{\mathrm{C,a}^{\prime}}+p_{\tau}^{\mathrm{C,a}^{\prime}}+p_{\kappa}^{\mathrm{C,a}^{\prime}} \tag{14a}\] \[=\sum_{i=1}^{t_{1}-1}p_{i}^{\mathrm{C,a}*}+\sum_{i=t_{1}+1}^{t_{2}-1}p_{i}^{\mathrm{C,a}*}+\sum_{i=t_{2}+1}^{T}p_{i}^{\mathrm{C,a}*}+p_{\tau}^{\mathrm{C,a}*}-q^{\tau^{\prime}}+p_{\kappa}^{\mathrm{C,a}*}+q^{\tau^{\prime}} \tag{14b}\] \[=\sum_{i=1}^{T}p_{i}^{\mathrm{C,a}*}. \tag{14c}\] It is also the case for the inter-storage: \[\sum_{i=1}^{T}p_{vi}^{\mathrm{D,e}^{\prime}}=\sum_{i=1}^{t_{1}-1}p_{vi}^{\mathrm{D,e}^{\prime}}+\sum_{i=t_{1}+1}^{t_{2}-1}p_{vi}^{\mathrm{D,e}^{\prime}}+\sum_{i=t_{2}+1}^{T}p_{vi}^{\mathrm{D,e}^{\prime}}+p_{v\tau}^{\mathrm{D,e}^{\prime}}+p_{v\kappa}^{\mathrm{D,e}^{\prime}} \tag{15a}\] \[=\sum_{i=1}^{t_{1}-1}p_{vi}^{\mathrm{D,e}*}+\sum_{i=t_{1}+1}^{t_{2}-1}p_{vi}^{\mathrm{D,e}*}+\sum_{i=t_{2}+1}^{T}p_{vi}^{\mathrm{D,e}*}+p_{v\tau}^{\mathrm{D,e}*}-q_{v}^{\tau^{\prime}}+p_{v\kappa}^{\mathrm{D,e}*}+q_{v}^{\tau^{\prime}} \tag{15b}\] \[=\sum_{i=1}^{T}p_{vi}^{\mathrm{D,e}*}. \tag{15c}\] As a consequence, constraints (6e), (6h) and (6i) are satisfied by the new solution. For constraint (6f), there are no changes for \(t<t_{1}\). For \(t_{1}\leq t<t_{2}\), if \(t_{1}=\tau\), we have the same as in (9).
If \(t_{1}=\kappa\), \[\sum_{i=1}^{t}p_{i}^{\mathrm{C,a}^{\prime}}+\sum_{v\in\mathcal{V}}\left(E_{v}^{\mathrm{init}}-\sum_{i=1}^{t}p_{vi}^{\mathrm{D,e}^{\prime}}\right) \tag{16a}\] \[=\sum_{i=1}^{\kappa-1}p_{i}^{\mathrm{C,a}^{\prime}}+p_{\kappa}^{\mathrm{C,a}^{\prime}}+\sum_{i=\kappa+1}^{t}p_{i}^{\mathrm{C,a}^{\prime}}+\sum_{v\in\mathcal{V}}\left(E_{v}^{\mathrm{init}}-\sum_{i=1}^{\kappa-1}p_{vi}^{\mathrm{D,e}^{\prime}}-p_{v\kappa}^{\mathrm{D,e}^{\prime}}-\sum_{i=\kappa+1}^{t}p_{vi}^{\mathrm{D,e}^{\prime}}\right) \tag{16b}\] \[=\sum_{i=1}^{\kappa-1}p_{i}^{\mathrm{C,a}*}+p_{\kappa}^{\mathrm{C,a}*}+q^{\tau^{\prime}}+\sum_{i=\kappa+1}^{t}p_{i}^{\mathrm{C,a}*}+\sum_{v\in\mathcal{V}}\left(E_{v}^{\mathrm{init}}-\sum_{i=1}^{\kappa-1}p_{vi}^{\mathrm{D,e}*}-p_{v\kappa}^{\mathrm{D,e}*}-q_{v}^{\tau^{\prime}}-\sum_{i=\kappa+1}^{t}p_{vi}^{\mathrm{D,e}*}\right) \tag{16c}\] \[=\sum_{i=1}^{t}p_{i}^{\mathrm{C,a}*}+q^{\tau^{\prime}}+\sum_{v\in\mathcal{V}}\left(E_{v}^{\mathrm{init}}-\sum_{i=1}^{t}p_{vi}^{\mathrm{D,e}*}\right)-\sum_{v\in\mathcal{V}}q_{v}^{\tau^{\prime}} \tag{16d}\] \[=\sum_{i=1}^{t}p_{i}^{\mathrm{C,a}*}+\sum_{v\in\mathcal{V}}\left(E_{v}^{\mathrm{init}}-\sum_{i=1}^{t}p_{vi}^{\mathrm{D,e}*}\right), \tag{16e}\] so the constraint still stands. For \(t\geq t_{2}\), the changes at \(\tau\) and \(\kappa\) cancel out, so the left-hand side of (6f) is unchanged. Moreover, since the total charged and discharged quantities are unchanged by (14) and (15), the new solution has the same objective value as the \({}^{*}\) solution and is therefore also optimal. We then check if there is another \(t\) for which \(p_{t}^{\mathrm{C,a}*}>0\) and \(\sum_{v\in\mathcal{V}}p_{vt}^{\mathrm{D,e}*}>0\); if so, it is the new \(\tau\) and we repeat the construction above. We have thus shown that there is always an optimal solution in which there is no simultaneous charge and discharge, under the assumption that \(S_{v}>0\), \(\forall v\in\mathcal{V}\).

The assumption that \(S_{v}>0\), \(\forall v\in\mathcal{V}\), is reasonable since there is no interest in charging the storage system when prices are non-positive. Moreover, charging at a value of 0 is equivalent to not valuing the stored energy, in which case we can equivalently use the original formulation, in which there is no separation between intra- and inter-storage and thus no simultaneous charge and discharge of intra- and inter-storage.

### _Update of the Inter-Storage_

In the case of net discharge, the inter-storage can be easily updated by subtracting the sum of \(p_{vt}^{\mathrm{D,e}}\) from \(E_{v}^{\mathrm{init}}\). If, in doing so, the inter-storage for value \(v\) becomes empty, the corresponding index is dropped from \(\mathcal{V}\). In the case of net charge, the net charged quantity is added to the inter-storage, with the corresponding charging price. However, since the storage might have been charging at different prices during the market interval, the question of which price to save arises. To address this, we introduce the following optimization model \[\min_{\mathbf{p}^{\mathrm{C,loc}},\mathbf{p}^{\mathrm{C,mov}}}\ -\Delta t\sum_{t\in\mathcal{T}}\lambda_{t}^{*}p_{t}^{\mathrm{C,loc}} \tag{17a}\] \[\text{s.t.}\quad-\sum_{t\in\mathcal{T}}\lambda_{t}^{*}p_{t}^{\mathrm{C,loc}}\geq 0 \tag{17b}\] \[\sum_{t\in\mathcal{T}}p_{t}^{\mathrm{C,loc}}=0 \tag{17c}\] \[p_{t}^{\mathrm{C,loc}}+p_{t}^{\mathrm{C,mov}}=p_{t}^{\mathrm{C,a}*},\qquad\forall t\in\mathcal{T} \tag{17d}\] \[p_{t}^{\mathrm{C,loc}}=p_{t}^{\mathrm{C,a}*},\qquad\forall t\in\mathcal{T},\,p_{t}^{\mathrm{C,a}*}\leq 0 \tag{17e}\] \[p_{t}^{\mathrm{C,loc}}\geq 0,\qquad\forall t\in\mathcal{T},\,p_{t}^{\mathrm{C,a}*}>0 \tag{17f}\] \[p_{t}^{\mathrm{C,mov}}\geq 0,\qquad\forall t\in\mathcal{T}. \tag{17g}\] The parameters in this model are obtained from the solution of the market clearing, and indicated with \({}^{*}\). The idea is to split the quantity charged in the intra-storage into a local and a moved quantity, \(p_{t}^{\mathrm{C,loc}}\) and \(p_{t}^{\mathrm{C,mov}}\).
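The following is a minimal sketch of this update problem in Python with `cvxpy`; the role of each constraint is spelled out in the next paragraph. The solver choice and the numerical values for \(\lambda_{t}^{*}\) and \(p_{t}^{\mathrm{C,a}*}\) are illustrative assumptions, not data from the paper.

```python
# Minimal sketch of the inter-storage update problem (17); illustrative data only.
import cvxpy as cp
import numpy as np

dt = 1.0
lambda_star = np.array([3.0, 8.0, 5.0])   # clearing prices lambda_t^*
p_C_star = np.array([2.0, -1.0, 1.5])     # intra-storage charge p_t^{C,a*}

T = len(p_C_star)
p_loc = cp.Variable(T)                    # local quantity p_t^{C,loc}
p_mov = cp.Variable(T, nonneg=True)       # moved quantity p_t^{C,mov}   (17g)

constraints = [
    -lambda_star @ p_loc >= 0,            # (17b): local profit is non-negative
    cp.sum(p_loc) == 0,                   # (17c): total local quantity is zero
    p_loc + p_mov == p_C_star,            # (17d)
]
for t in range(T):
    if p_C_star[t] <= 0:
        constraints.append(p_loc[t] == p_C_star[t])   # (17e): discharge stays local
    else:
        constraints.append(p_loc[t] >= 0)             # (17f): no local discharge when charging

prob = cp.Problem(cp.Minimize(-dt * (lambda_star @ p_loc)), constraints)  # (17a)
prob.solve()
# p_mov.value[t] is then added to the inter-storage with the saved price lambda_star[t]
print(p_loc.value, p_mov.value)
```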
Then, \(p_{t}^{\mathrm{C,mov}}\) is added to the inter-storage, with the market price at \(t\). The total moved quantity must correspond to the net charge, which is equivalent to setting the total local quantity to zero (sum of total local charge and discharge), in (17c). The inter-storage is only updated by quantities charged, which is ensured by (17g): the quantity moved can only be positive. The sum of local and moved quantities has to correspond to the quantity charged in the intra-storage, with (17d). In case of discharge, constraint (17e) applies, ensuring that the moved quantity is equal to zero. In case of charge, constraint (17f) prevents discharge for the local quantity. The objective (17a) is to minimize the storage profit over this interval, while still ensuring that it is non-negative with (17b). We are minimizing in order to avoid situations where the storage system would be setting prices higher than necessary.

## IV Study of Market Properties

We study cost recovery and social welfare for the market clearing with VLB, comparing to the split market clearing, (1) with (3). We discuss the impact of imperfect foresight on these properties. Finally, we discuss a limitation of both models, opening directions for future research.

### _Cost Recovery_

There is cost recovery for a market participant if their surplus is always non-negative. We can easily show that this is the case for the generators and loads4. Footnote 4: We prove cost recovery for loads and generators by formulating their individual surplus maximization problem and its dual problem and using the strong duality theorem. For a storage system, it is not relevant to ensure that the surplus is always non-negative. For example, the storage system could be only paying to charge in one market interval, to later discharge at a higher price and make a profit in a subsequent market interval. In this example, the surplus of the storage in the first market interval would be negative. Hence, we redefine cost recovery for a non-merchant storage system.

**Definition 1** (Cycle and cost recovery for a non-merchant storage).: _We define a cycle as a group of consecutive market intervals for which the storage system is initially empty and finally returns to this same state. We say that there is cost recovery for a non-merchant storage system if its surplus over a cycle is non-negative._

For a non-merchant storage system, and for a fair comparison, it only makes sense to look at the surplus over a cycle. Otherwise, we need to know the value of the stored energy to include it, which is actually the complex problem that we are trying to solve here. We argue that cost recovery stands by design of the market clearing and of the update of the inter-storage from (17), under the assumption that, looking far enough into the future, the inter-storage will eventually be empty. In this model, constraint (17b) ensures that the profit of the storage system is non-negative for the quantity exchanged over the time interval. Regarding the quantity moved to the inter-storage, the charging price is saved and later used as a bid. Since discharge is never imposed, this ensures that for this quantity the price received will be at least equal to the charging price. Note that this result is not based on the assumption of perfect foresight and is thus valid in uncertain settings.
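As a small illustration of Definition 1, the surplus of a non-merchant storage over a cycle can be checked directly from the cleared quantities and prices. This sketch assumes the storage surplus is simply the priced energy discharged minus the priced energy charged over the cycle; the numbers are illustrative and do not correspond to the paper's case studies.

```python
import numpy as np

def storage_surplus(prices, charge, discharge, dt=1.0):
    """Storage surplus over a group of market intervals: revenue from
    discharging minus payment for charging, at the clearing prices."""
    prices, charge, discharge = map(np.asarray, (prices, charge, discharge))
    return dt * float(prices @ (discharge - charge))

# A cycle: the storage starts and ends empty (illustrative numbers).
prices    = [5.0, 4.0, 9.0]   # clearing prices over the cycle
charge    = [2.0, 0.0, 0.0]   # MW charged in each hour
discharge = [0.0, 0.0, 2.0]   # MW discharged in each hour

print(storage_surplus(prices, charge, discharge))  # 8.0 >= 0: cost recovery on this cycle
```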
The assumption that the storage will eventually be empty is mild. It would be challenged in the case that a decision was made to charge the storage in a day with very high prices that never occur again. A complete day with very high prices is very unlikely to happen without being foreseen. And if foreseen, this situation would illustrate very poor decision-making on the final level of the storage system. We saw in the example of Section II-C that cost recovery is not ensured for the storage system in the split market clearing. In that example, the storage system starts empty and finishes empty, so the two market intervals considered are a cycle. However, the surplus of the storage in this cycle can be as low as -3E. We run that same example with the market clearing with VLB, to show that this situation does not arise anymore. The results are shown in Table III. At the end of the first MI, the inter-storage is charged with a price of 5E/MWh and it is discharged in the next hour. The storage is now marginal and gets paid at least 5E/MWh, so that the minimum surplus of the storage system over this cycle is 0E, and cost recovery stands. ### _Social Welfare_ If the level of the storage system at the end of each interval of (5) is set to the value that is optimal for the ideal market clearing, the storage system will follow the same trajectory, which will result in the same social welfare. Indeed, since this level is obtained in a way that the storage system recovers its costs in the ideal clearing, the inter-storage will discharge the same quantity. We now discuss different scenarios that can occur in case of imperfect information and error in setting the level of the storage system at the end of each market interval, and illustrate them with simple examples. We look at the social welfare for the market clearing with VLB, in comparison to the one obtained for the split market clearing using the same values for the final levels \(E^{\mathrm{end}}\). Since social welfare is the sum of the surpluses of all the market participants, its calculation has to be carried out over a cycle, as defined in Definition 1. We compare both methods on a cycle for the market-clearing with VLB 5. We also compare to the outcome of the ideal market clearing. In both illustrative examples, the storage has the same capacity \(\overline{E}=2.5\) MWh and it is initially empty, each MI consists of one time period of one hour, and there is one load and one generator participating in the market. The rest of the data and results are given in Tables IV and V, including the number of MIs. The results in column "\(e_{t}\)" for the split market correspond to the values set for \(E^{\mathrm{end}}\), which are also used for VLB. Footnote 5: A cycle for the split market-clearing is not necessarily a cycle for the market-clearing with VLB but the opposite is true. If \(E^{\mathrm{end}}=0\), the final level with VLB might be higher because of (5j), while if the final level with VLB is equal to zero, it means that \(E^{\mathrm{end}}=0\) and the final level for the split market is also equal to zero. The first scenario corresponds to the case where discharge is imposed in a period of low prices, also limiting the availability of storage for future periods with low generation. In this case, the social welfare can be higher for (5) than for (1) with (3). We show this in the example of Table IV. The total social welfare for these three MIs is -1E for the split market clearing with VLB, compared to 21E for the ideal clearing. 
Due to the error in predicting the ideal final levels for the storage, the social welfare is lower than in the ideal clearing. The difference in social welfare between split and VLB is due to the fact that in the split market clearing, the storage discharges on the second MI, while it does not in the market clearing with VLB. Indeed, the prices are too low for the storage to recover the costs from charging on the first MI. Rather, it discharges on the third MI, thereby allowing for the load to be completely supplied, which increases social welfare. We also observe that the formulation with VLB reestablishes cost recovery for the storage system. The second scenario corresponds to the case where due to an error when choosing the final level, charge is imposed in a period of high prices, which do not occur again soon enough, preventing the complete use of the storage capacity in the following market intervals. Then, the social welfare can be lower for (5) than for (1) with (3). We show this in the example of Table V. The total social welfare for these three MIs is 842.5E for the split market clearing, and 772.5E for the market clearing with VLB, compared to 855E for the ideal clearing. The difference in social welfare between split and VLB is due to the fact that the inter-storage does not discharge on the second and fourth MIs. Indeed, the prices are too low for the storage to recover the costs from charging on the first MI. More expensive generators are used to supply the load on these MIs, thereby decreasing social welfare. However, this can be limited by introducing a discount on stored value, which is shown in the last column of Table V. In this case, we decreased by 25% the value of the energy stored in the inter-storage after each MI, except after the MI in which it is stored. The social welfare in this case is 807.5E, as the inter-storage can now be used on the fourth MI. ### _Limitation of Split Models_ We have seen that our new method performs better than the traditional split market clearing in terms of cost recovery for the market participants. However, not all the limitations of the split market clearing are overcome, which we see here in a last illustrative example. We use again a storage system with capacity \(\overline{E}=2.5\) MWh and initially empty. The rest of the data and the results are shown in Table VI. There is one load and one generator. The market is cleared for two MIs of three hours each and the final storage level is set to its optimal value, obtained when clearing for the two MIs together. Note that the final level at the end of the second MI is equal to 2.5 MWh because of subsequent MIs, which are not represented here. We are in the perfect foresight set-up and we can see that all quantities agree. As a consequence, social welfare is the same for all these types of market clearing. However, we see that the prices in the last hour of the first MI can potentially be higher than in the ideal clearing, for both other approaches. This difference is due to future market intervals not being considered for pricing in these cases. In the ideal clearing, the price for that hour is at most 4E/MWh, which corresponds to the utility of the load in the first hour of the next MI, which the other approaches do not take into consideration. We argue for a limited impact of future uncertain information on the formation of current prices, to ensure transparency. However, we see here that it comes with a cost that somebody will ultimately have to pay. 
The study of this trade-off is a topic for future research. ## V Discussion and Conclusion We introduced a novel procedure for clearing an energy market with non-merchant storage, using virtual linking bids. This is based on an artificial representation of the storage system, dividing it into a component for local arbitrage, within this market interval, and a component for arbitrage between market intervals. We showed that it outperforms traditional approaches when it comes to cost recovery. Indeed, it ensures cost recovery for the storage system, even over multiple market intervals, which common split market clearing does not. More importantly, we showed that this property also stands when forecast errors are made when calculating the final state of energy of the storage system, which corresponds to a realistic setup. It still remains to study how this final state of energy should be determined. This should also come with a study of the impact of the storage level on pricing. In particular, a critical next step would be to investigate the potential impact of a strategic choice of this level on prices and social welfare, and how it compares to having merchant storage. We also discuss the impacts of uncertainty on social welfare compared to traditional approaches. In the case of forcing the discharge of the storage system at a disadvantageous price, we show that our approach comes closer to closing the gap with an ideal oracle market clearing. We also showed that this method does not solve the problem of accounting for prices in future market intervals, thereby leading to higher prices compared to an ideal market clearing. This problem needs to be further studied. Another limitation of the method is that it might happen that a stored quantity is kept in store for too long, due to a very high value. We showed that a discount factor could be applied to the value of stored energy over time to avoid this. We used illustrative small-scale examples to give better intuition on how the method introduced behaves compared to a traditional split-horizon market clearing. Since this method does not introduce non-linearities, the computational complexity is similar to traditional approaches. It only involves solving one more linear program for the update of storage values, which should also scale well. This procedure was introduced on an idealized representation of a storage system, and the promising results are a good motivation for extending it to more general storage system models and to multiple storage systems.
2301.13812
Learning Roles with Emergent Social Value Orientations
Social dilemmas can be considered situations where individual rationality leads to collective irrationality. The multi-agent reinforcement learning community has leveraged ideas from social science, such as social value orientations (SVO), to solve social dilemmas in complex cooperative tasks. In this paper, by first introducing the typical "division of labor or roles" mechanism in human society, we provide a promising solution for intertemporal social dilemmas (ISD) with SVOs. A novel learning framework, called Learning Roles with Emergent SVOs (RESVO), is proposed to transform the learning of roles into the social value orientation emergence, which is symmetrically solved by endowing agents with altruism to share rewards with other agents. An SVO-based role embedding space is then constructed by individual conditioning policies on roles with a novel rank regularizer and mutual information maximizer. Experiments show that RESVO achieves a stable division of labor and cooperation in ISDs with different complexity.
Wenhao Li, Xiangfeng Wang, Bo Jin, Jingyi Lu, Hongyuan Zha
2023-01-31T17:54:09Z
http://arxiv.org/abs/2301.13812v1
# Learning Roles with Emergent Social Value Orientations ###### Abstract Social dilemmas can be considered situations where individual rationality leads to collective irrationality. The multi-agent reinforcement learning community has leveraged ideas from social science, such as social value orientations (SVO), to solve social dilemmas in complex cooperative tasks. In this paper, by first introducing the typical "division of labor or roles" mechanism in human society, we provide a promising solution for intertemporal social dilemmas (ISD) with SVOs. A novel learning framework, called Learning Roles with **E**mergent **SVOs** (**RESVO**), is proposed to transform the learning of roles into the social value orientation emergence, which is symmetrically solved by endowing agents with altruism to share rewards with other agents. An SVO-based role embedding space is then constructed by individual conditioning policies on roles with a novel rank regularizer and mutual information maximizer. Experiments show that RESVO achieves a stable division of labor and cooperation in ISDs with different complexity. ## 1 Introduction The continuity of human civilization and the prosperity of the race depends on our ability to cooperate. From evolutionary biology to social psychology and economics, cooperation in human populations has been regarded as a paradox and a challenge (Fehr and Fischbacher, 2003; Pennisi, 2009; Santos et al., 2021). Cooperation issues vary in scale and are widespread in daily human life, ranging from assembly line operations in factories and scheduling of seminars to peace summits between significant powers, business development, and pandemic control (Dafoe et al., 2020). Although cooperation can benefit all parties, it might be costly. Thus, the temptation to evade any cost (i.e., the free-riding) becomes a tempting strategy, which leads to cooperation collapsing, or the multi-person social dilemma (Rapoport et al., 1965; Xu et al., 2019). That is, "individually reasonable behavior leads to a situation in which everyone is worse off than they might have been otherwise" (Kollock, 1998). Just as cooperation widely exists in human social, economic, and political activities, most thorny problems we face, from the interpersonal to the international, are at their core social dilemmas. This article presents two cases in recent years closely related to the future economic and political decisions of countries, namely autonomous driving and carbon trading, and the role of social dilemmas in them. Autonomous driving (AV), which promises world-changing benefits by increasing traffic efficiency (Van Arem et al., 2006), reducing pollution (Spieser et al., 2014), and eliminating up to 90% of traffic accidents (Gao et al., 2014), is a very complex systems engineering. Existing work mainly focuses on accomplishing generic tasks, such as following a planned path while obeying traffic rules. However, there are many driving scenarios in practice, most of which have social dilemmas. Examples include lane changing (Dafoe et al., 2020), meeting, parking (Li, 2022), and even ethical aspects of aggressive versus conservative driving behavior choices (Bonnefon et al., 2016). Therefore, the practicality of AV depends on the efficient solution to social dilemmas. 
Carbon trading is a greenhouse gas emission right (emission reduction) transaction based on the United Nations Framework Convention on Climate Change established by the Kyoto Protocol to promote the reduction of greenhouse gas emissions, using a market mechanism (Grimeaud, 2001). Carbon emission is a representative social dilemma in which countries' direct gas emissions for the sake of economic development undermine collective interests. The typical mechanisms in carbon trading, such as _distribution of allowances_(Fullerton and Metcalf, 2014), _joint implementation_(Grimeaud, 2001), etc., have obvious correspondences with the _boundaries_(Ibrahim et al., 2020, 2020) and _institutions_(Koster et al., 2020; Lupu and Precup, 2020) used to solve social dilemmas in economics and social psychology. The social dilemma has been comprehensively studied in economics, social psychology, and evolutionary biology in the past few decades. This paper focuses on the _public good dilemma_ in the intertemporal social dilemma (ISD). A public good is a resource from which all may benefit, regardless of whether they have helped provide the good (producer) (Kollock, 1998). This is to say that public goods are _non-excludable_. As a result, there is the temptation to enjoy the good (consumer) without contributing to its creation or maintenance. Those who do so are termed _free-riders_, and while it is individually rational to free-ride if all do so, the public good is not provided, and all are worse off. Artificial intelligence (AI) advances pose increasing opportunities for AI research to promote human cooperation and enable new tools for facilitating cooperation (Dafoe et al., 2020). Recently, multi-agent reinforcement learning (MARL) has been utilized as a powerful toolset to study human cooperative behavior with great success (Lowe et al., 2017; Silver et al., 2018; Jaderberg et al., 2019; Liao et al., 2020; Li et al., 2022). We believe it is reasonable to use MARL as a first step in exploring the use of AI tools to study multi-person social dilemmas. The current model for reinforcement learning suggests that reward maximization is sufficient to drive behavior that exhibits abilities studied in the human cooperation and social dilemmas, including "knowledge, learning, perception, social intelligence, language, generalization and imitation" (Yang, 2021; Silver et al., 2021; Vamplew et al., 2022). The justification for this claim is deeply rooted in the _von Neumann Morgenstern utility theory_(von Neumann and Morgenstern, 2007), which is the basis for the well-known _expected utility theory_(Schoemaker, 2013) and essentially states that it is safe to assume an intelligent entity will always make decisions according to the highest expected utility in any complex scenarios1(Yang, 2021). Footnote 1: Although follow-up works have shown that some of the assumptions on rationality could be violated by real decision-makers in practice (Gigerenzer and Selten, 2002), those conditions are rather taken as the “axioms” of rational decision making (Yang, 2021; Yang, 2021). In MARL, the critical issue of multi-person social dilemma can be formalized as an ISD (Leibo et al., 2017; Hughes et al., 2018), and most MARL methods have introduced ideas from social psychology and economics more or less. 
These methods could be divided into three categories, _strategic_ solutions, _structural_ solutions, and _motivational_ solutions, based on whether the solutions assume egoistic agents and whether the structure of the game can be changed (Kollock, 1998) according to the taxonomy of social science. _Structural_ solutions reduce the difficulty of the original social dilemma by changing the game's rules or completely avoiding the occurrence of the social dilemma. The mechanisms introduced into MARL mainly include boundaries and sanctions (Ostrom, 1990). Ibrahim et al. (2020) indirectly sets boundaries for resources by introducing a shared periodic signal and a conditional policy based on this signal, allowing agents to access shared resources in a fixed order. Ibrahim et al. (2020) achieves resource boundarization by introducing a centralized government module through taxation and wealth redistribution.. Koster et al. (2020); Lupu and Precup (2020) introduce a centralized module and use rules and learning methods to punish the free-riding agent separately. LIO (Yang et al., 2020) enables each agent to punish, thereby implementing the sanction mechanism in a decentralized manner. Vinitsky et al. (2021) adopts a combination of centralized and decentralized modules and judges the decentralized sanctioning behavior of the agent through the centralized module, thereby encouraging appropriate sanctioning behaviors and avoiding unreasonable behaviors. Furthermore, Dong et al. (2021) introduces homophily into the MARL to solve the second-order social dilemma caused by sanctions. _Strategic_ solutions assume that all individuals in the group are egoists and that the algorithm does not change the game's structure. Such methods rely on an individual's ability to shape other individuals' payoffs, thereby directly influencing the behavior of others. Direct and indirect reciprocity is the main mechanisms introduced into MARL. Eccles et al. (2019) introduces the classic direct reciprocity algorithm tit-for-tat (Axelrod and Hamilton, 1981) into the solution of ISD. In order to realize the "imitation" at the core of tit-for-tat and the definition of the binary action (cooperate and defect) in ISD, Eccles et al. (2019) divides the agents into innovators and imitators and introduces the niceness function based on the deep advantage function. Anastassacos et al. (2021) introduces two core concepts of indirect reciprocity, reputation and social norm (Santos et al., 2021) into MARL and uses them as fixed rules to construct the agent's action space. _Motivational_ solutions assume agents are not entirely egoistic and so give some attention (passively or actively) to the outcomes of their partners. One of the typical mechanisms is communication. Across a wide variety of economics and social psychology studies, when individuals are given a chance to talk with each other, cooperation increases significantly (Orbell et al., 1988, 1990). Although there are many works (Sheng et al., 2020; Ahilan and Dayan, 2021) on communication learning in MARL, little attention has been paid to the role of communication in solving ISD. Pretorius et al. (2020) first uses empirical game-theoretic analysis (Tuyls et al., 2018) to study existing communication learning methods in ISD and to verify the effects of these methods experimentally. Another typical mechanism is social value orientation. 
Social value orientations (SVOs), or heterogeneous distributive preferences (Batson, 2012; Cooper and Kagel, 2016; Eckel and Grossman, 1996; Rushton et al., 1981; Simon, 1993), are widely recognized in social psychology and economics as an effective mechanism for promoting the emergence of human cooperative behavior in different social dilemmas (McKee et al., 2020). The above three types of methods mainly make breakthroughs in methodology and are accompanied by simulation experiments to verify the correctness of the conclusions. Considering the completeness of the theory and the feasibility of convergence analysis, this paper mainly focuses on solving intertemporal or public good social dilemmas based on social value orientations. The aforementioned mainstream conclusions about SVO from social psychology and economics are mainly supported by interdependence theory (Hansen, 1982).

Figure 1: Interdependence theory in the prisoner's dilemma (Encyclopaedia Britannica, 2022): the four pathways depict transformation processes for a row player who has individualistic, competitive, cooperative, and altruistic preferences, respectively; four resulting transformations suggest different dominant strategies (highlighted in green).

In social psychology and economics games, classical game theory does not accurately predict human behavior. This is because, in these human-involved games, each player does not rely on the given payoff matrix to make decisions but on their own "effective" payoff matrix (Hansen, 1982; McKee et al., 2020). The effective payoff matrix is constructed by redistributing payoffs for the given payoff matrix based on the players' respective SVOs. As seen from Figure 1, different SVOs will make players choose different dominant strategies when facing the prisoner's dilemma, thus affecting the emergence of cooperation. Many different social value orientations are theoretically possible, but most work has concentrated on various linear combinations of individuals' concern for the rewards for themselves and their partners. Inspired by the interdependence theory, many previous works have introduced the SVO into MARL to solve the ISD (Peysakhovich and Lerer, 2018; Hughes et al., 2018; Zhang et al., 2019; Wang et al., 2019; Baker, 2020; Gemp et al., 2022; Yi et al., 2021; Ivanov et al., 2021; Schmid et al., 2021). Peysakhovich and Lerer (2018) introduces the SVO into MARL for the first time and proposes the concept of prosocial, that is, cooperative orientation agents. The reward function of a prosocial agent is shaped as a fixed linear combination of its reward and the others'. Hughes et al. (2018) introduces an inequity aversion model in ISD, namely equality orientation, which promotes cooperation by minimizing the gap between one's return and that of other individuals. The latter work is no longer satisfied with a fixed linear combination and begins to introduce trainable weight parameters. Baker (2020) first attempts to randomize the linear weights of one's and others' rewards to observe whether cooperative behavior emerges. Since the linear weights are always greater than 0, all agents can be roughly classified into three categories: cooperative-oriented, altruistic-oriented, or individual-oriented. Going a step further, D3C (Gemp et al., 2022) optimizes the linear combination weights by using the ratio of the worst equilibrium to the optimal solution (Price of Anarchy, PoA) that measures the quality of the equilibrium points.
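To make the effective-payoff transformation described above concrete, the following minimal sketch computes a row player's "effective" payoff matrix as a linear combination of their own and their partner's payoffs, and shows how a cooperative orientation can flip the dominant strategy in a prisoner's dilemma. The payoff numbers and weights are our own illustrative assumptions, in the spirit of Figure 1.

```python
import numpy as np

# Row player's and column player's payoffs for a prisoner's dilemma
# (index 0 = cooperate, 1 = defect); illustrative numbers.
R_self  = np.array([[3.0, 0.0],
                    [4.0, 1.0]])
R_other = np.array([[3.0, 4.0],
                    [0.0, 1.0]])

def effective_payoff(w_self, w_other):
    """Effective payoff matrix after an SVO-style linear transformation."""
    return w_self * R_self + w_other * R_other

def dominant_row_action(U):
    """Return the row action that is a (weakly) dominant strategy, if any."""
    if np.all(U[0] >= U[1]):
        return "cooperate"
    if np.all(U[1] >= U[0]):
        return "defect"
    return "none"

print(dominant_row_action(effective_payoff(1.0, 0.0)))  # individualistic -> defect
print(dominant_row_action(effective_payoff(0.5, 0.5)))  # cooperative     -> cooperate
```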
Concurrent work LToS (Yi et al., 2021) models the optimization problem of linearly transforming weights as a bi-level problem and uses an end-to-end approach to train weights and policies jointly. Considering the noise or privacy issues that instantaneous rewards for SVO modeling may introduce, some recent works shape the agents' reward in other ways. Schmid et al. (2021) realizes the conditional linear combination of agent rewards by introducing the idea of the market economy. Zhang et al. (2019) and Ivanov et al. (2021) use state-value and action-value functions to implement SVO modeling. Wang et al. (2019) directly uses reward-to-go and reward-to-come, combined with evolutionary algorithms, to optimize the weights of nonlinear (MLP-based) combinations. However, these methods cannot stably and efficiently converge to mutual cooperation under complex ISDs, which are further verified in our numerical experiments in Section 4. The conceptual diagram of our solution is shown in Figure 2. Specifically, we find that a typical mechanism of human society, i.e., division of labor or roles, can benefit from providing a promising solution for the ISD combined with SVOs. The effectiveness of the division of labor in solving the ISD has emerged in existing MARL works but is still underexplored. The numerical results from sanction-based methods Yang et al. (2020); Vinitsky et al. (2021) on the typical ISD task _Cleanup_(Hughes et al., 2018) and _Altelopathic Harvest_(Koster et al., 2020) show that policies solving ISDs effectively exhibit a clear division of labor (Figure 3). Many natural systems feature emergent division of labor, such as ants (Gordon, 1996), bees (Jeanson et al., 2005), and humans (Butler, 2012). In these systems, the division of labor is closely related to the roles and is critical to labor efficiency. The division of labor, or the role theory, has been widely studied in sociology and economics (Institute, 2013). A role is a comprehensive pattern of behavior, and agents with different roles will show different behaviors. Thus the overall performance can be improved by learning from others' strengths (Wang et al., 2020). These benefits inspired multi-agent system designers, who try to reduce the design complexity by decomposing the task and specializing agents with the same role to certain sub-tasks (Woolridge et al., 2000; Omicini, 2000; Padgham and Winikoff, 2002; Pavon and Gomez-Sanz, 2003; Cossentino et al., 2005; Zhu and Zhou, 2008; Spanoudakis and Moraitis, 2010; DeLoach and Garcia-Ojeda, 2010; Bonjean et al., 2014). However, roles and the associated responsibilities (or subtask-specific rewards Sun et al. (2020)) are predefined using prior knowledge in this systems (Lhaksmana et al., 2018). Although pre-definition can be efficient in tasks with a clear structure, such as software engineering (Bresciani et al., 2004), it hurts generalization and requires prior knowledge that may not be available in practice. To solve this problem, Wilson et al. (2010) uses Bayesian inference to learn a set of roles, and ROMA (Wang et al., 2020) designs a specialization objective to encourage the emergence of roles. Wang et al. (2021) improves the learning efficiency in hard-exploration tasks by first decomposing joint action spaces according to action effects, which makes role discovery much more effortless. Unfortunately, none of these methods considers the intertemporal social dilemma. 
Drawing the insight from studies in social psychology that characteristics of laborers, or roles, influence the SVOs reciprocally (Sutin and Costa, 2010; Holman and Hughes, 2021), this paper uses the agent's SVO to represent the role of each agent, transforming the role learning problem into the emergence of the agent's SVO, thereby naturally constructing a role-based framework in MARL to solve ISD. Specifically, we use the SVOs, i.e., the coefficients in the transformation matrices from interdependence theory, to represent each agent's role, e.g., \((0,1)\) in individualistic preference and \((0.5,0.5)\) in cooperative preference. This method assumes that all agents will have real-time access to one another's rewards while learning. However, making reward data unrestrictedly accessible is undesirable for several reasons. For example, agent designers want to imperceptibly modify the agent's reward function or prohibit sharing their agents' reward function (Kairouz et al., 2021). This makes the emergence of social value orientation unfeasible, making it impossible to promote the division of labor based on SVOs. Inspired by the fact that _altruism_ plays a crucial role in humans' solutions to social dilemmas (Kollock, 1998; Eisenberg and Mussen, 1989), that is, consumers altruistically share a part of their profits with producers, this paper proposes a novel algorithm framework, called Learning Role with Emergent SVO (RESVO), that establishes a symmetric relationship between SVO emergence and learning to share. RESVO encourages agents to learn to dynamically share the reward with other agents (see Figure 4). In this learning paradigm, the learnable parameters, or the SVO of each agent, are the proportions2 of rewards it receives from other agents to the extrinsic rewards of these reward givers. We take these emergent SVOs as the role representations of each agent. RESVO then imposes a novel low-rank constraint on the SVO matrix of all agents to effectively represent the different roles of agents and uses projected gradient descent to solve constrained optimization problems. Furthermore, to establish the connection between roles and decentralized policies, RESVO conditions agents' policies on individual emergent SVOs by explicitly feeding agents' SVO-based role embeddings into their local policies correspondingly.

Figure 2: The conceptual diagram of the proposed RESVO, which is based on the social value orientations combined with a typical "division of labor or roles" mechanism, can benefit from providing a promising solution for the intertemporal social dilemma. RESVO is divided into two training phases of joint optimization and interleaved update: SVO-based role or division of labor emergence and role or division-based policy optimization. In the first phase, RESVO transforms the learning of roles into a social value orientation emergence problem, which is symmetrically solved by endowing agents with altruism to learn to share rewards with other agents. An SVO-based role embedding space is then constructed by conditioning individual policies on roles with a novel rank regularizer and mutual information maximizer. Moreover, RESVO optimizes the policies based on the multi-agent policy gradient theorem in the second phase by maximizing the shaped rewards of all agents with different emerged social value orientations.
Furthermore, to associate roles with responsibilities, we propose to learn SVOs that are identifiable by agents' long-term behaviors by maximizing the conditional mutual information between the individual trajectory and the emergent SVO given the current observation and other agents' actions, which is similar to (Wang et al., 2020).

Figure 4: Symmetrically converting (left) the social value orientation learning problem to (right) the learning to share problem in a three-agent environment. The circles of different colors represent different agents, and the numbers of different colors represent the parameters that each agent needs to learn. Assuming that \(r_{1}\), \(r_{2}\), and \(r_{3}\) represent the extrinsic reward of each agent, the shaped reward of \(A_{1}\) is \(0.6r_{1}+0.2r_{2}+0.2r_{3}\). The shaped rewards of other agents can also be computed similarly.

Figure 3: (a) is a snapshot of the division of labor found by Yang et al. (2020) in _Cleanup_ task, where the blue agent picks apples, and the purple one stays on the riverside to clean waste. In contrast, (b) shows a jointly suboptimal division where two failure agents compete for apples. (c-d) Vinitsky et al. (2021) shows similar results in _Allelopathic Harvest_ task.

## 2 Preliminaries

Although studies on social dilemmas have contributed significantly to the research of cooperation emergence for decades (Axelrod and Hamilton, 1981; Peysakhovich and Lerer, 2018; Anastassacos et al., 2020), they focus on matrix games and fixed binary policies. To be more realistic, as in real-world situations, the MARL community considers the intertemporal social dilemmas (ISDs, Leibo et al. (2017); Hughes et al. (2018)). Before conducting numerical experiments, we first give the formal definition of ISD as follows. An ISD can be modeled as a partially observable general-sum Markov game (Hansen et al., 2004), \[\mathcal{M}=\left\langle\mathcal{I},\mathcal{S},\left\{\mathcal{A}_{i}\right\}_{i=1}^{N},\left\{\mathcal{O}_{i}\right\}_{i=1}^{N},\mathcal{P},\mathcal{E},\left\{\mathcal{R}_{i}\right\}_{i=1}^{N}\right\rangle,\] where \(\mathcal{I}\) represents the \(N\)-agent space. \(s\in\mathcal{S}\) represents the true state of the environment. We consider partially observable settings, where agent \(i\) only has access to a local observation \(o_{i}\in\mathcal{O}_{i}\) according to the emission function \(\mathcal{E}(o_{i}\mid s)\). At each timestep, each agent \(i\) selects an action according to a policy \(a_{i}\sim\pi_{i}\left(a\mid o_{i}\right)\), forming a joint action \(\mathbf{a}=\left\langle a_{1},\ldots,a_{N}\right\rangle\in\times\mathcal{A}_{i}\), which results in the next state \(s^{\prime}\) according to the transition function \(\mathcal{P}\left(s^{\prime}\mid s,\mathbf{a}\right)\) and a reward \(r_{i}=\mathcal{R}_{i}(s,\mathbf{a})\). In ISDs, agents must learn cooperation or defection policies consisting of potentially long sequences of environmental actions instead of taking atomic cooperation or defection actions. In this paper, we focus on the episodic game with horizon \(T\), and the goal of each agent is to maximize the _local_ expected return, i.e., \[Q_{i}^{\pi}(s,\mathbf{a})=\mathbb{E}_{s_{0:T},\mathbf{a}_{0:T}\sim\pi,P}\left[\sum_{t=0}^{T}\gamma^{t}\mathcal{R}_{i}\left(s_{t},\mathbf{a}_{t}\right)\mid s_{0}=s,\mathbf{a}_{0}=\mathbf{a}\right].\]

## 3 Methods

This section proposes a novel learning framework, RESVO, that transforms role-based learning into an SVO emergence problem to solve the ISD.
Because consumers altruistically share a part of their profits with producers and making reward data unrestrictedly accessible is undesirable for several reasons, the proposed RESVO achieves SVO emergence by endowing agents with altruism to learn to share rewards with different weights to other agents. An SVO-based role embedding space is then constructed by introducing a novel low-rank constraint and conditioning individual policies on roles to ensure that the emergent SVO can effectively represent the different roles of agents and associate roles with responsibilities. Therefore, the following content in this section will be expanded from two aspects: SVO-based role emergence and role-based policy optimization. ### SVO-based Role Emergence As mentioned in Section 1, to consider the fact that consumers altruistically share a part of their profits with producers and avoid the realistic constraint (making reward data unrestrictedly accessible) imposed by directly learning the SVO of each agent according to the independence theory, RESVO enables agents to learn to dynamically share the reward with other agents, as shown in Figure 4. Specifically, the SVO-based role emergence mechanism learns an orientation function for each agent by explicitly accounting for its impact on recipients' behavior and, through them, the impact on its extrinsic objective. Each agent gives rewards using its orientation function and learns an SVO-conditioned policy with all received rewards. For clarity, we use index \(i\) when referring to the reward-sharing part of an agent, and we use \(j\) for the part that learns from the received reward, which is similar with Yang et al. (2020). A reward-sharing agent \(i\) learns a individual orientation function, \(w_{\eta_{i}}^{i}:\mathcal{O}_{i}\times\mathcal{A}_{-i}\mapsto\mathbb{R}^{N}\), parameterized by \(\eta_{i}\), that maps its own observation \(o_{i}\) and all other agents' actions \(a_{-i}\) to a vector of reward-sharing ratios for all \(N\) agents. Unlike the existing methods based on SVO (Peysakhovich and Lerer, 2018; Baker, 2020; Gemp et al., 2022; Yi et al., 2021) or section (Koster et al., 2020; Lupu and Precup, 2020; Yang et al., 2020; Vinitsky et al., 2021; Dong et al., 2021) mechanism, the orientation function in RESVO **(1)** allows agents to reward itself 3, and **(2)** the sum of all sharing ratios does not need to be equal to 1. This is one of the reasons why reward sharing, a mechanism used by existing work, can encourage the division of labor and solve ISD. The intuition behind this lies in the particular properties of the public good dilemma. If there is a good division of labor among agents, the reward (punishment) of the agent does not come entirely from its behavior but partly from the producers (the consumers). Therefore, the agent that gets the reward needs to share a part with other agents and only gets a part of it (corresponding to the first point); Moreover, in a multi-agent scenario, a reward may come from the behavior of multiple producers, so it needs to share the same reward with multiple agents (corresponding to the second point). Footnote 3: But, this does not mean that the orientation function will converge to some trivial function, such as the agent giving itself an infinite reward. Because the final reward received by the agent is its reward multiplied by the sharing ratio, and the sharing ratio is in the closed range of 0 to 1. Similar with Lupu and Precup (2020); Yang et al. 
(2020), \(w_{\eta_{i}}\) is separate from the agent's conventional policy and is learned via direct gradient descent on the agent's extrinsic objective to reduce the learning difficulty. Specifically, at each timestep \(t\), each recipient \(j\) receives a total reward \[r_{j}(\boldsymbol{\eta},\boldsymbol{r}):=w^{j}_{\eta_{i}}[j]\cdot r_{j}+\sum_ {i\neq j}w^{i}_{\eta_{i}}[j]\cdot r_{i}, \tag{1}\] where \(w^{j}_{\eta_{i}}[j]\) and \(w^{i}_{\eta_{i}}[j]\) denotes the \(j\)-th elements of \(w^{j}_{\eta_{i}}\) and \(w^{i}_{\eta_{i}}\) respectively, \(\boldsymbol{r}:=[r_{0},\cdots,r_{N}],\boldsymbol{\eta}=[\eta_{1},\cdots,\eta_ {N}]\). Although the sharers' rewards appear in Equation 1, the recipients can only see the sharers' discounted rewards when implemented. Each agent \(j\) learns a SVO-based role conditioned policy \(\pi_{j}(\cdot\mid o_{j},e_{j}(\boldsymbol{\eta}))\) parameterized by \(\theta_{j}\), where \(e_{j}(\cdot)\) is the SVO-based role embedding of agent \(j\). After each agent has updated its policy to \(\hat{\pi}_{j}\), parameterized by new \(\hat{\theta}_{j}\), with role-based policy optimization (Section 3.2) via trajectories \(\tau_{i}\) sampled by joint policies \(\{\pi_{j}\}\), we sample a set of new trajectories with new joint policy \(\{\hat{\pi}_{j}\}\). Using these trajectories, each agent \(i\) updates the individual orientation parameters \(\eta_{i}\) to maximize the following objective \[\max_{\eta_{i}}J^{\text{svo}}(\hat{\tau}_{i},\tau_{i},\hat{\boldsymbol{\theta }},\boldsymbol{\eta}):=\mathbb{E}_{\hat{\boldsymbol{\pi}}}\left[\sum_{t=0}^{T} \gamma^{t}\hat{r}_{i}^{t}\right],\quad\text{s.t.}\operatorname{rank}(W^{t}_{ \boldsymbol{\eta}})=k,\forall t\in[0,T), \tag{2}\] where \(\hat{r}_{i}^{t}\) is the newly sampled extrinsic reward in \(\hat{\theta}_{i}\), \(W^{t}_{\boldsymbol{\eta}}=\{w^{i,t}_{\eta_{i}}\}_{i=1}^{N}\) is the matrix composed of the reward sharing ratios of all agents at timestep \(t\), and \(k\leq N\) is a hyperparameter. To ensure that the emergent SVO can effectively represent the different roles of agents, RESVO introduces a novel rank constraint on the SVO matrix \(W^{t}_{\boldsymbol{\eta}}\) of all agents, and \(k\) can be regarded as the theoretical optimal number of roles. To be able to optimize (2) with an automatic differentiation toolkit in an end-to-end manner, we transform (2) into the following unconstrained optimization problem based on projected gradient descent by introducing an intrinsic reward \[\max_{\eta_{i}}J^{\text{svo}}:=\mathbb{E}_{\hat{\boldsymbol{\pi}}}\left[\sum_{t =0}^{T}\gamma^{t}\left(\hat{r}_{i}^{t}-\alpha\|W^{i,t}_{\eta_{i}}-W^{i,t}_{k} \|_{2}^{2}\right)\right], \tag{3}\] where \(W^{t}_{k}\) is the \(k\)-rank approximation of \(W^{t}_{\boldsymbol{\eta}}\) obtained with SVD algorithm and \(\alpha^{t}\) is another hyperparameter, and superscription \(i\) denotes the \(i\)-th column of the matrix. In practice, following the derivation process of (Yang et al., 2020), one can define the loss as \[-\sum_{t=0}^{T}\sum_{j=1}^{N}\log\pi_{\hat{\theta}_{j}}^{j}\left( \hat{a}_{j}^{t}\mid\hat{o}_{j}^{t},\hat{e}_{j}^{t}(\boldsymbol{\eta})\right)\cdot \tag{4}\] \[\sum_{\ell=t}^{T}\gamma^{\ell-t}\left(\hat{r}_{i}^{\ell}-\alpha\| \Delta^{i,\ell}(W,k)\|_{2}^{2}\right)-2\alpha\nabla_{\eta_{i}}W^{i,t}_{\eta_{ i}}\Delta^{i,t}(W,k),\] and directly minimize it via automatic differentiation, where \(\Delta^{i,\ell}(W,k):=W^{i,\ell}_{\eta_{i}}-W^{i,\ell}_{k}\) and \(\Delta^{i,t}(W,k):=W^{i,t}_{\eta_{i}}-W^{i,t}_{k}\). 
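As a small numerical illustration of the shaped reward in (1) and of the \(k\)-rank projection that defines the penalty in (3), the following sketch uses a fixed sharing-ratio matrix with made-up numbers (in RESVO itself these ratios come from the learned orientation functions \(w^{i}_{\eta_{i}}\), and the exact row/column bookkeeping follows the paper's notation):

```python
import numpy as np

# Sharing-ratio matrix W at one timestep: W[i, j] is the fraction of agent i's
# extrinsic reward that agent i shares with agent j (made-up numbers, cf. Figure 4).
W = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.6, 0.2],
              [0.2, 0.2, 0.6]])
r = np.array([1.0, 0.0, 2.0])          # extrinsic rewards r_i

# Eq. (1): the total reward of recipient j is sum_i W[i, j] * r_i.
shaped = W.T @ r
print(shaped)                          # agent 0 receives 0.6*1.0 + 0.2*0.0 + 0.2*2.0 = 1.0

# k-rank approximation W_k used in the penalty term of (3), via a truncated SVD.
def rank_k_approx(M, k):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

W_k = rank_k_approx(W, k=2)
penalty = np.sum((W - W_k) ** 2)       # squared deviation from a rank-k matrix
print(penalty)
```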
Crucially, \(\hat{\theta}^{j}\) must preserve the functional dependence of the policy update step (7) on \(\eta_{i}\) within the same computation graph. It is worth noting that the agent's role representation \(e_{i}(\mathbf{\eta})\) is **NOT** composed of its reward _sharing_ ratios \(w^{i}_{\eta_{i}}\). However, its reward _recipient_ ratios, i.e., we take the \(i\)-th row in the matrix \(W^{t}_{\mathbf{\eta}}\) as the role representation of agent \(i\) at timestep \(t\), as shown by interdependence theory in Figure 1. Intuitively, agents with similar role representations have similar divisions of labor and thus receive similar rewards. In other words, RESVO decomposes the extrinsic reward received by all agents into several parts according to the functional composition of the agent during the SVO emergence learning stage. The difference in rewards will directly lead to differences in the agents' policies, thereby encouraging the formation of the division of labor among the agents. ### Role-based Policy Optimization Introducing SVO-based role embedding and conditioning individual policies on this embedding explicitly establishes the connection between the role and the individual policies to encourage the division of labor through the diversity of roles. However, this does not enable the role to constrain the agent's long-term behavior. Intuitively, conditioning roles on local observations and actions4 enables roles to be responsive to the changes in the environment but may cause roles to change quickly. Thus roles and responsibilities cannot be effectively associated. To address this problem, we expect SVO-based roles to be temporally stable. Footnote 4: The orientation function \(w^{i}_{\eta_{i}}\) maps agent \(i\)’s observation \(o_{i}\) and all other agents’ actions \(a_{-i}\) to a vector of reward-sharing ratios for all \(N\) agents. Drawing inspiration from Eysenbach et al. (2019); Wang et al. (2020), we propose to learn SVO-based roles that are identifiable by agents' long-term behaviors, which can be achieved by maximizing \(I(\tau_{i};e_{i}\mid\mathbf{o},\mathbf{a})\), the conditional mutual information between the individual trajectory and the role given the current _joint_ observation and _joint_ action. The conditional mutual information calculated here is based on the joint actions and observations of all agents because \(e_{i}\) cannot be generated by local observations \(o_{i}\). The role representation of agent \(i\), \(e_{i}\), is the \(i\)-th **row** in the matrix \(W_{\mathbf{\eta}}\). Based on variational inference, a variational posterior estimator can be proposed to derive a tractable lower bound for the mutual information objective \[\max_{\mathbf{\eta},\phi}\;J^{\text{mi}}_{i}:=I(e^{t}_{i};\tau^{t-1}_{i}\mid\mathbf{o} ^{t},\mathbf{a}^{t})\geq\mathbb{E}_{e^{t}_{i},\tau^{t-1}_{i},\mathbf{o}^{t},\mathbf{a}^{t }}\left[\log\frac{q_{\phi}\left(e^{t}_{i}\mid\tau^{t-1}_{i},\mathbf{o}^{t},\mathbf{a} ^{t}\right)}{W_{\mathbf{\eta}}\left(e^{t}_{i}\mid\mathbf{o}^{t},\mathbf{a}^{t}\right)} \right], \tag{5}\] where "mi" stands for "mutual information", \(\tau^{t-1}_{i}=\left(o^{0}_{i},a^{0}_{i},\cdots,o^{t-1}_{i},a^{t-1}_{i}\right) ^{5}\) and \(q_{\phi}\) is the variational estimator parameterized by \(\phi\). Here we use \(W_{\mathbf{\eta}}\) to represent orientation function \(w^{i}_{\eta_{i}}\) of all agent \(i\). For \(q_{\phi}\), we use a causal transformer (Chen et al., 2021) (shared by all agents) to encode an agent's history of observations and actions. 
The lower bound in (5) can be further rewritten as a loss function to be minimized via automatic differentiation \[\begin{split}\mathrm{L}^{\text{mi}}\left(\mathbf{\tau};\mathbf{\eta},\phi\right)&=\frac{1}{N}\sum_{i=1}^{N}\mathrm{L}^{\text{mi}}_{i}\left(\tau_{i};\mathbf{\eta},\phi\right)\\ &=\mathbb{E}\left[D_{\text{KL}}\left[W_{\mathbf{\eta}}\left(e^{t}_{i}\mid\mathbf{o}^{t},\mathbf{a}^{t}\right)\|q_{\phi}\left(e^{t}_{i}\mid\tau^{t-1}_{i},\mathbf{o}^{t},\mathbf{a}^{t}\right)\right]\right],\end{split} \tag{6}\] where \(D_{\text{KL}}[\cdot\|\cdot]\) is the KL-divergence operator, and the conditioning terms in \(W_{\mathbf{\eta}},q_{\phi}\) are omitted for convenience. See the Appendix for the detailed derivation. In addition, to further improve learning stability, the individual policies make decisions based not on the current role embedding alone but on the latest \(m\) role embeddings. That is, each agent \(j\) learns an SVO-based role-conditioned policy \(\pi_{j}(a^{t}_{j}\mid o^{t}_{j},\{e^{l}_{j}(\mathbf{\eta})\}^{t}_{l=t-m+1})\) parameterized by \(\theta_{j}\). Our experiments found that for complex ISDs with long episodes, the mutual information constraint and decision-making based on historical role embeddings can be crucial in improving performance. Thus, the objective of role-based policy optimization for each agent \(j\) is as follows, based on the multi-agent policy gradient theorem (Lowe et al., 2017) \[\max_{\theta_{j}}J^{\text{policy}}(\theta_{j},\mathbf{\eta}):=\mathbb{E}_{\mathbf{\pi}(\cdot|\cdot,e_{j}(\mathbf{\eta}))}\left[\sum_{t=0}^{T}\gamma^{t}r_{j}^{t}(\mathbf{\eta},\mathbf{r}^{t})\right]. \tag{7}\] ### Algorithm Summary The learning process of RESVO consists of two main steps, namely, SVO-based role emergence (Eq. 2) and role-based policy optimization (Eq. 7). To improve the expressiveness and responsibility of the role representation, we additionally impose the rank constraint (Eq. 2) and the mutual information regularization (Eq. 5) on the objectives of role emergence and policy optimization. Therefore, the final learning objective of RESVO for each agent \(i\) is \[\max_{\theta_{i},\eta_{i},\phi}J:=\overbrace{\lambda_{\text{svo}}J^{\text{svo}}+\lambda_{\text{mi}}J^{\text{mi}}}^{\text{role emergence}}+\underbrace{\lambda_{\text{p}}J^{\text{policy}}}_{\text{policy optimization}}, \tag{8}\] where \(\lambda_{\text{svo}},\lambda_{\text{mi}},\lambda_{\text{p}}\) are scaling factors. Furthermore, the pseudo-code of the proposed RESVO is shown in Algorithm 1. ``` 0: Initialize parameters of policies \(\theta_{i}\), orientation functions \(\eta_{i}\), variational estimator \(\phi\); 1:for each iteration do 2: Collect interactions \(\tau_{i}\) with \(\pi_{\theta_{i}}\) and update replay buffer \(\mathcal{D}\leftarrow\{\tau_{i}\}\); 3: Generate trajectories \(\{\tau_{i}\}\) using \(\mathbf{\theta}\) and \(\mathbf{\eta}\), and for all reward receivers \(j\), update \(\hat{\theta}_{j}\) via (7); 4: Generate new trajectories \(\{\hat{\tau}_{i}\}\) using new \(\hat{\mathbf{\theta}}\) and for reward sharers \(i\), compute \(\hat{\eta}_{i}\) via (2),(5); 5: For the variational estimator, compute \(\hat{\phi}\) via (5), \(\theta_{i}\leftarrow\hat{\theta}_{i}\), \(\eta_{i}\leftarrow\hat{\eta}_{i}\) for all \(i\), \(\phi\leftarrow\hat{\phi}\). 
6:endfor ``` **Algorithm 1** Learning Role with Emergent SVO (RESVO) ## 4 Experiments Although studies on social dilemmas have contributed significantly to research on cooperation emergence for decades (Axelrod and Hamilton, 1981; Peysakhovich and Lerer, 2018a; Anastassacos et al., 2020), they focus on matrix games and fixed binary policies. To better reflect real-world situations, the MARL community considers intertemporal social dilemmas (ISDs, Leibo et al. (2017); Hughes et al. (2018), see Appendix 2 for the formal definition), which can be modeled as a partially observable general-sum Markov game (Hansen et al., 2004). Figure 5: Three different environments with increasing complexity. Our experiments demonstrate that the division of labor can effectively solve ISDs of different difficulties. Moreover, compared with other mechanisms, the division of labor can converge to mutual cooperation faster and more stably. Our method is tested in the following three typical public good dilemmas with increasing complexity (see Figure 5) against several baselines. * The first one, the Iterated Prisoner's Dilemma (IPD) (Foerster et al., 2018), is often regarded as the canonical and most difficult of 2-by-2 game-theoretic cooperation problems. The prisoner setting may seem contrived, but there are, in fact, many examples of human interaction as well as interactions in nature that have the same payoff matrix. Many natural processes have been abstracted into models in which living beings are engaged in endless games of prisoner's dilemma, e.g., climate-change politics in environmental studies (Rehmeyer, 2012), the reciprocal food exchange of vampire bats (Davis, 2017), doping in sport (Schneier, 2012), and the coherence of strategic realism in international politics (Majeski, 1984). This broad applicability of the PD gives the game substantial importance. In an IPD, agents observe the joint action taken in the previous round and receive the rewards shown in Figure 5. Although, by definition, a public good dilemma needs to contain more than 2 agents (Kollock, 1998), the prisoner's dilemma can be viewed as a simplified version. The agent that selects the "cooperate" action corresponds to the "producer", and the agent that selects the "defect" action corresponds to the "consumer". In this simplified version of the public good dilemma, defection or free-riding is the dominant strategy. * The second one, the \(N\)-Player Escape Room (ER) (Yang et al. (2020, Figure 1)), is a discrete \(N\)-player Markov game with parameter \(M<N\). ER is a more complex public good dilemma. With \(M=2\) and \(N=3\), the optimal joint policy of the game has an asymmetric division of labor, which is also widespread in the economic activities of human society. Specifically, there are 3 discrete states in this game, i.e., "start", "door" and "lever". An agent gets a \(+10\) extrinsic reward for exiting through a "door" and ending the game, but the "door" can only be opened when \(M\) other agents cooperate to pull the "lever". However, an extrinsic penalty of \(-1\) for any movement discourages all agents from taking this cooperative action. * The third one, Cleanup (Leibo et al., 2017; Hughes et al., 2018), is a high-dimensional grid-world intertemporal social dilemma that serves as a challenging benchmark for independent learning agents. 
Moreover, the Cleanup can also be considered as a simplified version of the tax simulator, _Gather-Trade-Build (GTB)_(Zheng et al., 2022), but since there is no significant division of labor in the tasks involved in the latter, it is not considered in this paper. In the Cleanup, agents get \(+1\) (for the small 10 by 10 map) or \(+0.25\) (for the big 48 by 18 map) reward by collecting apples, which spawn on the map at a linear decay rate as the amount of waste approaches a depletion threshold. Each episode starts with a waste level above the threshold and no apple present. An agent can contribute to the public good by firing a cleaning beam to clear waste (no reward). This would enable other agents to be free riders, resulting in a problematic public good dilemma. We selected the aforementioned role learning algorithm, **ROMA**(Wang et al., 2020), and sanction-based algorithm, **LIO**(Yang et al., 2020), as the baselines because our RESVO draws on the core ideas of them in the SVO-based role emergence and the role-based policy optimization6 respectively. In addition, we selected several aforementioned SVO-based algorithms, including **LToS**(Yi et al., 2021) and **D3C**(Gemp et al., 2022) as baselines. Below we will briefly introduce the core idea of each algorithm again. ROMA designs an end-to-end specialization learning objective to encourage the emergence of roles in general MARL tasks for better cooperation and generalization and avoid the requirement of prior or expert knowledge. LIO enables each agent to punish, thereby implementing the sanction mechanism in a decentralized manner. LToS uses a bi-level optimization scheme similar to LIO but uses meta-gradients instead of data sampled by other agents' updated policies for the learning of SVOs; D3C uses the price of anarchy instead of the joint expected cumulative reward in LToS as the optimization objective of SVOs. Meanwhile, the SVO-based role emergence mechanism in RESVO can be introduced into other SVO-based methods as a plug-and-play module. In order to verify the general promotion effect of the division of labor on SVO-based methods, we added the role learning mechanism in RESVO to two SVO-based works, denoted as **LToS+r** and **D3C+r** respectively. ### Role Emergence in the Classic Tasks We first analyze the performance of each algorithm on the 2-players task, IPD. Figure 6 shows the performance and emerged SVOs of RESVO and the performance of four baselines in the IPD environment. In the IPD, a simplified version of the public good dilemma, we want both agents to be producers (that is, to cooperate with each other) to achieve the highest social welfare. Therefore, we set the rank constraint in RESVO to 1. This shows that the optimal joint policy of IPD requires only one role: the producer or the cooperator. It can be seen from the figure that the ROMA algorithm based only on the division of labor cannot stably converge to mutual cooperation. Both players choose the cooperate action and receive a \(-1\) environmental reward. In the early stages of training, ROMA can learn to cooperate. Nevertheless, once a player chooses the "defect" action, ROMA's role learning will fix the player's character, so she will always choose the "defect" action and make mutual cooperation unsustainable. The LToS and D3C algorithms based only on the SVO mechanism can converge to mutual cooperation quickly, but this equilibrium also cannot be maintained for a long time. 
However, the reasons for the inability of these two baselines to maintain mutual cooperation are not the same. For the LToS algorithm, the agents do not form a stable SVO: agent 1 stops sharing rewards with agent 2 from the middle of training (Figure 6(d)), so that agent 2 begins to explore other strategies. Although the D3C algorithm forms a stable SVO, the agent's policy and SVO have no conditional dependence, causing the policy to diverge at the end of the training. The LIO algorithm and the RESVO algorithm show the best results. However, LIO still has performance fluctuations after a training period and cannot maintain stable mutual cooperation. This suggests that the sanction-based approach does not sustain stable cooperation in the IPD. Moreover, an interesting phenomenon can be found in Figure 6(c-d), where RESVO and the other SVO-based baselines (i.e., LToS and D3C) learn two completely different SVOs to try to maintain cooperation. Figure 6: Extrinsic (a-b) and received rewards (c-d) of different algorithms in the Iterated Prisoner's Dilemma. Since ROMA is not designed based on SVO, rewards are not transmitted between its agents. As seen from the figure, after RESVO converges to mutual cooperation, the agents no longer need to receive rewards, nor do they share rewards. However, agents trained by the other baselines receive and share rewards throughout the training process. This illustrates that while RESVO maintains cooperation through an individualism orientation, the other baselines achieve the same result through a martyrdom orientation, where the coefficients in the transformation matrices are all negative7. Our analysis suggests that another reason for the inability of LToS and D3C to maintain stable cooperation may be this martyrdom orientation, an SVO that causes rewards to be passed between agents all the time. After the policies converge to the equilibrium of mutual cooperation, these rewards become noise and cause instability in the training process. In contrast, RESVO shifts from a cooperative orientation to an individualism orientation after the policies reach equilibrium, thus keeping the equilibrium stable. Although LIO is not an SVO-based method, the sanction mechanism also allows reward transmission between agents all the time, thus leading to the same instability of the training process. Footnote 7: In IPD, the external rewards of the agents are all negative, so only negative coefficients can deliver positive rewards. Next, we expand our study to the more complex 3-player Escape Room. Figure 7 shows the performance of RESVO and the baselines in the 3-Player Escape Room with \(M=2\). This means that two agents must pull the lever for the remaining agent to open the door. Figure 7(b) shows the satisfaction of the rank constraint of the RESVO algorithm during training. In the 3-player Escape Room, we set the rank constraint of RESVO to 2. Specifically, we hope that 2 roles emerge from the three agents through SVO: the lever-puller and the door-opener. As can be seen from the figure, as the training progresses, the rank constraint of the RESVO algorithm is well satisfied. A simple analysis of this task shows that the optimal policy only requires each agent to perform a single action. This is because it only takes one timestep for any agent to move from the "start" state to the "lever" or the "door". At the same time, each move brings a reward of \(-1\), so more than one move per agent reduces social welfare. 
Figure 7(a) records the steps required by the different algorithms to complete the task. Since ROMA performs poorly on the most straightforward IPD task, we no longer show the performance of ROMA on the more complex Escape Room and Cleanup tasks. It can be seen that RESVO converges to the optimal policy fastest. D3C and LIO take about 4 times as many samples as RESVO to converge to equilibrium. LToS falls into a locally optimal solution early and cannot escape from it. Figure 7: (a) Steps per episode. (b) The average rank of the SVO-based role matrix. (c-e) The number of lever pulls by each agent for the different algorithms in the 3-Player Escape Room. Figure 7(c-e) shows the division of labor among the agents in the 3-Player Escape Room under the different algorithms. It can be seen from these 3 panels that whether a division of labor is formed and whether it is stable dramatically affect the performance and convergence speed of an algorithm. RESVO has converged to a stable division of labor: agent 1 opens the door, and agents 2 and 3 pull the lever. Therefore, RESVO converges to equilibrium the fastest and maintains it, compared with the baselines. For the D3C algorithm, agent 2 is stably assigned the role of a lever-puller, but there is no stable division of labor between agent 1 and agent 3. The two oscillate between the roles of lever-puller and door-opener. For the LIO algorithm, none of the three agents has a fixed role. Moreover, comparing Figure 7(a) and Figure 7(c-e), it can be seen that even after the LIO algorithm converges to equilibrium, the three agents are still dynamically allocated between the two roles of lever-puller and door-opener. The instability of the division of labor in D3C and LIO also affects their convergence speed, and LIO converges more slowly than D3C. For LToS, the three agents are unable to form a correct division of labor, and the average number of lever pulls is greater than two, indicating that the average number of lever-pullers is less than 2. This forces an agent to change from "door-opener" to "lever-puller" to complete the task successfully. This role-change process makes the number of timesteps LToS needs to complete the task larger than 1 on average. To explore why RESVO can form a stable division of labor, Figure 8 shows how the agents share and receive rewards under the different algorithms, i.e., the emerged SVOs of the three SVO-based methods (including RESVO) and LIO. It is worth noting that in the more complex Escape Room and Cleanup environments, in order to give the SVO sufficient representational capacity, RESVO uses the transition-matrix coefficients of multiple consecutive time steps as the SVO. This differs from the setting in our IPD experiments and makes it impossible to classify an agent as one canonical kind of SVO. Therefore, in the following we analyze the agents' SVOs indirectly from their reward-sharing patterns. Figure 8: Shared and received rewards of the different algorithms in the 3-Player Escape Room. A counter-intuitive phenomenon can be seen in the figure. Those who pull the lever gain nothing, while those who open the door obtain a large benefit. Therefore, intuitively, to maintain a stable division of labor, the door-opener should share her reward with the lever-pullers so that both parties get rewards. At the same time, since each agent is self-motivated, this would form a stable role division. 
However, in the three SVO-based algorithms, RESVO, LToS, and D3C, the rewards are shared the other way around: those who pull the levers give rewards to those who open the door. The difference between these three algorithms is the size of the shared reward. A plausible explanation is that the SVO-based algorithms learned a more "aggressive" approach. The agent that discovers the action of pulling the lever gives a positive reward to the agent that discovers the action of opening the door, so that the role of the door-opener is fixed first; the role of the lever-puller is then determined. For RESVO, this approach can instead promote the rapid formation of the division of labor and its stability. The rank constraint in RESVO and the SVO-conditioned policy ensure that once one role is fixed, the other role is also fixed. Nevertheless, the other SVO-based methods do not have this advantage, so they converge slowly during training, and the division of labor cannot be maintained stably. It can be seen from the figure that at the beginning of training, RESVO agents share large rewards, which helps the algorithm converge quickly. After the algorithm converges, similar to the IPD task, the agents almost stop sharing and receiving rewards, thus maintaining the stability of the policies and of the division of labor. The sanction-based LIO method exhibits a pattern similar to that in the IPD environment. During the entire training process, even after the policies converge to equilibrium, the agents continue to transmit large rewards, maintaining the stability of the division of labor in a high-"cost" way. One defect of this method is that the module that generates the sanction always receives a large gradient during training, which makes the optimization process unstable. The algorithm converges slowly and cannot maintain a stable division of labor for a long time. ### Role Emergence in the Cleanup We finally test our method in Cleanup, starting with a map size of 10 by 10 and 2 agents. Although there are only 2 agents, this task is still much more complicated than the \(N\)-Player Escape Room because Cleanup has far more timesteps per episode. In this task, similar to the 3-Player Escape Room, there is also an apparent division of labor between the two agents under the optimal cooperative policy: one agent needs to clean up waste (producer), and the other agent needs to collect apples (consumer). We can judge the division of labor or roles of the two agents from the amount of waste they clean up. As seen from Figure 9, the different algorithms also show remarkable differences in the 2-agent Cleanup task. On the one hand, LToS and D3C have not learned a good division of labor, and both agents hardly clean up waste (Figure 9(c) and (f)), so no apples grow. This makes the extrinsic reward for both agents small (Figure 9(b) and (e)), and the joint policy suffers from the public good dilemma. As can be seen from Figure 9(a) and (d), the two agents hardly share rewards, and their SVOs are almost the same, which is why LToS and D3C cannot learn a good division of labor or roles. On the other hand, RESVO and LIO have successfully formed a reasonable division of labor. Agent 1 is responsible for collecting apples (consumer), and agent 2 is responsible for cleaning up waste (producer, see Figure 9(c) and (f)). 
At the same time, to maintain a stable division of labor or roles, agent 1 as a consumer continues to share rewards with agent 2 as a producer (Figure 9(a) and (d)), so as to achieve a larger average extrinsic reward, i.e., social welfare. That is, agent 1 maintains a stable division of labor by showing a cooperative orientation toward agent 2. However, similar to the previous tasks, the sanction-based LIO method is very different from RESVO in the way that roles emerge. The two algorithms exhibit different robustness in maintaining the division of labor. As can be seen from Figure 9(d), on the one hand, for LIO, agent 2, which is the producer, also needs to share its reward with agent 1. RESVO, on the other hand, uses a more "energy-efficient" approach to promoting role emergence: since producers receive no rewards, there is no need for them to share rewards with consumers, who already receive large extrinsic rewards. This sparsity of reward sharing between agents also enables RESVO to maintain a more stable division of labor while achieving greater social welfare. It can be seen from Figure 9(c) that agent 2 trained by the LIO algorithm does not always clean up the waste, and its behavior shows a significant variance. This also prevents the extrinsic reward of agent 1 from staying at a high level, and it likewise shows a significant variance (Figure 9(e)). Figure 9: Shared and received rewards, extrinsic rewards, and waste cleared of the different algorithms in the \(10\times 10\) map size, 2-Player Cleanup. ### Static versus Dynamic Division of Labor In the three tasks in the previous two sections, we find that the sanction-based LIO can also effectively learn a reasonable division of labor or roles. However, due to the difference between the sanction mechanism and SVO, LIO needs to transmit rewards densely between agents, which makes it impossible to maintain a stable division of labor. In other words, RESVO realizes a _static_ division of labor through the emergence of SVO, while LIO realizes a _dynamic_ division of labor. In a static division of labor, the role of each agent is fixed while completing the task; in a dynamic division of labor, by contrast, an agent's role changes over time. In addition, from the experiments in the previous two sections, we find preliminary evidence that a static division of labor can lead to better social welfare. Nevertheless, the above tasks only contain \(2\)-\(3\) agents, and the impact of the dynamic division of labor cannot be fully reflected. For example, for the IPD or Cleanup task that only consists of 2 agents, a dynamic division of labor is simply a role reversal of the two agents. To this end, we compare the performance of the algorithms in a larger Cleanup environment with a map size of 48 by 18 and 10 agents. There are only two roles in the Cleanup: the waste cleaner and the apple picker. When the number of agents exceeds 2, multiple agents will have the same role. Cooperation among roles, and among agents with the same role, is required to escape the public good dilemma and achieve greater social welfare. In this case, for cooperation within the same role, a static role assignment or division of labor has more advantages, because dynamic roles involve role changes, i.e., changes in the composition of the group of agents sharing a role. This makes cooperation within the same role unstable, affecting the algorithm's performance. 
The larger the number of agents, the more severe this problem of unstable cooperation becomes, which also causes more significant performance degradation. To test the above hypothesis, we conducted multiple random experiments in the 10-player Cleanup to ensure the reliability of the results. As seen from Figure 10, RESVO shows a clear performance advantage (about two times) compared to LIO in the more complex Cleanup environment. Figure 10: (a) Boxplot and (b) histogram of performance, and (c) the influence of the value of \(k\) on extrinsic rewards, under 10 runs of the different algorithms in the \(48\times 18\) map size, 10-Player Cleanup. Figure 10 verifies the impact on performance of the dynamic division of labor during task completion for the different algorithms. We also find that the dynamics of the division of labor are reflected not only within the completion of one task but also across different runs, by recording the dynamics of the labor division of the different algorithms under different random seeds. Specifically, for each random experiment, we count the amount of waste cleaned by each agent, similar to Figure 9(c) and (f). If an agent cleans only a small amount of waste (close to 0), its picker counter is incremented by one; otherwise, its cleaner counter is incremented by one. Figure 11 shows, averaged over 10 random seeds, how the 10 agents are assigned the waste-cleaning role by the different algorithms. As seen from the figure, across the different random experiments, the SVO-based methods, LToS, D3C, and RESVO, all show a more static division of labor than the sanction-based LIO. However, LToS and D3C converge to a suboptimal static division of labor: most of the agents are assigned the role of collecting apples. Both LIO and RESVO converge to a better division of labor, in which more agents choose to clean up waste, but the former (LIO) keeps reassigning the roles dynamically. Combined with the results of Figure 10, it can be seen that in tasks involving more complex optimal division-of-labor patterns, the static division of labor learned by RESVO can be more efficient than the dynamic division of labor in LIO. To more intuitively demonstrate the policies learned by the different algorithms in the 10-player Cleanup, we selected keyframes from the rendered results, as shown in Figures 12 and 13. The arrows in the figures indicate reward sharing. The policies shown in the figures match the performance results and related analysis presented earlier. The results of adding the SVO-based role emergence mechanism and the role-conditional policy based on the mutual information constraint to the other SVO-based methods (LToS+r and D3C+r) are shown in Figure 10(b). We conduct different randomized experiments in the 10-player Cleanup environment under 10 different random seeds and count the average extrinsic reward of the agents after the algorithm converges for each experiment. As can be seen from the figure, both the LToS+r algorithm and the D3C+r algorithm show significant and stable performance improvements. ### The Impact of Rank Constraints Finally, we perform an ablation analysis of the rank constraint in RESVO. Intuitively, the number of roles, or the pattern of the division of labor, primarily affects the completion of tasks. The rank constraint \(k\) represents a priori knowledge of the optimal number of roles in the task. In the IPD, the 3-player Escape Room, and the Cleanup tasks of different complexity, we set \(k\), or the number of roles, to 1, 2, and 2, respectively. In this section, we want to verify the sensitivity of RESVO to the hyperparameter \(k\). 
Similarly, we conduct 10 randomized experiments in the 10-player Cleanup environment for different values of \(k\) and count the average extrinsic reward of the agents; the results are shown in Figure 10(c). It can be seen from the results in the figure that the RESVO algorithm is sensitive to the size of \(k\). When the chosen \(k\) is too large or too small, the performance decreases significantly. The optimal number of roles in the 10-player Cleanup environment should be 2. However, it can be seen from the figure that when \(k\) is set to \(3\) or \(4\), the algorithm can also show promising results, indicating that the RESVO algorithm has good robustness _near_ the optimal \(k\). Figure 12: Selected keyframes from the rendered results of RESVO and D3C in the 10-player Cleanup. Arrows indicate reward sharing. Figure 13: Selected keyframes from the rendered results of LToS and LIO in the 10-player Cleanup. Arrows indicate reward sharing. We then present a more in-depth visual analysis of the SVO-based embeddings of the emergent roles, which more intuitively shows the robustness of the RESVO algorithm around the optimal value of \(k\). Figures 14-17 show the SVO emergence during the training procedure of RESVO under different rank constraints in Cleanup with a map size of \(48\times 18\) and \(10\) agents. We visualize the SVO of each agent at the first timestep of a particular episode. We randomly map the SVO embedding of each agent to a 2D space using a fixed random neural network. This dimensionality-reduction method ensures that similar SVOs are also close together in the 2D space. As seen from Figure 14, when the rank constraint is very low, all agents learn similar SVOs, that is, similar roles. This either means that all agents are free-riders or that all agents have a composite role that needs to both collect apples and clean up waste. In either case, the overall performance is poor. When the size of the rank constraint is within a reasonable range, the algorithm can converge to the best performance, as shown in Figures 15 and 16. In the previous results, the algorithm converged to a better result when the rank constraint was near the optimal value (\(k=2\)). Through the visualization results in Figures 15 and 16, we can propose an empirical explanation. It can be seen from the two figures that although the agents are divided into more roles when \(k=4\), the similarity of these roles varies: some roles are more similar, and others are less similar. Therefore, in the Cleanup, when \(k=4\), two of the roles may show a similar division of labor, so the performance is not affected when there are more roles. However, when the rank constraint increases further (\(k=9\)), the situation worsens. As shown in Figure 17, when the agents have too many roles, their SVOs present strong randomness, which makes the policies of the agents constrained by the SVOs also present greater randomness. In the Cleanup task, the number of agents that collect apples or clean up waste will then be small, and some agents will follow random policies and do useless work, which does not improve the algorithm's performance. Figure 14: SVO-based role emergence with the rank \(k=1\) constraint. Figure 15: SVO-based role emergence with the rank \(k=2\) constraint. Figure 16: SVO-based role emergence with the rank \(k=4\) constraint. Figure 17: SVO-based role emergence with the rank \(k=9\) constraint. 
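The 2D visualizations in Figures 14-17 rely on a fixed random mapping of each agent's SVO embedding. The sketch below substitutes a fixed random linear projection for the fixed random neural network used in the paper (a simplification made purely for brevity); either choice keeps the map constant across training so that similar embeddings land close together in the plane.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
embedding_dim, n_agents = 10, 10

# Fixed once and reused for every snapshot, so points are comparable across training.
projection = rng.standard_normal((embedding_dim, 2))

# Placeholder embeddings standing in for the rows of W_eta at the first timestep of an episode.
svo_embeddings = rng.standard_normal((n_agents, embedding_dim))

points = svo_embeddings @ projection
plt.scatter(points[:, 0], points[:, 1])
for i, (x, y) in enumerate(points):
    plt.annotate(f"agent {i}", (x, y))
plt.title("SVO-based role embeddings projected to 2D")
plt.show()
```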
## Closing Remarks In this paper, we introduce a typical mechanism of human society, i.e., the division of labor, to solve intertemporal social dilemmas (ISDs) in multi-agent reinforcement learning. A novel learning framework (RESVO) transforms role learning into a social value orientation emergence problem. RESVO solves it symmetrically by endowing agents with altruism so that they learn to share rewards with other agents with different weights. Numerical experiments on three tasks of different complexity in the presence of ISDs show that RESVO can make stable roles emerge and efficiently solve the ISD through the division of labor. Meanwhile, the SVO-based role emergence mechanism in RESVO can be introduced into other SVO-based methods as a plug-and-play module and brings a significant performance boost. For the classic example, the iterated prisoner's dilemma, we provide formal analyses and numerical results for the effect of social value orientation and division of labor in Appendix C. However, for more complex ISDs, i.e., the \(N\)-Player Escape Room and Cleanup, we only provide empirical results in Section 4. We would like to formally analyze the evolutionary dynamics of environmental behaviors, social value orientations, and roles in ISDs. However, the environmental policies, roles, and sharing policies are interdependent, and it is not easy to accurately account for the effect of the division of labor when considering environmental policy updates. These challenges have perplexed researchers for a long time (Hirsch et al., 2012; Gould et al., 2016; Dong et al., 2021), and we believe that the solution to these questions is an essential and promising direction for future work. In the experiments, we find that the sanction-based LIO and the SVO-based LToS, D3C, and our RESVO take completely different approaches to maintaining the division of labor. The former achieves a dynamic division of labor by continuously passing rewards among the agents, while the latter three achieve a static division of labor with much sparser reward passing. In the IPD, 3-player Escape Room, and Cleanup tasks of varying complexity involved in the experiments, we find that the static division of labor exhibits better performance than the dynamic division of labor in tasks where the same role corresponds to multiple agents, such as the 3-player Escape Room and the 10-player Cleanup, converging faster to better social welfare. However, in this paper, we draw the above conclusions based only on the social welfare achieved in a limited set of tasks. We believe the dynamic division of labor may be more advantageous than the static division of labor in specific tasks and under certain evaluation metrics. For example, a dynamic division of labor may be more robust in tasks that involve roles that change dynamically; furthermore, a static division of labor may pose fairness issues because some agents receive lower extrinsic rewards than others. We leave the above questions for future exploration.
2309.00094
Thermoacoustic Instability Suppression and Heat-Release Forcing of a Laminar Flame Using Ionic Wind
Advancements in combustion technologies are often impeded by complex combustion dynamics. Active control has proven effective at mitigating these dynamics in the lab, but mass adoption requires more affordable, lightweight, and reliable actuators. Here, a new actuator concept is presented which utilizes sub-breakdown electric fields, the inherent plasma nature of flames, and the electrohydrodynamic effect to create flame stabilization points. These electrically controlled stabilization points allow variable distortion of a laminar flame and bidirectional forcing of the flame heat release. The electric field-based actuator is combined with a simple feedback controller to demonstrate suppression of a thermoacoustic instability. The instability sound pressure level was reduced by 27 dB and in less than 60 ms upon enabling the controller. The use of a sub breakdown electric field requires a mere 40 mW to stabilize a 3.4 kW thermal power flame. The absence of any moving parts and low electrical power required make this a promising actuator concept for many combustion applications.
Dustin L. Cruise, Aman Satija, Galen B. King
2023-08-31T19:27:22Z
http://arxiv.org/abs/2309.00094v1
# Thermoacoustic Instability Suppression and Heat-Release Forcing of a Laminar Flame Using Ionic Wind ###### Abstract Advancements in combustion technologies are often impeded by complex combustion dynamics. Active control has proven effective at mitigating these dynamics in the lab, but mass adoption requires more affordable, lightweight, and reliable actuators. Here, a new actuator concept is presented which utilizes sub-breakdown electric fields, the inherent plasma nature of flames, and the electrohydrodynamic effect to create flame stabilization points. These electrically controlled stabilization points allow variable distortion of a laminar flame and bidirectional forcing of the flame heat-release. The electric field-based actuator is combined with a simple feedback controller to demonstrate suppression of a thermoacoustic instability. The instability sound pressure level was reduced by 27 dB in less than 60 ms upon enabling the controller. The use of a sub-breakdown electric field requires a mere 40 mW to stabilize a 3.4 kW thermal power flame. The absence of any moving parts and the low electrical power required make this a promising actuator concept for many combustion applications. ## 1 Introduction Combustion, the process of burning fuels to release energy, is society's primary energy source but also stands as the largest contributor to greenhouse gas emissions. Many efforts are underway to reduce these emissions, but approaches are commonly impeded by complex combustion dynamics that manifest as difficulties with flame anchoring, flashback, or thermoacoustic instabilities [1]. Active control, which forces the flame with various forms of actuators, has proven effective at addressing these issues but has not seen widespread adoption due to actuator weight, cost, and reliability concerns [2]. A promising alternative utilizes electric fields to affect flames through their inherent plasma nature [3]. Flames are weak plasmas containing charged particle densities on the order of 10\({}^{9\text{--}10}\) cm\({}^{-3}\)[4]. In CH4/air flames, the dominant positive charge carrier (cation) is hydronium (H\({}_{3}\)O\({}^{+}\)), while the dominant negative charge carrier is the electron [5, 6]. Application of an external electric field accelerates the charged particles through the Lorentz force. Elastic collisions between the accelerated cations and neutral gas molecules result in a considerable amount of momentum transfer that alters the flow field, an electrohydrodynamic effect referred to as Ionic Wind [7]. In a pair of studies by Ren et al. [8, 9], particle image velocimetry (PIV) and electric-field-induced second-harmonic generation (ESHG) captured ionic wind creating a local and controllable flow velocity reduction. Reducing the local flow velocity to the laminar flame speed caused the flame to propagate and stabilize in the induced low-velocity region, similar to flame stabilization in the recirculation region of an aerodynamic bluff-body. The speed with which the flame front propagated upstream in response to a step increase of the electric field strength was also captured. Analysis showed the flame front propagated following standard laminar flame mechanics, moving at a speed equal to the difference between the laminar flame speed and the local flow velocity. This indicated that the primary effect of sub-breakdown electric fields in premixed CH4/air flames is the electrohydrodynamic effect.
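The stabilization mechanism summarized above can be illustrated with a one-line calculation: where the ionic wind lowers the local flow velocity, the flame front advances upstream at the difference between the laminar flame speed and that local velocity, and anchors where the two balance. The numbers below are illustrative placeholders, not values measured in this work.

```python
S_L = 0.40            # laminar flame speed [m/s], illustrative for a CH4/air mixture
u_flow = 0.60         # undisturbed local flow velocity [m/s], illustrative
du_ionic = 0.30       # ionic-wind-induced velocity reduction [m/s], illustrative

u_local = u_flow - du_ionic   # flow velocity inside the field-affected region
v_front = S_L - u_local       # lab-frame flame-front propagation speed (positive = upstream)

print(f"u_local = {u_local:.2f} m/s, front speed = {v_front:.2f} m/s")
print("front advances upstream until u_local rises back to S_L" if v_front > 0
      else "front is swept downstream")
```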
2302.14393
A modified probabilistic amplitude shaping scheme to use sign-bit-like shaping with a BICM
On the one hand, sign-bit shaping is a popular shaping scheme where the conditional probability of the sign bit is made non-equiprobable. On the other hand, probabilistic amplitude shaping (PAS) is a popular coding scheme, to combine shaping and a bit-interleaved coded modulation (BICM), where the sign bit should not be involved in the shaping. Indeed, with the PAS scheme the sign bit is the parity bit, i.e., the output of the systematic error-correcting code. As a result, sign-bit shaping has been used with multilevel coded modulations rather than BICM. In this paper, we show that with minor modifications it is possible to use sign-bit-like shaping with a BICM. Simulation results are provided with the 5G NR LDPC BICM scheme.
Vincent Corlay, Hamidou Dembele
2023-02-28T08:22:40Z
http://arxiv.org/abs/2302.14393v1
# A modified probabilistic amplitude shaping scheme to use sign-bit-like shaping with a BICM ###### Abstract On the one hand, sign-bit shaping is a popular shaping scheme where the conditional probability of the sign bit is made non-equiprobable. On the other hand, probabilistic amplitude shaping (PAS) is a popular coding scheme, to combine shaping and a bit-interleaved coded modulation (BICM), where the sign bit should not be involved in the shaping. Indeed, with the PAS scheme the sign bit is the parity bit, i.e., the output of the systematic error-correcting code. As a result, sign-bit shaping has been used with multilevel coded modulations rather than BICM. In this paper, we show that with minor modifications it is possible to use sign-bit-like shaping with a BICM. Simulation results are provided with the 5G NR LDPC BICM scheme. Probabilistic shaping, sign-bit shaping, bit-interleaved coded modulation. ## I Introduction The channel capacity characterizes the highest information rate that can be achieved for a fixed average transmit power while maintaining a small error probability. When standard constellations are used, such as the amplitude-shift keying (ASK), the channel capacity cannot be reached if each symbol is transmitted with equal probability. Hence, the transmitter should process the data such that the symbols of the constellation are transmitted according to a probability distribution which enables to approach the capacity. This operation is called probabilistic shaping. In addition to shaping, the message should also be protected with an error-correcting code. Combining shaping and coding is not trivial and requires a specific algorithm. There exist two main techniques to build high-rate coded modulations: the BICM [3][1] and multilevel coding [9][13][5]. The popular PAS scheme [1] (see Section II), to combine shaping and coding, uses a BICM. With the PAS scheme, the parity bits, at the output of the error-correcting code, are used as sign bits (i.e., the bit determining the sign of the symbols). Consequently, the sign bit cannot be considered for the shaping operation as its distribution is independent of the value of the other labelling bits of the symbol. Nevertheless, "sign-bit shaping" is a popular shaping technique where the conditional distribution of the sign bit is made non-equiprobable [7][13][2][4]. Hence, it cannot be considered as a distribution matcher (DM) for the PAS scheme. As a result, (to the best of our knowledge) sign-bit shaping has always been considered jointly with multilevel coding1. As an example, in our previous paper on sign-bit shaping [4], a reviewer asked us to explicitly state that the proposed technique is restricted to a multilevel coding scheme. Footnote 1: Sign-bit shaping is conveniently implemented with multilevel coding as the last level (the sign-bit level with natural labelling) does not need to be coded. Indeed, the mutual information of the last bit level equals the entropy. In this paper, we propose modifications to the conventional PAS scheme (Section III) and to the probabilistic sign-bit shaping scheme proposed in [4] (Section IV). These modifications enable to use the paradigm of sign-bit shaping with a BICM. We also describe a mechanism to make the shaping scheme compatible with the puncturing of systematic bits, as done e.g., in the current 5G NR LDPC BICM scheme (Section III-C). 
## II The PAS scheme The principle of PAS, illustrated on Figure 1, is the following: A DM outputs symbols according to one side (negative or positive) of the target shaping distribution. Then, the bits corresponding to the labelling of the symbols2, without the sign bit, are used as inputs of a systematic error-correcting code. The encoding process outputs parity bits, one per symbol, which determine the sign of the symbols to be transmitted. The key ideas underlying PAS are the following: Footnote 2: Each symbol is labelled with several bits. See Figure 2 for an example, where bit level 4 is the sign bit. * Since systematic encoding is used, the distribution of the shaping bits is not changed by the error-correcting code. They can be non-i.i.d. * The parity bits of an error-correcting code have an equiprobable distribution [12, Theorems 1,2]. This is suited to symmetric shaping distributions: The symbols have the same probability of being positive or negative, and the sign bits should therefore remain equiprobable. Consequently, the PAS scheme successfully combines shaping and coding. Fig. 1: PAS scheme. The DM generates symbols according to one side of a target shaping distribution. The function \(b(\cdot)\) outputs the labelling bits of a given symbol \(x_{i}\). The block "P" computes and outputs the parity bits (i.e., implements the systematic error-correcting code). The function \(s(\cdot)\) outputs the sign corresponding to a bit \(b_{s}^{i}\). ## III The modified PAS scheme ### _The quantized Maxwell-Boltzmann-like distribution_ The constellation \(\mathcal{X}\) considered in this paper is an \(M\)-ASK constellation. The symbols of an \(M\)-ASK constellation, where \(M=2^{m}\), are \[\mathcal{X}=\{-2^{m}+1,\ldots,-3,\ -1,+1,+3,\ldots,+2^{m}-1\}. \tag{1}\] The discrete Maxwell-Boltzmann (MB) distribution, which is a quasi-optimal input distribution for the symbols of \(\mathcal{X}\) on the Gaussian channel [10], can be quantized at the cost of a negligible performance loss [8][4]. As an example, the distribution of the 16-ASK constellation shown on Figure 3 (left) exhibits quasi-optimal performance in terms of mutual information: The loss is less than 0.1 dB for information rates smaller than 3 bits per channel use (bpcu), see Figure 4 in [4]. Consequently, this quantized MB-like distribution can be taken as the target shaping distribution. ### _Using the quantification bit as parity bit_ #### III-B1 The principle The quantized constellation \(\mathcal{X}\) can be expressed as the union of two shifted versions of a reference sub-constellation, say \(\mathcal{X}_{r}\): \[\mathcal{X}=\mathcal{X}_{r}\ \cup\ (\mathcal{X}_{r}+2). \tag{2}\] With the 16-ASK, \(\mathcal{X}_{r}=\{-15,-11,-7,-3,1,5,9,13\}\). Moreover, a transmitted symbol belongs equiprobably to one of the two sub-constellations. Consequently, one can proceed as follows: * First, perform the shaping of the target sub-constellation \(\mathcal{X}_{r}\), shown on Figure 3 (right). * Then, obtain the \(m-1\) bits corresponding to each shaped symbol. Use them as input of a systematic error-correcting code. * Finally, use the parity bit as quantification bit, i.e., to decide whether the symbol belongs to the first or the second sub-constellation. Consequently, the major difference between the PAS scheme and the proposed scheme is the following: The parity bit discriminates between the two sub-constellations. It does not determine the sign of the symbols. 
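The decomposition (2) and the role of the quantification bit can be made concrete with a short sketch: the \(M\)-ASK alphabet is split into the reference sub-constellation \(\mathcal{X}_{r}\) and its copy shifted by 2, and the parity bit produced by the systematic encoder selects the copy from which the shaped symbol is transmitted. This illustrates only the selection step; the DM and the encoder are not modeled, and the convention that a parity bit of 0 selects \(\mathcal{X}_{r}\) is an arbitrary choice made here.

```python
# Modified PAS selection step for an M-ASK (illustrative sketch, 16-ASK as in the paper).
m = 4
M = 2 ** m
X = list(range(-M + 1, M, 2))          # {-15, -13, ..., 13, 15}
X_r = X[0::2]                          # reference sub-constellation {-15, -11, ..., 9, 13}
assert sorted(X_r + [x + 2 for x in X_r]) == X   # X = X_r U (X_r + 2), cf. (2)

def transmit(x_r, quantification_bit):
    """Map a shaped symbol of X_r and the parity (quantification) bit to the transmitted symbol."""
    return x_r if quantification_bit == 0 else x_r + 2

# Example: the DM outputs -3 from X_r; the systematic encoder's parity bit picks the copy.
print(transmit(-3, 0), transmit(-3, 1))   # -> -3 -1
```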
#### III-B2 Gray labelling If the coding scheme is a BICM, the labelling of the symbols has a significant impact on the performance. For instance, Figure 8 in Sec VLC of [1] reports a 1 dB loss with natural labelling compared to Gray labelling. We also observed this difference in our simulations. Consequently, unlike in [4] where natural labelling (suited to a multilevel coding scheme) is used, Gray labelling of the symbols must be considered. For \(\mathcal{X}\) chosen as the \(M=16\)-ASK, this latter labelling is provided in Figure 2 (top). Natural labelling is also shown on the figure (bottom). As with natural labelling, given a symbol \(x_{i}\in\mathcal{X}_{r}\), the bit \(b_{1}\) discriminates between the two sub-constellations: All adjacent symbols with the same probability (according to Figure 3 (left)) have a different value for \(b_{1}\). Consequently, the bits \(b_{2}\),\(b_{3}\),\(b_{4}\) are used to label the symbols in \(\mathcal{X}_{r}\). However, unlike with natural labelling, the rule to discriminate between the two sub-constellations depends on the value of \(b_{2}\),\(b_{3}\),\(b_{4}\). Given \(x\in\mathcal{X}_{r}\): * If \(b_{1}\oplus b_{2}\oplus b_{3}\oplus b_{4}=0\) then transmit \(x\). * If \(b_{1}\oplus b_{2}\oplus b_{3}\oplus b_{4}=1\) then transmit \(x+2\). This modified PAS scheme is illustrated on Figure 4. This trick enables, for instance, the use of sign-bit shaping via trellis shaping with a BICM (described in [7] and applied in the famous paper [13] with multilevel coding). In the following section, we discuss the adaptation of another sign-bit shaping scheme: probabilistic sign-bit shaping as introduced in [4]. Note also that this modified PAS scheme allows the shaping of (quantized) non-symmetric distributions, which is not possible with the PAS scheme. Fig. 2: Gray labelling (top) and natural labelling (bottom) of a 16-ASK. Fig. 3: Left: Example of a quantized target shaping distribution for \(\mathcal{X}\). Right: Target shaping distribution for the sub-constellation \(\mathcal{X}_{r}\). Fig. 4: Modified PAS scheme. The DM generates symbols according to the distribution of the sub-constellation \(\mathcal{X}_{r}\). The function \(b(\cdot)\) outputs the labelling bits of a given symbol \(x_{i}\). The block "P" computes and outputs the parity bits. ### _Decreasing the code rate and puncturing systematic bits_ With the PAS scheme, the baseline rate of the code used is \(R=\frac{m-1}{m}\). Higher-rate codes can be considered by using some sign bits as systematic bits (see Sec. IV.D in [1]). However, it is not possible to use lower-rate codes with this standard scheme. Moreover, some BICM coding schemes include the puncturing of some systematic bits. This is the case for the 5G NR LDPC coding scheme, where the first systematic bits are always punctured (see e.g., Chap. 9 in [6]). We show below how the quantification bit enables us to address both issues. Let us consider a shaping scheme which does not change the distribution of the sign bit and the quantification bit (i.e., performs the shaping via \(b_{2}\) and \(b_{3}\) on Figure 2). Moreover, let us assume that \(c\) systematic bits are punctured by the coding scheme. Let \(k\) be the number of symbols transmitted. Then, one can proceed as follows: * Use \(c\leq k\) sign bits and/or quantification bits as systematic bits (in addition to the other bits). * Put these bits at the systematic puncturing location. * Generate \(2k\) parity bits (with the channel code). * Puncture \(c\) systematic bits. 
* Put the parity bits at the proper location (those of \(b_{1}\) and \(b_{4}\) on Figure 5). Figure 5 summarizes the process in the scope of the 5G NR LDPC coding scheme. The rate of the code is \(R=\frac{(m-1-q)k+c}{mk+c}\leq\frac{m-1}{m}\), where \(q\geq 1\) is the number of quantification bits. Note that one could also have \(c^{\prime}\) bits \(b_{4}\), \(c<c^{\prime}\leq k\), instead of the exact number of systematic punctured bits \(c\), and diminish the number of generated parity bits accordingly. Fig. 5: Proposed scheme to decrease the rate of the binary code, for a 16-ASK. The right-most part is compliant with the 5G NR BICM scheme, where the 5G NR interleaver (row/column) is represented by a demultiplexer. The block "P" computes and outputs the parity bits (i.e., implements the systematic error-correcting code) and the puncturing is performed by the rate matcher (RM). The only difference with the standard, in this right-most part, is the block \(\pi\), which puts some parity bits at the location of the punctured systematic bits. ## IV Modified probabilistic sign-bit shaping We now discuss a specific implementation of the DM of Figure 4 to implement the target distribution of \(\mathcal{X}_{r}\). With sign-bit shaping3, the probability of the sign bit depends on the values of the bits at the previous levels. Sign-bit shaping for the distribution of \(\mathcal{X}_{r}\) can be realized as follows. Footnote 3: See [4] for a more detailed explanation of sign-bit shaping. As mentioned above, since this sub-constellation has only 8 symbols (or more generally \(M^{\prime}=2^{m^{\prime}}=M/2\) symbols, with \(m^{\prime}=m-1\)), only \(b_{2}\),\(b_{3}\),\(b_{4}\) are used for the labelling. Let us first consider the natural labelling, shown in Figure 2 (bottom), as in [4]. Then, sign-bit shaping consists in adapting \(p(b_{4}|b_{2},b_{3})\). The probability of each symbol \(x_{i}\in\mathcal{X}_{r}\) becomes \[\begin{split}&\forall\ 1\leq i\leq\frac{M^{\prime}}{2},\ p(x_{i})=p_{i}\cdot\left(\frac{1}{2}\right)^{m^{\prime}-1},\\ &\forall\ \frac{M^{\prime}}{2}+1\leq i\leq M^{\prime},\ p(x_{i})=(1-p_{i-\frac{M^{\prime}}{2}})\cdot\left(\frac{1}{2}\right)^{m^{\prime}-1},\end{split} \tag{3}\] where the parameters \(p_{i}=p(b_{4}|b_{2},b_{3})\), \(0\leq p_{i}\leq 1\), are to be optimized. An illustration is provided with \(\mathcal{X}_{r}\) on Figure 6. Since the target shaping distribution is symmetric, \(p_{3}\) and \(p_{4}\) are replaced by \(1-p_{2}\) and \(1-p_{1}\) on the figure. The distributions of Figure 3 are obtained with \(p_{1}=0.08\) and \(p_{2}=0.28\). Fig. 6: Sub-constellation \(\mathcal{X}_{r}\) with natural labelling and the corresponding values \(p_{i}\). Unfortunately, with Gray labelling this mapping does not hold. For instance, the two symbols of \(\mathcal{X}_{r}\) with \(b_{2}=0,b_{3}=0\) are -15 and 13, which should have the same probability. Nevertheless, if we replace \(p_{i}=p(b_{4}|b_{2},b_{3})\) by \(p_{i}=p(b_{3}|b_{2},b_{4})\) (or simply permute bit level 3 and bit level 4 in the Gray labelling), we get the mapping of Figure 7, which does not change the distribution of the symbols. As a result, we obtain a sign-bit-like shaping scheme where the probability of bit level 3 of the Gray labelling, conditioned on the values of \(b_{2}\) and \(b_{4}\), is non-equiprobable. With this new mapping, we see that \(p(b_{3}|b_{2},b_{4})=p(b_{3}|b_{2})\). Hence, the sign bit \(b_{4}\) can be removed from the shaping process. Note that this would not be the case if the target shaping distribution were not symmetric.
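To make (3) concrete, the sketch below evaluates the resulting distribution over the eight symbols of \(\mathcal{X}_{r}\) for the quoted values \(p_{1}=0.08\) and \(p_{2}=0.28\), with \(p_{3}=1-p_{2}\) and \(p_{4}=1-p_{1}\) as dictated by the symmetric target. The probability values follow directly from (3); the pairing of indices \(i\) with particular symbols assumes the natural-labelling ordering of Figure 6, which is not reproduced here.

```python
# Shaped distribution on X_r from (3), with m' = 3 (8 symbols) and the quoted p_i values.
p1, p2 = 0.08, 0.28
p = [p1, p2, 1 - p2, 1 - p1]                 # p3 = 1 - p2, p4 = 1 - p1 (symmetric target)

scale = 0.5 ** 2                             # (1/2)^(m' - 1) with m' = 3
probs = [pi * scale for pi in p] + [(1 - pi) * scale for pi in p]

X_r = [-15, -11, -7, -3, 1, 5, 9, 13]        # assumed index-to-symbol ordering (cf. Figure 6)
for x, q in zip(X_r, probs):
    print(f"P(x = {x:3d}) = {q:.2f}")
print("total probability:", round(sum(probs), 6))   # -> 1.0
```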
### _Implementation_ Regarding the implementation of the "sign-bit-like shaping" DM, we proceed as follows: The second and fourth bit levels are equiprobable and independent. Therefore, a binary source \(S_{0}\) outputs two equiprobable bits. The binary source for \(b_{3}\) is chosen based on the value of \(b_{2}\) (and is independent of the values of \(b_{1}\) and \(b_{4}\)), i.e., \(S_{1}\) has a distribution \(p_{1}=p(b_{3}|b_{2}=0)\) and \(S_{2}\) a distribution \(p_{2}=p(b_{3}|b_{2}=1)\). Then, a symbol mapping outputs the symbols \(x_{i}\in\mathcal{X}_{r}\) based on the values of \(b_{2}^{i},b_{3}^{i},b_{4}^{i}\) and according to the Gray labelling. Finally, \(x_{i}\) is shifted if \(b_{1}^{i}\oplus b_{2}^{i}\oplus b_{3}^{i}\oplus b_{4}^{i}=1\). The full system is shown on Figure 8 (with \(c^{\prime}=k\)). ## V Simulation results For the simulations, we use the 5G NR systematic LDPC code [11]. The baseline rate of the code is 1/3. As mentioned above, this rate is obtained by (always) puncturing the first \(2Z\) systematic bits, where \(Z\) is the lifting value, which depends on the block length used [14]. Hence, the standard PAS scheme cannot be used (without altering the distribution of the symbols) with the 5G NR LDPC coding scheme. We need the trick of Section III-C. Rate matching, to increase the rate, is done as specified in TS 38.212 [14] by discarding the last parity bits. As reported e.g., in R1-1706971 (by Huawei) [15] (and confirmed by our own simulations), for a block length \(n=7875\) bits and a rate \(R=0.75\) (non-BICM case), the considered code achieves a block error rate of 10\({}^{-2}\) at an SNR of approximately 4 dB (1.4 dB away from the Shannon limit). For the implementation of the shaping, we puncture the \(2Z\) first systematic bits and keep \(k+2Z\) parity bits after the RM. The extra \(2Z\) parity bits are used as the missing sign bits \(b_{4}\). Figure 9 presents the simulation results for a rate \(R=2.63\) bpcu with a 16-ASK. The blue curve shows the performance of the standard 5G NR LDPC BICM scheme. The red curve shows the performance of the same system with shaping as described in the paper. We observe a gain of approximately 0.9 dB. This is consistent with what is expected: The information-theoretic study (see Figure 4 in [4]) tells us that the optimal shaping gain at this rate is 1 dB and 0.1 dB is lost due to the quantized shaping distribution. Fig. 8: Modified PAS scheme with sign-bit-like shaping. The sources \(S_{0}\), \(S_{1}\), and \(S_{2}\) generate a bit equal to 0 with probability \(1/2\), \(p_{1}\), and \(p_{2}\), respectively. The non-equiprobable sources \(S_{1}\) and \(S_{2}\) can be obtained from the binary source \(S_{0}\) via binary DMs (see also [4]). Fig. 9: Performance of the 5G NR LDPC BICM scheme with and without shaping for a rate \(R=2.63\) bpcu and block length \(n=7875\) bits. ## VI Conclusions In this paper, we showed how the quantized target distribution adds flexibility to the PAS scheme: 1-It enables the use of the quantification bit as parity bit and thus the use of the sign bit as a shaping bit if needed. 2-It can be used to decrease the rate of the code. 3-It also allows an i.i.d. systematic bit, useful in the case of systematic bit puncturing (as e.g., in the 5G NR coding scheme). 
Moreover, we explained how probabilistic sign-bit shaping, originally used with natural labelling and thus multilevel coding, can be adapted to Gray labelling suited to a BICM. Finally, simulation results are provided with the 5G NR LDPC BICM scheme.
2309.11199
A Numerical Study of Relativistic Oblique Shock Reflection
Shocks are ubiquitous in astrophysical sources, many of which involve relativistic bulk motions, leading to the formation of relativistic shocks. Such relativistic shocks have so far been studied mainly in one dimension, for simplicity, but the complex nature of the relevant astrophysical flows often requires higher dimensional studies. Here we study the two-dimensional problem of the reflection of a planar shock off of a wall for a general incidence angle and a cold unshocked medium. We use primarily relativistic hydrodynamic numerical simulations, and elaborately compare the results to an analytic treatment. The simulations are performed both in the rest frame S of the unshocked fluid, where the dimensionless proper speed of the singly shocked fluid is $u_1=\Gamma_1\beta_1$ and the shock incidence angle is $\alpha_1$, and in the rest frame S$^\prime$ of the point P of intersection of the incident shock and the wall for regular reflection (RR). Good agreement is obtained between the simulations in these two frames and with the analytic solution. The establishment of a steady flow in frame S$^\prime$ is explored, along with the transition between the strong and weak shock RR solutions. The transition line between RR and Mach reflection (MR) is studied numerically in the $u_1$-$\alpha_1$ plane and found to coincide with the analytic detachment/sonic line. The flow properties along the sonic line are investigated in detail, focusing on how they vary between the Newtonian and relativistic limits.
Prasanta Bera, Jonathan Granot, Michael Rabinovich, Paz Beniamini
2023-09-20T10:36:30Z
http://arxiv.org/abs/2309.11199v2
# A Numerical Study of Relativistic Oblique Shock Reflection ###### Abstract Shocks are ubiquitous in astrophysical sources, many of which involve relativistic bulk motions, leading to the formation of relativistic shocks. Such relativistic shocks have so far been studied mainly in one dimension, for simplicity, but the complex nature of the relevant astrophysical flows often requires higher dimensional studies. Here we study the two-dimensional problem of the reflection of a planar shock off of a wall for a general incidence angle and a cold unshocked medium. We use primarily relativistic hydrodynamic numerical simulations, and elaborately compare the results to an analytic treatment. The simulations are performed both in the rest frame S of the unshocked fluid, where the dimensionless proper speed of the singly shocked fluid is \(u_{1}=\Gamma_{1}\beta_{1}\) and the shock incidence angle is \(\alpha_{1}\), and in the rest frame S\({}^{\prime}\) of the point P of intersection of the incident shock and the wall for regular reflection (RR). Good agreement is obtained between the simulations in these two frames and with the analytic solution. The establishment of a steady flow in frame S\({}^{\prime}\) is explored, along with the transition between the strong and weak shock RR solutions. The transition line between RR and Mach reflection (MR) is studied numerically in the \(u_{1}\) - \(\alpha_{1}\) plane and found to coincide with the analytic detachment/sonic line. The flow properties along the sonic line are investigated in detail, focusing on how they vary between the Newtonian and relativistic limits. keywords: shock waves - relativistic processes - methods: numerical - hydrodynamics ## 1 Introduction A steady single-phase subsonic inviscid flow maintains a smooth variation over different locations, excluding interfaces or boundaries. However, supersonic fluid velocities (with relative speeds between different parts of the fluid that exceed the sound speed) may form a discontinuity in matter density, pressure and velocity, which is termed a _shock_. The location of the discontinuity (i.e., the shock) generally moves in space. The fluid crosses the shock from the upstream region to the downstream region and, in the process, its density, pressure and specific entropy increase (see e.g., Landau & Lifshitz, 1987; Thorne & Blandford, 2017). Rankine-Hugoniot conditions specify the relationship between the fluid variables across the discontinuity (Rankine, 1870; Hugoniot, 1887). In the rest frame of the upstream fluid, the shock front moves supersonically, and the downstream shocked fluid carries nonzero momentum, kinetic energy and thermal energy. Shocks are very abundant in terrestrial and astrophysical fluids in supersonic motion. In terrestrial phenomena, the motion of a fluid (e.g. air, water) can attain a speed larger than the respective sound speed in the medium and this can form a shock. The head-on interaction of this discontinuity with a rigid wall produces a reflected shock with a subsonic downstream region. In the case of an oblique incidence, the strength of the incident shock and the angle of incidence determine the characteristics of the reflection. When the reflected shock and the incident shock intersect at a reflection point P on the wall, it is said to be regular reflection (RR). Otherwise, it is considered to be irregular reflection (IR), the most common configuration of which is called a Mach reflection (MR). 
In the case of MR, there exists a triple point ahead of the wall, where three lines intersect: the incident shock, the reflected shock, and a Mach stem (Von Neumann, 1963; Courant & Friedrichs, 1948; Chester, 1954; Hornung, 1986; Olim & Dewey, 1992; De Rosa et al., 1992; Tabak & Rosales, 1994). For large values of the shock incidence angle (defined as the angle between the shock and the wall), only IR/MR is possible, whereas for small incidence angle values only RR is possible. Shock reflection of non-relativistic oblique shocks was investigated in different experimental setups (Heilig, 1969; Itoh et al., 1981; Henderson & Gray, 1981) and numerical studies (Mignone et al., 2007; Gvozdeva & Chulyunin, 2015; Wu et al., 2019). One of the main purposes of these studies was to pursue the transition criteria from RR to MR, and vice versa. Some theoretical criteria for this transition are known in the literature (see, e.g. Von Neumann, 1963; Hornung et al., 1979; Ben-Dor, 1987). For our purposes in this paper, the important criterion is the sonic criterion, which is discussed in detail below. In an astrophysical environment, a fluid element can achieve a speed close to the speed of light, \(c\), and form relativistic shocks, capable of generating significant radiation (Blandford & McKee, 1976). The microscopic properties of the fluid are affected by relativistic thermal particle motions as reflected in the equation of state (Taub, 1948; Thorne, 1973). Relativistic bulk motions have observational implications such as relativistic beaming effects (Rees, 1966; Gold, 1969). The strength (i.e. the Lorentz factor) of relativistic shocks may be inferred from the modeling of astrophysical objects, such as gamma-ray burst afterglows (Sari, 1997; Rees & Meszaros, 1998). Shocks play an important role in various astrophysical scenarios, such as: i) accretion by compact object (Salpeter, 1964) ii) free-falling accretion onto the surface of a star, iii) interaction of stellar wind with the interstellar medium, iv) high-velocity ejecta from explosive transients, e.g. a nova, supernova or magnetar giant flare, v) in relativistic jets or outflows, such as gamma-ray bursts (GRBs), micro-quasars, active galactic nuclei (AGN), tidal disruption events, fast radio bursts or pulsar wind nebulae (PWNe), where shocks can form either due to collisions between different parts of the outflow (internal shocks) or due to its interaction with the ambient medium (external shocks). These astrophysical sources form some regions with very high internal energy density and the shocks accelerate both thermal and non-thermal electrons that produce bright radiation. Therefore the shock dynamics play a significant role in generating the observable radiation from many astrophysical sources. Such astrophysical shocks may experience reflection by an obstacle. Some possible examples are: i) reflection of a supernova shock by the companion star in a binary stellar system (Istomin & Soloviev, 2008), ii) reflection of a GRB afterglow shock (Lamberts & Daigne, 2018), iii) reflection of shock formed at the magnetosphere by the stellar surface of a neutron star or the Sun, iv) reflection of a collimation shock at the jet-cocoon interface with a cocoon in the cylindrical phase (Adamson & Nicholls, 1958; Norman et al., 1982). To understand the underlying physics we can build a theoretical model relating the flow dynamics to the expected observable emission signatures. 
We follow a simplified approach to study the fluid dynamics of shock reflection in relativistic and non-relativistic regimes. We consider a perfectly reflecting wall as the reflector of the incident shocks. In particular, in this work we numerically study the reflection of an incident oblique shock having Newtonian up to relativistic speeds, at different incidence angles. From direct relativistic hydrodynamic numerical simulation, we identify the characteristics of the reflected shock and find the criteria of RR. We compare our numerical results to analytic results derived in a companion paper (Granot & Rabinovich, 2023, hereafter GR23) Initially, in SS 2, we describe the physical setup of the numerical experiments. The underlying basic mathematical formulation is presented in SS 3. Our results are presented in SS 4 and the conclusions are discussed in SS 5. ## 2 Physical setup ### Lab frame S & steady-sate frame S\({}^{\prime}\) The shock reflection is studied in two different frames of reference: i) the lab-frame S, where the unshocked region 0 is at rest, and ii) the moving frame S\({}^{\prime}\), where the flow is steady for RR (Figure 1). Initially, we set up the problem in the lab frame S. There are two shocks labeled 1 (incident shock) and 2 (reflected shock), which divide the flow into three regions, labeled 0, 1 and 2, corresponding to the number of times the fluid in each region was shocked. The unshocked region 0 is adjacent to a perfectly reflecting static wall and considered to be at rest in frame S (velocity \(v_{0}=0\)) and cold (pressure \(p_{0}/\rho_{0}c^{2}\ll\min(1,u_{1}^{2})\), where \(\rho_{0}\) is its proper rest-mass density and \(u_{1}=\Gamma_{1}\beta_{1}\) is the dimensionless proper speed of region 1). In frame S, the incident shock ('shock 1') moves with a velocity \(v_{s1}\) along its normal and makes an angle \(\alpha_{1}\) with respect to the wall. It can be thought of as generated by a piston moving at velocity \(v_{1}<v_{s1}\) and driving a shock with a velocity \(v_{s1}\) (see Fig. 1). The proper rest-mass density \(\rho_{1}\) and pressure \(p_{1}\) in region 1 are determined by the jump conditions of shock 1. The collision of the incident shock 1 with the wall creates a Figure 1: Schematic diagram of our setup for the shock reflection problem for RR, showing the location of the discontinuities. _Left_: In the lab frame S the unshocked cold fluid (region 0) is at rest and a piston moving at speed \(v_{1}\) at an angle of \(\alpha_{1}\) relative to a wall drives a shock (\(s1\)) into it (the shock front moving at speed \(v_{s1}\)) creating a singly shocked fluid region 1. The shock \(s1\) hits the wall producing a reflected shock (\(s2\)) with a shock front moving at speed \(v_{s2}\), and a doubly-shocked fluid region 2, whose velocity \(v_{2}\) is parallel to the wall. The point \(P\) where the two shocks intersect at the wall moves along the wall at a speed \(v_{p}=v_{s1}/\sin\alpha_{1}=v_{s2}/\sin\alpha_{2}\). _Right_: In the rest frame S\({}^{\prime}\) of point P the flow is steady, and the fluid velocity in regions 0 and 2 is parallel to the wall. This rest frame exists only in the sub-luminal regime where \(v_{p}<c\Leftrightarrow u_{s1}<\tan\alpha_{1}\) (\(u_{s1}\) being the proper speed of shock s1). reflected'shock 2', with a velocity \(v_{s2}\) along its normal, and making an angle \(\alpha_{2}\) with respect to the wall. 
A post-shock region 2 forms between the wall and shock 2, with proper rest-mass density \(\rho_{2}\), pressure \(p_{2}\) and velocity \(v_{2}\). As the incident shock 1 is oblique, \(\alpha_{1}>0\), it intersects the wall at a point P, which moves along the wall at a velocity \(\mathbf{v}_{p}\),whose magnitude is given by \[v_{p}=\frac{v_{s1}}{\sin\alpha_{1}}=\frac{v_{s2}}{\sin\alpha_{2}}. \tag{1}\] In the case of a semi-infinite steady oblique incident shock, the incident and reflected shocks move in a self-similar pattern with respect to point P. In the sub-luminal regime that corresponds to \(\beta_{p}=v_{p}/c<1\), one can transform (through a boost at \(-\mathbf{v}_{p}\)) to a rest frame S', in which for RR, point P is at rest and the flow is steady. In frame S\({}^{\prime}\), region 0 moves with a velocity \(\mathbf{v}_{0}^{\prime}=-\mathbf{v}_{p}\). Similarly, the velocity of region 1 and the incident shock are Lorentz-boosted by \(-\mathbf{v}_{p}\) from the S-frame values. The proper rest-mass densities \((\rho_{0},\rho_{1})\) and pressures \((p_{0},p_{1})\) are invariant. The angles formed by the incident and reflected shocks with the wall in S\({}^{\prime}\) are also Lorentz boosted, i.e., \[\tan\alpha_{i}^{\prime}=\frac{\tan\alpha_{i}}{\Gamma_{p}}\qquad(i=1,\,2)\, \tag{2}\] where \(\Gamma_{p}=(1-\beta_{p}^{2})^{-1/2}\). ### The General Structure of the Parameter Space We study oblique reflected shocks with different incidence angles, \(\alpha_{1}\), and different proper velocities of the incident fluid, \(u_{1}=\Gamma_{1}\beta_{1}\). Figure 2 shows the analytic expectation (as derived in GR23) for the different regions in the \(u_{1}\) - \(\alpha_{1}\) parameter space, and the critical lines that separate between them. This is displayed by showing \(\log_{10}(u_{1})\) in the \(y\)-axis versus \(\log_{10}(\tan\alpha_{1})\) in the \(x\)-axis. The luminal line (_in black_; defined by the condition \(v_{p}=c\) or, equivalently, \(u_{s1}=\tan\alpha_{1}\)) separates the super-luminal region (in _cyan shading_) and the sub-luminal regions. The sonic line for the weak shock RR solution (_in dashed blue_; defined by \(\beta_{2,w}^{\prime}=\beta_{c,2,w}\) where the subscript 'w' stands for the weak shock RR solution) is found (GR23) to almost coincide with the detachment line (_in dashed red_), which bounds the region with RR solutions. We shall therefore not make the distinction between them here, and refer mainly to the sonic line. The sonic line always lies in the sub-luminal region1 and separates between the sub-sonic (or detachment) region (_in white_), where there is no RR solution (but instead, there is IR - a more complicated type of shock reflection, such as MR), and the super-sonic (or attachment) regions. The region between these two critical lines - the sub-luminal supersonic (or attachment) region is marked in _green shading_. Footnote 1: This is since it corresponds to \(\beta_{p}=(\beta_{2,w}+\beta_{c_{s},2,w})/(1+\beta_{2,w}\beta_{c_{s},2,w})<1\), i.e. the sonic condition implies that \(v_{p}\) is equal to the lab-frame speed of a sound wave moving in region 2 parallel to the wall, which must therefore be less than \(c\). In the following, we find numerically that, as expected analytically, the sonic line bounds the region of RR. There can in principle also be a dual region where both MR and RR are possible for the same \((u_{1},\alpha_{1})\) values. Such a dual region borders the sonic line on the super-sonic side but is located well within the sub-luminal region. 
The fact that we do not find such a dual region might be since the RR weak shock solution is a more stable attractor solution, such that the MR solution is not found in the simulations, similar to the RR strong shock solution that is discussed in SS4.1.1. Therefore, the sonic line is of particular physical importance. While it was extensively studied in the Newtonian regime, it was not studied before in the relativistic regime. We study it here in detail, stressing the differences between the Newtonian and relativistic regimes, and how the system transitions between these two limits. We study shock reflection in the \((u_{1}-\alpha_{1})\) parameter space. We use the above-mentioned reference frames S and S\({}^{\prime}\). Figure 3 shows the points for which we performed special relativistic hydrodynamic numerical simulations (section 3) to obtain the outcome of the shock reflection by a wall. ## 3 Numerical Method The conservation equations for total mass, momentum and energy in the special theory of relativity may be written as: \[\partial_{\mu}(\mu\mu^{\mu}) =\frac{\partial(\rho\Gamma)}{\partial t}+\nabla\cdot(\rho\mathbf{u})= 0\, \tag{3}\] \[\partial_{\nu}T^{i\nu} =\frac{\partial(w\Gamma\mathbf{u})}{\partial t}+\nabla\cdot(w\mathbf{u} \mathbf{u}+p\mathbf{I})=0\,\] (4) \[\partial_{\nu}T^{0\nu} =\frac{\partial(w\Gamma^{2}-p)}{\partial t}+\nabla\cdot(w\Gamma \mathbf{u})=0\, \tag{5}\] where \(T^{\mu\nu}=wu^{\mu}u^{\nu}+p\eta^{\mu\nu}\) is the stress-energy tensor, \(\eta^{\mu\nu}\) is the Minkowski metric, \(u^{\mu}\) is the 4-velocity, \(\mathbf{u}=\Gamma\mathbf{\beta}=u\hat{\mathbf{u}}\) is the proper velocity of the fluid, \(\mathbf{\beta}=\nabla/c=\beta\beta\), \(\Gamma=(1-\beta^{2})^{-1/2}\) is the Lorentz factor, \(\rho\) is the proper rest mass density, \(p\) is the pressure, \(w=e+p=\rho c^{2}+e_{\rm int}+p\) is the proper enthalpy density, and \(e\) (\(e_{\rm int}\)) is the proper (internal) energy density. Here \(\frac{\partial}{\partial t}\), \(\nabla\) and \(\mathbf{I}\) are the time derivative, spatial derivative and the unit \(3\times 3\) matrix, respectively. In the presence of a 1D shock, the fluid variables on its Figure 2: The different regions and critical lines in the \(u_{1}\) –\(\alpha_{1}\) parameter space, shown in terms of \(\log_{10}(u_{1})\) versus \(\log_{10}(\tan\alpha_{1})\). The luminal line (_in black_; \(\beta_{p}=1\Leftrightarrow u_{s1}=\tan\alpha_{1}\)) separates the super-luminal region (_cyan shading_) and the sub-luminal attachment region (_green shading_), which is in turn separated from the detached region (_in white_, where there is no regular reflection – RR) by the detachment line (_in dashed red_), which almost coincides with the sonic line for the weak RR solution (_in dashed blue_; \(\beta_{2,w}^{\prime}=\beta_{c,2,w}\), see GR23 for details). two sides (0,1: pre- & post-shock regions) satisfy the following (Rankine-Hugoniot) jump conditions conditions, \[\rho_{0}\Gamma_{0,s1}\beta_{0,s1} =\rho_{1}\Gamma_{1,s1}\beta_{1,s1}\, \tag{6}\] \[w_{0}\Gamma_{0,s1}^{2}\beta_{0,s1}^{2}+p_{0} =w_{1}\Gamma_{1,s1}^{2}\beta_{1,s1}^{2}+p_{1}\,\] (7) \[w_{0}\Gamma_{0,s1}^{2}\beta_{0,s1} =w_{1}\Gamma_{1,s1}^{2}\beta_{1,s1}\, \tag{8}\] where quantities relating to the upstream (pre-shock) region and the downstream (post-shock) region are denoted by subscripts 0 and 1, respectively. 
These jump conditions may be obtained by equating the fluxes of matter, momentum and energy on the two sides of the shock, in the frame where the shock front is at rest and the fluid velocities are normal to it. The velocities \(\beta_{0,s1}\) and \(\beta_{1,s1}\) are those of regions 0 and 1 in the shock 1 rest-frame, while \(\Gamma_{i,s1}=(1-\beta_{i,s1}^{2})^{-1/2}\) are the corresponding Lorentz factors. The pressure, density and normal component of velocity are discontinuous across the shock. To solve the above set of fluid equations and to obtain the values of downstream fluid for the given upstream values we need to provide the equation of state (EoS). To capture the relativistic and non-relativistic regimes we consider the following equation (Mignone & McKinney, 2007): \[(h-\Theta)\left(h-4\Theta\right)=1\, \tag{9}\] where \(\Theta=p/\rho c^{2}\) and the enthalpy per unit rest energy \(h\) and the effective adiabatic index \(\hat{\gamma}\) are give by \[h = 1+\frac{\hat{\gamma}\Theta}{\hat{\gamma}-1}=\frac{5}{2}\Theta+ \sqrt{1+\frac{9}{4}\Theta^{2}}\, \tag{10}\] \[\hat{\gamma} = \frac{\frac{\partial h}{\partial\Theta}}{\frac{\partial h}{ \partial\Theta}-1}=\frac{1}{6}\left(8-3\Theta+\sqrt{4+9\Theta^{2}}\right). \tag{11}\] This EoS satisfies the Taub (1948) inequality of relativistic matter. The corresponding dimensionless sound speed, \(\beta_{c_{s}}=c_{s}/c\), is given by (Ryu et al., 2006) \[\beta_{c_{s}}^{2}=\frac{\partial p}{\partial e}=\frac{\Theta}{h}\frac{\frac{ \partial h}{\partial\Theta}}{\frac{\partial h}{\partial\Theta}-1}=\frac{3 \Theta^{2}+5\Theta\sqrt{\Theta^{2}+4/9}}{12\Theta^{2}+2+12\Theta\sqrt{\Theta^{ 2}+4/9}}. \tag{12}\] Here we aim to find the values of the downstream quantities from the direct hydrodynamic simulations. To this end we study the shock reflection in frames S & S' described above. ### Numerical setup in frame S In order to study the shock reflection problem described in SS 2 in frame S, we set up the incident shock s1 in this frame by prescribing regions 0 and 1 in the computation domain. We then numerically solve the time evolution of the computation domain, identifying the different regions and the critical lines that separate them. In particular, we track the formation of the reflected shock s2 and the doubly shocked region 2 as it is described in SS 2.1. To reduce the artifacts from the numerical scheme, we avoid ultra-low values of pressure in region 0 and choose a moderately low value of \(\Theta_{0}=p_{0}/\rho_{0}c^{2}\sim 10^{-9}\) to represent a cold medium of region 0. The fluid is at rest in region 0 (\(\mathbf{v}_{0}=0\)) while the velocity of region 1 is \(\mathbf{v}_{1}\). The pressure (\(p_{1}\)) and proper rest-mass density (\(\rho_{1}\)) in region 1, as well as the velocity of shock s1 along its normal in frame S (\(\mathbf{v}_{s1}\)) are obtained by solving the shock jump conditions (equations (6)-(8)). We use the pluto code (Mignone et al., 2007) to solve the hydrodynamic equations (3)-(5) in a fixed linear spaced grid. We use the piece-wise parabolic reconstruction scheme, with the second-order Runge-Kutta time integration and HLLC Reimann solver. We choose the initial shock location along the diagonal of the computational domain connecting the top-left and the bottom-right. We consider a few hundreds to thousands of grid points in each side of the two-dimensional computational domain maintaining near near-equal aspect ratio of grid-spacing. The presence of the wall at the r.h.s. 
boundary is obtained by implementing reflecting boundary conditions. The region 1 inflow boundary conditions are implemented at the left and at the bottom of the computation domain. The top of the computation domain is maintained with a free outflow condition. In the lab frame S, region 1 increases its area as the point of contact P moves along the wall. The shape of the post-shock region i.e. the angle \(\alpha_{2}\) is not known a priori. To start the simulation we use the inflow condition from the left and lower boundaries. As the doubly-shocked region 2 develops and it interferes with this fixed inflow condition along the lower boundary. To prevent the impact of the boundary conditions on the results we focus our analysis on a region sufficiently close to the point of contact P, such that it is not affected by the lower boundary condition. Figure 3: Scatter marks represent the parameter space coverage of the numerical calculations presented in section 4. The blue, magenta and red circles correspond to the snapshots of RR in S and S\({}^{\prime}\) and MR in S\({}^{\prime}\) (section 4.1) respectively. For a small inclination angle, the S-frame captures RR effectively. The frame S\({}^{\prime}\) is applicable for the incident angle higher than the luminal boundary. ### Numerical setup in frame S\({}^{\prime}\) By construction, the shock s1 remains static in frame S\({}^{\prime}\) and the region 0 is confined between the shock s1 and the wall which makes an angle \(\alpha_{1}^{\prime}\) at P, given by \(\tan\alpha_{1}^{\prime}=\Gamma_{p}^{-1}\tan\alpha_{1}\). Region 0 has proper rest mass density \(\rho_{0}\), pressure \(p_{0}\) and velocity \(\mathbf{v}_{0}^{\prime}=-\mathbf{v}_{p}\). Region 1 has proper rest mass density \(\rho_{1}\), pressure \(p_{1}\) and the velocity is given by a Lorentz boost by \(-\mathbf{v}_{p}\) from the frame S value \(\mathbf{v}_{1}=\beta_{1}c(-\cos\alpha_{1},\,\sin\alpha_{1})\), \[\mathbf{v}_{1}^{\prime}=\frac{[-v_{1}\cos\alpha_{1},\,\Gamma_{p}(v_{ 1}\sin\alpha_{1}-v_{p})]}{\Gamma_{p}(1-\beta_{p}\beta_{1}\sin\alpha_{1})}. \tag{13}\] In the pluto setup we start the numerical simulation with the fluids in regions 0 and 1. The top and left edges of the computational domain maintain inflow boundary conditions of regions 0 and 1, respectively, while the bottom edge maintains a free outflow boundary condition. As the incident shock s1 impacts the wall, it forms the reflected shock s2 and the doubly-shocked region 2 develops. We also test the dynamical stability of the RR strong shock solution, by adding the corresponding algebraic solution for region 2 to the initial conditions of the simulation (as this solution is unstable and does not otherwise develop naturally in the numerical simulation). In this case the point of transition between the inflow boundary conditions of regions 0 and 1 is no longer at the top left corner, but is instead located at a fixed point along the top edge of the computational domain. We consider the evolution in frame S\({}^{\prime}\) in the vicinity of point P, as much as possible. The advantage of this frame is that the incident and reflected shocks are static, and the flow is steady for RR. However, since \(\beta_{p}=\beta_{1s}/\sin\alpha_{1}\), for high incident shock speeds \(\beta_{1s}\) and/or small incidence angles \(\alpha_{1}\), the velocity of point P might rise above the speed of light, and in this super-luminal regime frame S\({}^{\prime}\) does not exist. 
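Whether a given \((u_{1},\alpha_{1})\) pair can be followed in frame S\({}^{\prime}\) is decided by the kinematics of point P alone. The following minimal Python sketch (our illustration, not part of the pluto setup) evaluates \(\beta_{p}\) from eq. (1) and the boosted incidence angle from eq. (2), using the cold-upstream closed form for \(\beta_{s1}\) quoted at the start of Appendix A:

```python
import numpy as np

def point_P_kinematics(u1, alpha1):
    """Kinematics of the intersection point P for a cold upstream (c = 1).
    Uses the closed form beta_s1 = 4*Gamma1*u1/(4*Gamma1**2 - 1) quoted at
    the start of Appendix A, then eq. (1) for beta_p and eq. (2) for the
    boosted incidence angle.  Returns (beta_p, tan_alpha1_prime), with
    tan_alpha1_prime = None in the super-luminal regime where frame S'
    does not exist."""
    Gamma1 = np.sqrt(1.0 + u1**2)
    beta_s1 = 4.0 * Gamma1 * u1 / (4.0 * Gamma1**2 - 1.0)
    beta_p = beta_s1 / np.sin(alpha1)          # eq. (1)
    if beta_p >= 1.0:
        return beta_p, None                    # super-luminal
    Gamma_p = 1.0 / np.sqrt(1.0 - beta_p**2)
    return beta_p, np.tan(alpha1) / Gamma_p    # eq. (2)

for u1, alpha1 in [(1.0, 0.3), (1.0, 1.0), (0.316, 0.465)]:
    print((u1, alpha1), point_P_kinematics(u1, alpha1))
```

For \((u_{1},\alpha_{1})=(1,0.3)\) this returns \(\beta_{p}>1\) (super-luminal, which is why that case is evolved in frame S in § 4.1.1), while \((u_{1},\alpha_{1})=(0.316,0.465)\) gives \(\beta_{p}\simeq 0.87\) and \(\tan\alpha_{1}^{\prime}\simeq 0.25\), in line with the values quoted in § 4.1.3.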
We start a numerical evolution of fluid in region 1 and gradually it develops the region 2. As the post-shock region (region 2) develops, we select a region away from the boundary and find the location of the discontinuity. When performing simulations in frame S\({}^{\prime}\) we inject fluid into region 0 and 1 with velocities \(\mathbf{v}_{0}^{\prime}\) and \(\mathbf{v}_{1}^{\prime}\), from the upper and left boundaries of the simulation box, respectively. We have the freedom to choose arbitrary time units, \(t_{\rm unit}\) in S and \(t_{\rm unit}^{\prime}\) in S\({}^{\prime}\) (corresponding to length units \(l_{\rm unit}=ct_{\rm unit}\) and \(t_{\rm unit}^{\prime}=ct_{\rm unit}^{\prime}\)), to design the frame for the direct numerical study. In frame S we measure the simulation time in units of shock crossing time, \(t_{p}=L_{y}/v_{p}\), where \(L_{y}\) is the simulation box size along the wall. In frame S\({}^{\prime}\) we use as our time unit the sound crossing time of the doubly shocked region 2, \(t_{sc}^{\prime}=L^{\prime}/c_{s,2}\), where \(L^{\prime}\) is its length along the wall and \(c_{s,2}\) is the sound speed in region 2. ## 4 Results ### The \(u_{1}-\alpha_{1}\) parameter space Here we summarize the results obtained from our numerical simulations of shock reflection for different proper speeds \(u_{1}\) of the singly shocked region 1 and different incidence angles \(\alpha_{1}\). For a given \(u_{1}\), RR is expected for small enough values \(\alpha_{1}\) (see Figure 2). #### 4.1.1 Regular Reflection (RR) We have captured the time evolution of incident shock s1 and the development of reflected shock s2 by performing numerical simulations in two different rest frames. In the lab frame S, both the incident and the reflected shocks move (_left panel_ of Figure 1 and snapshots from the numerical study in Figure 4). The same incident and reflected shocks remain steady at the moving frame \(S^{\prime}\) (_right panel_ of Figure 1 and snapshots from the numerical study in Figure 5). In frame S we start the numerical simulation of an incident shock s1 with \(u_{1}=1,\beta_{1}=1/\sqrt{2}\approx 0.7071,\beta_{1s}=4\sqrt{2}/7\approx 0.0801, \rho_{1}/\rho_{0}=4\sqrt{2}\approx 5.657,\,p_{1}=4/3\)) and an incidence angle \(\alpha_{1}=0.3\) relative to the reflecting wall (at the bottom right corner of Figure 4a). Therefore, initially there is no doubly-shocked region 2. The intersection point P of the incident shock s1 and the wall moves along the wall at a speed \(v_{p}=\beta_{1s}/\sin\alpha_{1}\). We consider a computation box (a \(568\times 918\) grid in the \(x\)-\(y\) plane) with its vertical length (\(L_{y}\)) along the wall being 3.23 times larger than the horizontal length (\(L_{x}\)). We measure the time in units of the shock crossing time i.e. \(t_{p}=L_{y}/v_{p}\). As time evolves, a high-density doubly-shocked region 2 develops, between the wall and the reflected shock s2, which makes an angle \(\alpha_{2}=0.137\) relative to the wall (Figure 4b & 4c). The similarity between the Figures 4b & 4c indicates the self-similar nature of the flow with respect to the point P. The fluid in region 2 moves along Figure 4: Snapshots from a numerical simulation of shock reflection in the lab frame S for RR, at different times: (a) \(t/t_{p}=0.0\). (b) \(t/t_{p}=0.57\), (c) \(t/t_{p}=0.86\), where \(t_{p}=L_{y}/v_{p}\) is the box crossing time of point P (using equal aspect ratio). 
The computation domain of lengths ratio \(L_{y}:L_{x}=3.23:1\), where the reflecting wall is along its right boundary while the incident shock s1 is initially along its top-left bottom-right diagonal. The unit of length is arbitrary. The incident shock s1 leaves the computation domain at \(t=t_{p}\). The color scale indicates the fluid’s proper rest-mass density while the red arrows show its velocity vector. This simulation is initialized with \(\alpha_{1}=0.3\) and \(\mu_{1}=1\), where the latter implies \(u_{1s}=1.37\) and \(\rho_{1}/\rho_{0}=4\Gamma_{1}=4\sqrt{2}\) for \(p_{0}\ll\rho_{0}c^{2}u_{1}^{2}\). Panel (a) shows the initial conditions. Other snapshots (panels (b), (c)) indicate the gradual growth of the doubly shocked region 2 with a higher density. The reflected shock forms an angle \(\alpha_{2}=0.137\) with the wall. Region 2 remains uniform unless it is affected by the lower boundary condition. the wall, relatively slowly, at a proper speed \(u_{2}=0.247\) for \((u_{1},\alpha_{1})=(1,0.3)\). The inflow boundary condition at the lower boundary is unphysical within region 2, and its effects become more significant for a higher value of \(\alpha_{1}\). In frame S' we initialized the numerical simulations as described in SS 3.2. Figure 5 shows snapshots from such a simulation with \((u_{1},\alpha_{1})=(1,1)\) (or at \(\tan\alpha_{1}=1.557\)), such that \((\rho_{0}=1\), \(p_{0}=10^{-9}\rho_{0}c^{2})\) imply \((\beta^{\prime}_{0y}=-\beta_{p}=-0.907\), \(\rho_{1}=5.657\), \(p_{1}=1.333\), than \(\alpha^{\prime}_{1}=0.434\)). We start the numerical simulation with regions 0 and 1 (Figure 5a) in the computational box (of \(252\times 1038\) grid points) with its vertical length along the wall (\(L^{\prime}_{y}\)) being 4.13 times the horizontal length (\(L^{\prime}_{s}\)). The doubly-shocked region 2 develops with time (Figure 5b & 5c) as the incident shock s1 remain static in this frame. The reflected shock s2 settles down at an angle \(\alpha^{\prime}_{2}\) (with \(\tan\alpha^{\prime}_{2}=0.33\)) relative to the wall. The matter in region 2 (\(\rho_{2}/\rho_{0}=16.171\), \(p_{2}/\rho_{0}c^{2}=7.039\)) moves along the wall with a proper speed \(u^{\prime}_{2}=1.207\). Figure 6 shows that the doubly-shocked region 2 forms and settles down over a timescale close to its sound crossing time, \(t^{\prime}_{sc}=L^{\prime}/c_{s,2}\), where \(L^{\prime}\) is its length along the wall. We identify region 2 through its higher density relative to region 1. Region 2 contains proper-density fluctuations of a few percent (\(\lesssim 5\%\)) relative to the mean value. The non-uniformity in region 2 is due to the gradual transition at the boundary and the fluctuations. Figure 7 shows the snapshots displaying the density and velocity of the doubly-shocked region 2 at different times. Frame S' is suitable to study the shock interaction for high enough \(\alpha_{1}\) values, corresponding to the sub-luminal region. For RR, our simulations (in frame S') that initially did not contain the doubly-shocked region 2, evolved to and settled at the 'weak' shock RR solution. From the algebraic solution, one may also obtain a'strong' shock RR solution (in the super-sonic sub-luminal region - see GR23), corresponding to higher values of \(\alpha^{\prime}_{2}\) (and therefore \(\alpha_{2}\)), \(\rho_{2}\) and \(p_{2}\). 
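The region-1 values used to initialize these runs (\(\rho_{1}/\rho_{0}=4\sqrt{2}\simeq 5.657\) and \(p_{1}/\rho_{0}c^{2}=4/3\) for \(u_{1}=1\)) follow from the jump conditions (6)-(8) together with the EoS of eqs. (9)-(12). The short sketch below (ours, assuming the cold-upstream closed forms quoted at the start of Appendix A; it is not the paper's actual initialization script) checks this numerically:

```python
import numpy as np

def eos(theta):
    """h, gamma_hat and beta_cs**2 from eqs. (9)-(12), theta = p/(rho c^2)."""
    root = np.sqrt(theta**2 + 4.0 / 9.0)
    h = 2.5 * theta + np.sqrt(1.0 + 2.25 * theta**2)
    gamma_hat = (8.0 - 3.0 * theta + np.sqrt(4.0 + 9.0 * theta**2)) / 6.0
    cs2 = (3.0 * theta**2 + 5.0 * theta * root) / (12.0 * theta**2 + 2.0 + 12.0 * theta * root)
    return h, gamma_hat, cs2

def jump_residuals(u1):
    """Residuals of eqs. (6)-(8) across shock s1 for a cold upstream
    (p0 -> 0, rho0 = c = 1), with region 1 given by the closed forms of
    Appendix A: rho1 = 4*Gamma1, p1 = (4/3)*u1**2."""
    Gamma1 = np.sqrt(1.0 + u1**2)
    beta1 = u1 / Gamma1
    beta_s1 = 4.0 * Gamma1 * u1 / (4.0 * Gamma1**2 - 1.0)
    rho1, p1 = 4.0 * Gamma1, (4.0 / 3.0) * u1**2
    h1, _, _ = eos(p1 / rho1)
    w1 = rho1 * h1                                     # proper enthalpy density
    b0 = beta_s1                                       # region 0 in the shock frame
    b1 = (beta_s1 - beta1) / (1.0 - beta1 * beta_s1)   # region 1 (equals beta1/3)
    G0 = 1.0 / np.sqrt(1.0 - b0**2)
    G1 = 1.0 / np.sqrt(1.0 - b1**2)
    mass   = G0 * b0 - rho1 * G1 * b1                  # eq. (6), rho0 = 1
    mom    = G0**2 * b0**2 - (w1 * G1**2 * b1**2 + p1) # eq. (7), w0 = 1, p0 ~ 0
    energy = G0**2 * b0 - w1 * G1**2 * b1              # eq. (8)
    return mass, mom, energy

print(jump_residuals(1.0))   # all three residuals vanish to round-off
```

For the tested value of \(u_{1}\), all three residuals vanish to machine precision.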
To study the stability of this strong shock solution we numerically evolve the fluid variables starting from an initial configuration that also includes region 2 with properties corresponding to the algebraic strong shock solution (see Table 1, Figures 8 and 9). Figure 8 shows the resulting evolution of the mean density and pressure of the doubly-shocked region, in terms of \begin{table} \begin{tabular}{c c c c} \hline \multicolumn{2}{c}{Frame S} & \multicolumn{2}{c}{Frame S′} \\ \hline \hline \(\rho_{0}\) & 1 & \(\rho_{0}\) & 1 \\ \hline \(\beta_{0y}\) & 0 & \(\beta^{\prime}_{0y}\) & -\(v_{p}\)=\(0.9555363\) \\ \hline \(\alpha_{1}\) & 1.0079245 & \(\alpha^{\prime}_{1}\) & 0.43718 \\ \hline \(u_{1}\) & 1 & \(u^{\prime}_{1}\) & 1.795769 \\ \hline \(\rho_{1}\) & 5.6568 & \(\rho_{1}\) & 5.6568 \\ \hline \(p_{1}\) & 1.33333 & \(p_{1}\) & 1.33333 \\ \hline & Weak & Strong & Weak & Strong \\ \hline \hline \(\alpha_{2}\) & 0.8942 & 1.40331 & \(\alpha^{\prime}_{2}\) & 0.35191 & 1.05019 \\ \hline \(\beta_{2}\) & 0.737 & 0.9033 & \(\beta^{\prime}_{2}\) & -0.73887 & -0.38151 \\ \hline \(\frac{p_{2}}{\rho_{0}c^{2}}\) & 7.127 & 18.8817 & \(\frac{p_{2}}{\rho_{0}c^{2}}\) & 7.127 & 18.8817 \\ \hline \(\rho_{2}/\rho_{0}\) & 16.3425 & 27.6949 & \(\rho_{2}/\rho_{0}\) & 16.3425 & 27.6949 \\ \hline \end{tabular} \end{table} Table 1: Weak and strong solutions of a shock collision. Figure 5: Snapshots from a numerical simulation of shock reflection in the frame S’ for RR, at different times: (a) \(t^{\prime}/t^{\prime}_{sc}=0\), (b) \(t^{\prime}/t^{\prime}_{sc}=0.72\), (c) \(t^{\prime}/t^{\prime}_{sc}=1.43\) (where \(t^{\prime}_{sc}\) is defined in § 2.1). The height-to-width ratio of the computation domain is \(L^{\prime}_{y}/L^{\prime}_{x}=4.13\). The doubly-shocked region 2 develops between the wall and the reflected shock s2, which forms an angle \(\alpha^{\prime}_{2}\) (with \(\tan\alpha^{\prime}_{2}=0.33\)) relative to the wall. Figure 7 shows zoomed-in snapshots (of the region below the dashed white line in panel (b)), more densely sampled in time to better illustrate the formation of region 2. Figure 6: Average density (\(\rho_{2}\)), pressure (\(p_{2}\)) and velocity component along the wall (\(\beta_{2y}\)) of the doubly-shocked region 2, as it forms and settles down (to \(\rho_{2,\rm{set}}\), \(p_{2,\rm{set}}\), and \(\beta_{2y,\rm{set}}\), respectively) over about a sound crossing time (\(t^{\prime}_{sc}\)). The vertical dotted lines indicate the time stamps of the snapshots shown in Figure 7. its fractional deviation from the weak and strong shock solutions. The system quickly transitions from the algebraic strong shock RR solution to its numerical counterpart, in which the density and pressure in region 2 differ by \(\sim\) 1.5-2%. The system then starts to linearly deviate from this solution with a growth rate of about 0.5 \(t_{sc}^{\prime}\)\({}^{-1}\) or \(e\)-folding time about 2\(t_{sc}^{\prime}\) (two sound crossing times). Subsequently, the transition between the strong and weak shock solutions enters a non-linear phase. Finally, the weak shock RR solution is approached at about 10 sound crossing time (\(t_{sc}^{\prime}\)), and the deviation from this solution appears to decrease exponentially with time. Figure 9 shows snapshots from the corresponding simulation (in frame S') in Figure 9 displaying the fluid variables at different times (indicated by the dashed vertical lines in Figure 8). 
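The averages plotted in Figure 6 (and the fractional deviations of Figure 8) can be extracted from a snapshot with a simple masking diagnostic of the kind sketched below. This is an illustrative post-processing routine of ours; the density-excess threshold is an assumption, not the criterion actually used for the figures:

```python
import numpy as np

def region2_means(rho, prs, vely, rho1, excess=0.5):
    """Mean proper density, pressure and wall-parallel velocity of the
    doubly-shocked region 2 in a snapshot, selecting cells whose density
    exceeds the region-1 value by a chosen fraction (region 2 is the
    densest region for RR).  rho, prs, vely are 2D arrays."""
    mask = rho > (1.0 + excess) * rho1
    if not np.any(mask):
        return None
    return rho[mask].mean(), prs[mask].mean(), vely[mask].mean()

# usage (hypothetical snapshot arrays):
# rho2_bar, p2_bar, beta2y_bar = region2_means(rho, prs, vely, rho1=4.0 * np.sqrt(2.0))
```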
Panels b & \(\bar{\rm b}\) show a small (linear order) change in density, pressure and shape of the doubly-shocked region 2 from its initial state (which is shown in panels a & \(\bar{\rm a}\)). Panels c & \(\bar{\rm c}\) indicate the significant (non-linear) changes, with the transient appearance of a new (third) shock and a contact discontinuity, which bound a triply-shocked region at the bottom-right corner of the snapshot. Panels d & \(\bar{\rm d}\) show that the system has reached the weak shock solution and stays there. #### 4.1.2 Irregular Reflection (IR) or Mach Reflection (MR) In the sub-sonic region, if there were RR then the dense, high-pressure fluid in the doubly-shocked region 2 would be in causal contact with point P, and cause it to detach from the wall, thereby leading to IR. For this reason there is no RR in the sub-sonic region, and instead only IR types of shock reflection such as MR. As mentioned in § 2.2, there may be a dual region within the super-sonic region where both RR and IR/MR are possible, but the weak shock RR solution appears to be the most stable attractor solution that generically appears in our simulations. Therefore, we generally expect the formation of IR/MR in our simulations in the sub-sonic regime. For a given \(u_{1}\), this corresponds to sufficiently large incidence angles \(\alpha_{1}\). For such incidence angles, the post-shock region develops multiple zones separated by discontinuities. Figure 10 shows an example of a simulation for such a case, where MR develops. For this numerical simulation we considered an incident shock characterized by \((\alpha_{1},\,u_{1})=(1.1,\,1.0)\), with an unshocked region 0 of (\(\rho_{0}=1\), \(p_{0}/\rho_{0}c^{2}=10^{-9}\), \(u_{0}=0\)). To calculate the fluid variables in frame S' we consider the corresponding boost of \(\beta_{p}=0.9067722\), which implies an S'-frame incidence angle of \(\tan\alpha_{1}^{\prime}=0.8283838\). We evolve the fluid maintaining the boundary conditions mentioned in section 3.2. Figure 10 shows snapshots from this simulation. The meeting point of the incident and the reflected shocks, P, detaches from the reflecting wall (_panels_ b, c) and a Mach stem develops, behind which there is singly shocked fluid at pressure equilibrium across a contact discontinuity with doubly-shocked fluid behind the reflected shock s2. One can clearly see the development of Kelvin-Helmholtz instability along this contact discontinuity due to the velocity shear (discontinuous parallel velocity component) across it. The reflected shock assumes an irregular non-triangular shape. Figure 8: The evolution of the mean proper rest-mass density (\(\rho_{2}\)) and pressure (\(p_{2}\)) in the doubly-shocked region 2, shown in terms of their fractional deviations from their values for the ‘strong’ (\(\rho_{\rm strong}\), \(p_{\rm strong}\)) and ‘weak’ (\(\rho_{\rm weak}\), \(p_{\rm weak}\)) shock RR solutions. The simulation starts from the algebraic ‘strong’ shock solution and the system moves to the ‘weak’ shock solution within several sound-crossing times of region 2 (\(t_{sc}^{\prime}\)). The vertical dashed lines denote the times of the snapshots shown in Figure 9. Figure 7: The gradual development of the doubly-shocked region 2 is shown in this sequence of snapshots in frame S’, depicting the lower half of the computational domain from Figure 5 at the times indicated by the vertical dotted line in Figure 6. 
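The S\({}^{\prime}\)-frame initial data for the MR run of § 4.1.2, \((\alpha_{1},\,u_{1})=(1.1,\,1.0)\), follow from the boost of eq. (13). As a quick cross-check (an illustrative snippet of ours, again using the cold-upstream \(\beta_{s1}\) of Appendix A), the quoted values of \(\beta_{p}\) and \(\tan\alpha_{1}^{\prime}\) are recovered directly:

```python
import numpy as np

u1, alpha1 = 1.0, 1.1
Gamma1 = np.sqrt(1.0 + u1**2)
beta1 = u1 / Gamma1
beta_s1 = 4.0 * Gamma1 * u1 / (4.0 * Gamma1**2 - 1.0)   # cold upstream (Appendix A)
beta_p = beta_s1 / np.sin(alpha1)                       # eq. (1)
Gamma_p = 1.0 / np.sqrt(1.0 - beta_p**2)
print(beta_p, np.tan(alpha1) / Gamma_p)   # ~0.90677 and ~0.82838, as quoted above

# region-1 three-velocity in frame S', eq. (13)
denom = Gamma_p * (1.0 - beta_p * beta1 * np.sin(alpha1))
v1_prime = np.array([-beta1 * np.cos(alpha1),
                     Gamma_p * (beta1 * np.sin(alpha1) - beta_p)]) / denom
print(v1_prime, np.linalg.norm(v1_prime))
```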
The Mach stem slowly moves upward in frame S' where this simulation is performed, at a constant speed, such that its length (or the distance of point P from the wall) increases linearly with time. In this paper, we do not explore in detail the characteristics of this IR/MR and instead leave this for a future work. Figure 11: Snapshots (proper rest-mass density colormap and red velocity vectors) from simulations of RR for \((\alpha_{1},\,u_{1})=(0.465,\,0.316)\) performed in: (a) the lab frame S (\(\tan\alpha_{2}=0.281\)) at \(t/t_{p}=0.65\), and (b) the moving frame S’ (\(\tan\alpha_{1}^{\prime}=0.246\), \(\tan\alpha_{2}^{\prime}=0.139\)) at \(t^{\prime}/t^{\prime}_{sc}=1.91\) Figure 10: Snapshots from a simulation with \((\alpha_{1},\,u_{1})=(1.1,\,1.0)\) in which Mach reflection (MR) develops, at times \(t^{\prime}/t^{\prime}_{sc}=0\), \(t^{\prime}/t^{\prime}_{sc}=0.2\), and \(t^{\prime}/t^{\prime}_{sc}=0.4\).Each panel shows a colormap of the proper rest-mass density and red velocity vectors. The side ratio of the computation domain is \(L^{\prime}_{y}/L^{\prime}_{x}=2.46\). Figure 9: Snapshots of proper rest-mass density (_top panels_ a-d) and pressure (_bottom panels_\(\bar{\rm a}\)-\(\bar{\rm d}\)) from a simulation in frame S’ starting from the algebraic strong shock RR solution (_panels_\({\rm a},\bar{\rm a}\)). The red arrows are velocity vectors, whose size indicates the fluid speed at their starting point. This time sequence captures the evolution between the strong and weak shock solutions, in the initial linear phase (_panels_\({\rm b},\bar{\rm b}\)) and subsequent non-linear phase (_panels_\({\rm c},\bar{\rm c}\)). The system finally settles in the weak shock solution (_panels_\({\rm d},\bar{\rm d}\)). #### 4.1.3 consistency of numerical results in frames S and S' Here we show the consistency of the numerical results obtained through relativistic hydrodynamic simulations performed in rest frames S and S\({}^{\prime}\). Figure 11 shows the snapshots from frames S and S\({}^{\prime}\) for the same physical shock reflection case of \((\alpha_{1},\,u_{1})=(0.465,\,0.316)\). In the lab frame S the frame S\({}^{\prime}\) moves with velocity \(v_{p}=0.87c\) upward along the wall, such that the incidence angle \(\tan\alpha_{1}=0.501\) in S transforms to \(\tan\alpha_{1}^{\prime}=0.246\) in S\({}^{\prime}\). From the numerical evolution studies, we obtain the development of the doubly-shocked region 2. In frame S we derive a value of \(\tan\alpha_{2}=0.281\) for the angle of the reflected shock s2 relative to the wall, and in frame S\({}^{\prime}\) we derive a corresponding value of \(\tan\alpha_{2}^{\prime}=0.139\). Using a Lorentz boost from frame S\({}^{\prime}\) to S we obtain \(\tan\alpha_{2}=0.284\) from the evolution in frame S\({}^{\prime}\). The algebraic solution corresponding to the same input parameters gives \(\tan\alpha_{2}=0.282355\). Therefore, the angle of reflection is consistent in both frames S and S\({}^{\prime}\) to within 1%, and both are consistent with the analytic value. 
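The ~1% agreement quoted above can be reproduced directly from the angle aberration of eq. (2), applied in reverse to the rounded values quoted in this subsection:

```python
import numpy as np

beta_p = 0.87                      # speed of frame S' quoted above
Gamma_p = 1.0 / np.sqrt(1.0 - beta_p**2)
tan_alpha2_prime = 0.139           # reflected-shock angle measured in S'
print(Gamma_p * tan_alpha2_prime)  # ~0.282, vs 0.281 measured in S and
                                   # 0.282355 from the algebraic solution
```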
#### 4.1.4 consistency of the numerical and analytic results We compare the results of our numerical simulations with the exact algebraic solution for the relatively simpler case Figure 12: Comparing the matter proper rest-mass density in the doubly-shocked region 2 (\(\rho_{2}\), normalized by \(\rho_{0}\)) from the analytic algebraic solution (solid green lines) to our hydrodynamic simulation results, in the lab frame S (blue x symbols) and in the moving frame S\({}^{\prime}\) (red + symbols), for \(u_{1}=10,1,0.1\) (from top to bottom). The vertical dotted black line (dashed cyan line) corresponds to the luminal (sonic) line. Frame S is well-suited for low \(\alpha_{1}\) values, while frame S\({}^{\prime}\) is a favorable option near the sonic line. Figure 13: Similar to Fig. 12 but comparing the region 2 pressure. of RR, for which such an analytic solution can be obtained (GR23). The critical incidence angle along the sonic line, \(\alpha_{1,\mathrm{sonic}}(u_{1})\), below which RR is possible, increases as the incident shock velocity \(\beta_{s1}\) increases. A detailed comparison is shown for the proper rest-mass density \(\rho_{2}\) (Figure 12), and pressure \(p_{2}\) (Figure 13) of the doubly-shocked region 2, as well as the ratio of the tangens of the angles relative to the wall of the reflected and incident shock fronts, \(\tan\alpha_{2}/\tan\alpha_{1}\) (Figure 14). We note that lab frame S simulation is more accurate for a smaller incidence angle \(\alpha_{1}\). As the value of \(\alpha_{1}\) increases the doubly-shocked region 2 in frame S is more strongly affected by the imposed inflow lower boundary condition. Hence, the numerical results deviate from the expected value as \(\alpha_{1}\) approaches the critical value for a RR (i.e. the sonic line). Frame S\({}^{\prime}\) is more suitable for simulations in this regime, and can follow the shock reflection for longer times, as the flow becomes steady in S\({}^{\prime}\) for RR. ### The Sonic Line #### 4.2.1 The Significance of the Sonic Line The sonic line corresponds to the condition \[\beta^{\prime}_{2,w}=\beta_{c_{s},2,w}\ \ \Longleftrightarrow\ \ \beta_{p}=\frac{\beta_{2,w}+\beta_{c_{s},2,w}}{1+\beta_{2,w}\beta_{c_{s},2,w }}. \tag{14}\] The first condition is that in the rest frame S\({}^{\prime}\) where the flow is steady the velocity in region 2 (of the doubly-shocked fluid) for the weak shock RR solution, \(\beta^{\prime}_{2,w}\), is equal to its sound speed, \(\beta_{c_{s},2,w}\). Once \(\beta^{\prime}_{2,w}\) drops below \(\beta_{c_{s},2,w}\), i.e. in the subsonic regime, region 2 comes into causal contact with point \(P\), and can then potentially cause it to separate from the wall resulting in MR. On the other hand, in the supersonic regime (\(\beta^{\prime}_{2}>\beta_{c_{s},2}\)) region 2 is not in causal contact with point \(P\) (for an initial unperturbed weak shock RR solution) so it cannot affect it and therefore point \(P\) cannot separate from the wall and allow a transition to IR/MR. This may potentially suppress a transition between the weak shock RR solution and MR (in the dual region between the sonic line and the mechanical equilibrium line; see e.g. GR23), and require a sufficiently large perturbation for it to occur. The fact that the strong shock RR solution is always subsonic (\(\beta^{\prime}_{2,w}<\beta_{c_{s},2,z}\)) may potentially account for its instability, e.g. as found in SS 4.1.1. 
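The sonic criterion is straightforward to evaluate for a given region-2 state. As an illustration (ours), applying the sound speed of eq. (12) to the weak and strong solutions of Table 1 (the \((u_{1},\alpha_{1})\simeq(1,1.008)\) case) confirms that the weak solution is supersonic in S\({}^{\prime}\) while the strong one is subsonic:

```python
import numpy as np

def sound_speed(theta):
    """beta_cs from eq. (12), with theta = p/(rho c^2)."""
    root = np.sqrt(theta**2 + 4.0 / 9.0)
    cs2 = (3.0 * theta**2 + 5.0 * theta * root) / (12.0 * theta**2 + 2.0 + 12.0 * theta * root)
    return np.sqrt(cs2)

def supersonic_in_Sprime(beta2_prime, p2, rho2):
    """Sonic criterion of eq. (14): is |beta'_2| larger than beta_cs,2 ?"""
    return abs(beta2_prime) > sound_speed(p2 / rho2)

# Table 1, (u1, alpha1) ~ (1, 1.008):
print(supersonic_in_Sprime(-0.73887, 7.127, 16.3425))    # weak solution   -> True
print(supersonic_in_Sprime(-0.38151, 18.8817, 27.6949))  # strong solution -> False
```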
Since the sonic condition is that for causality between region 2 and point \(P\), it can also be expressed in the lab frame S such that the speed of a sound wave propagating in region 2 along the wall towards point \(P\), \((\beta_{2}+\beta_{c_{s},2})/(1+\beta_{2}\beta_{c_{s},2})\), equals that of point \(P\), \(\beta_{p}\). #### 4.2.2 The Flow Properties along the Sonic Line The sonic line's physical significance makes it interesting to study in detail the flow properties along it. The analytic solution for the flow properties along the sonic line is derived in Appendix A, along with analytic expressions for all of the flow quantities in the Newtonian and relativistic limits. Figure 15 shows the values of different hydrodynamic variables along the sonic line (\(\beta^{\prime}_{2,w}=\beta_{c_{s},2,w}\)), conveniently parameterised according to the value of \(u_{1}\) along this line. These values are found by numerically solving the set of algebraic equations for RR, together with the sonic condition, in the frame S\({}^{\prime}\) (Appendix A) or equivalently in the lab frame S (as is done in GR23). The weak and strong shock RR solutions exactly coincide at the detachment line, which almost coincides with the sonic line, such that both solutions are extremely close along the sonic line. There is excellent agreement with both the Newtonian and the relativistic limits that are found analytically in Appendix A. Moreover, it can be seen from Fig. 15 that in the relativistic limit \(u_{p}>u_{2}>u_{1}\gg 1\) while \(u_{12}\) is of order unity, such that the first (incident) shock \(s1\) is ultra-relativistic, while the second (reflected) shock \(s2\) is mildly relativistic. Figure 16 shows the flow configurations for the asymptotic Newtonian and relativistic limits along the sonic line, in the rest frame S\({}^{\prime}\) where the flow is steady and point \(P\) is at rest. These limits are particularly simple and can be fully solved analytically (Appendix A). In frame S\({}^{\prime}\) the angles \(\alpha^{\prime}_{1}\), \(\alpha^{\prime}_{2}\) and \(\chi^{\prime}\) do not vary that drastically between these two limits (see also Fig. 15). Figure 17 shows the location of the transition from RR to MR (red + symbols). For each \(u_{1}\) values the incidence angle \(\alpha_{1}\) is gradually increased between different simulations until we find the critical value \(\alpha_{1,\mathrm{crit}}(u_{1})\) at which point P detaches from the wall, signaling the transition from RR to MR. It is determined more accurately by performing several iterations for each \(u_{1}\) value. These numerical values are in excellent agreement with the analytically calculated location of the sonic line (solid cyan line in Fig. 17), \(\alpha_{1,\mathrm{crit}}(u_{1})=\alpha_{1,\mathrm{sonic}}(u_{1})\). These simulations clearly shows Figure 15: The values of different hydrodynamic variables along the sonic line (\(\beta^{\prime}_{2}=\beta_{c_{s},2}\)) are shown as a function of the value of \(u_{1}\) along this line (_thick solid lines_). The Newtonian and relativistic limits from Equations (A6) and (A9), respectively, are indicated by thin dashed lines (which fall on top of the thick solid lines). that: (i) there is indeed no RR in the sub-sonic region, and (ii) in the super-sonic region the simulations reach the weak shock RR solution and not the strong shock RR solution (in the sub-luminal region) or MR (in the dual region). 
Figure 18 shows the critical transition angle in frame S\({}^{\prime}\), \(\tan\alpha_{\rm i,crit}^{\prime}(u_{1})\), from our numerical simulations (red + symbols), compared to the analytically derived corresponding angle for the sonic line, \(\tan\alpha_{\rm 1,sonic}^{\prime}(u_{1})\), whose asymptotic Newtonian and relativistic limits are shown by horizontal black dotted lines. They agree to within about 1%. This critical angle in frame S\({}^{\prime}\) decreases by about 10% between the Newtonian and relativistic limits (see also Fig. 15). ## 5 Conclusions We have studied relativistic shock reflection, mainly numerically using two dimensional relativistic hydrodynamic simulations, and with detailed comparisons to analytic results. Our simulations were performed in two different rest frames: the lab frame S where the cold unshocked fluid (region 0) is at rest, and the rest frame S\({}^{\prime}\) where for RR the point P of intersection of the incident and reflected shocks with the reflecting wall is at rest, and the flow is in a steady state. Our numerical simulations have validated the analytic results (derived mainly in GR23, but also in Appendix A for the sonic line). We have also pointed out the importance of using a suitable reference frame in different simulations of shock interactions. In particular, for small incidence angles \(\alpha_{1}\) the lab frame S is more suitable, while the frame S\({}^{\prime}\) that exists only in the sub-luminal region is more suitable closer to the sonic line. We have also studied the transition between RR and IR (namely MR). The transition from RR to IR/MR maintains similar characteristics in the Newtonian and relativistic regimes. In our numerical study, this transition occurred exactly at the sonic/detachment line. Moreover, in the super-sonic region the simulations always reached the weak shock RR solution. We have found the alternative strong shock RR solution, which exists in the super-sonic sub-luminal region, to be unstable. Moreover, we numerically studied how it transitions to the weak shock RR solution, which appears to be a stable attractor solution. While a dual region where either RR or MR are Figure 16: The asymptotic Newtonian (_right panel_) and relativistic (_left panel_) flow configurations along the sonic lines, for which the flow parameters are given in Eqs. (14) and (15), respectively. Figure 17: The location of the transition from RR to MR (red + symbols; \(\alpha_{\rm 1,crit}(u_{1})\)) found from our numerical simulations in frame S\({}^{\prime}\), match the analytically calculated location of the sonic line (solid cyan line; \(\alpha_{\rm 1,sonic}(u_{1})\)). The dotted black line indicates the luminal line, which is shown for reference. Figure 18: The critical transition angle in frame S\({}^{\prime}\), \(\tan\alpha_{\rm 1,crit}^{\prime}(u_{1})\), from our numerical simulations (red + symbols), compared to the analytically derived corresponding angle for the sonic line, \(\tan\alpha_{\rm 1,sonic}^{\prime}(u_{1})\), whose asymptotic Newtonian and relativistic limits are shown by horizontal black dotted lines. possible should exist from analytic considerations (on the super-sonic side of the sonic line but well within the sub-luminal region), it was never reached in our simulations, suggesting that it is not an attractor solution (and may also be unstable). ## Acknowledgement P. 
Bera is supported by the Israel Academy of Sciences and Humanities & Council for Higher Education Excellence Fellowship Program for International Postdoctoral Researchers. This research was funded in part by the ISF-NSFC joint research program under grant no. 3296/19 (J.G.) and by the United States-Israel Binational Science Foundation (BSF) under grant no. 2020747 (P. Beniamini). ## Data availability The data underlying this article will be shared on reasonable request to the corresponding author. ## Appendix A The solution along the sonic line Here we derive an analytic solution for RR along the sonic line. The conditions in region 1, for a cold region 0 (\(p_{0}=0\), \(e_{0}=w_{0}=\rho_{0}c^{2}\) and \(h_{0}=1\)), can be conveniently calculated in the lab frame S and are given by \[\rho_{1}=4\Gamma_{1}\rho_{0}\,\quad p_{1}=\tfrac{4}{3}u_{1}^{2}\rho_{ 0}c^{2}\,\quad e_{1}=4\Gamma_{2}^{2}\rho_{0}c^{2}\,\] \[e_{\rm int,1}=\tfrac{4\Gamma_{1}u_{1}^{2}}{\Gamma_{1}+1}\rho_{0} c^{2}\,\quad w_{1}=4\Gamma_{2}^{3}\left(1+\tfrac{\beta_{1}^{2}}{3}\right)\rho_{0}c^{2 }\, \tag{15}\] \[\beta_{s1}=\tfrac{4\Gamma_{1}u_{1}}{4\Gamma_{1}^{2}-1}\,\quad u_{1}= \tfrac{1}{2}\sqrt{u_{s1}^{2}-2+\sqrt{4+5u_{s1}^{2}+u_{s1}^{4}}}\,\] \[\beta_{1,s1}=\tfrac{\beta_{s1}-\beta_{1}}{1-\beta_{1}\beta_{s1}} =\tfrac{\beta_{1}}{3}\,\] (GR23) where the last equation means that for our equation of state, in the rest frame of the downstream fluid (region 1), the speed at which the shock is receding is a third of the incoming upstream speed. Since the sonic line is always in the sub-luminal regime, it can conveniently be analyzed in frame S\({}^{\prime}\). In this frame the flow is steady and there are two oblique shocks: s1 and s2, at angles \(\alpha_{1}^{\prime}\) and \(\alpha_{2}^{\prime}\), respectively, relative to the wall. 
The velocity of region 1 in frame S\({}^{\prime}\) can be expressed through \[\mathbf{u}_{1}^{\prime} = \left[-u_{1}\cos\alpha_{1},\ \Gamma_{p}\Gamma_{1}(\beta_{1}\sin \alpha_{1}-\beta_{p})\right]\,\] \[\Gamma_{1}^{\prime} = \sqrt{1+{u_{1}^{\prime}}^{2}}=\Gamma_{1}\Gamma_{p}(1-\beta_{1} \beta_{p}\sin\alpha_{1}) \tag{16}\] \[= \frac{3\,\Gamma_{1}\sin\alpha_{1}}{\sqrt{(4\Gamma_{1}^{2}-1)^{2} \sin^{2}\alpha_{1}-16\Gamma_{1}^{2}u_{1}^{2}}}\,\] \[\tan\chi^{\prime} = \frac{u_{1z}^{\prime}}{u_{1y}^{\prime}}=\frac{\beta_{1}\cos\alpha _{1}}{\Gamma_{p}(\beta_{p}-\beta_{1}\sin\alpha_{1})}\,\] \[\tan\alpha_{1}^{\prime} = \frac{\tan\alpha_{1}}{\Gamma_{p}}\,\qquad\tan\alpha_{2}^{\prime}= \frac{\tan\alpha_{2}}{\Gamma_{p}}\.\] The remaining conditions are the oblique shock jump conditions in frame S\({}^{\prime}\) and the sonic condition, which read \[\rho_{1}u_{1}^{\prime}\sin\alpha_{+}^{\prime} = \rho_{2}u_{2}^{\prime}\sin\alpha_{2}^{\prime}\,\] \[{w_{1}}{u_{1}^{\prime}}^{2}\sin\alpha_{+}^{\prime}+p_{1} = w_{2}{u_{2}^{\prime}}^{2}\sin^{2}\alpha_{2}^{\prime}+p_{2}\,\] \[w_{1}\Gamma_{1}^{\prime}u_{1}^{\prime}\sin\alpha_{+}^{\prime} = w_{2}\Gamma_{2}^{\prime}u_{2}^{\prime}\sin\alpha_{2}^{\prime}\, \tag{17}\] \[\beta_{1}^{\prime}\cos\alpha_{+}^{\prime} = \beta_{2}^{\prime}\cos\alpha_{2}^{\prime}\,\] \[\beta_{2}^{\prime} = \beta_{c_{s},2}\,\] where we denote \(\alpha_{+}^{\prime}=\chi^{\prime}+\alpha_{2}^{\prime}\) and \[\beta_{c_{s},2}^{2}=\frac{3\Theta_{2}^{2}+5\Theta_{2}\sqrt{\Theta_{2}^{2}+4/9 }}{12\Theta_{2}^{2}+2+12\Theta_{2}\sqrt{\Theta_{2}^{2}+4/9}}\,\qquad\Theta_{2}=\frac{p_{2}}{\rho_{2}c^{2}}\.\] In the Newtonian limit (\(\beta_{1}<\beta_{p}\ll 1\)) the adiabatic index is \(\dot{\gamma}=5/3\) and the equations reduce to \[\tfrac{\rho_{1}}{\rho_{0}}=4\,\quad\tfrac{p_{1}}{\rho_{0}}=\tfrac{4}{3 }v_{1}^{2}\,\quad\tfrac{e_{\rm int,1}}{\rho_{0}}=2v_{1}^{2}\,\quad\beta_{s1}=\tfrac{4}{3}\beta_{1}\,\] \[\mathbf{v}_{1}^{\prime}=v_{1}\left(-\cos\alpha_{1},\ \sin\alpha_{1}-\tfrac{4}{3\sin\alpha_{1}}\right)\,\] \[\qquad\tan\chi^{\prime}=\tfrac{v_{1z}^{\prime}}{v_{1y}}=\tfrac{3 \cos\alpha_{1}\sin\alpha_{1}}{1+3\cos^{2}\alpha_{1}}\, \tag{18}\] \[\rho_{1}v_{1}^{\prime}\sin\alpha_{+}^{\prime}=\rho_{2}v_{1}^{ \prime}\sin\alpha_{2}^{\prime}\,\] \[\rho_{1}v_{1}^{\prime}\sin^{2}\alpha_{+}^{\prime}+p_{1}=\rho_{2}v_{ 1}^{\prime}\sin^{2}\alpha_{2}^{\prime}\,\] \[\rho_{1}v_{1}^{\prime}\sin^{2}\alpha_{+}^{\prime}+p_{1}=\rho_{2}v_{ 1}^{\prime}\sin^{2}\alpha_{2}^{\prime}+p_{2}\,\] \[\tfrac{5\,p_{1}}{\rho_{1}}+v_{1}^{\prime}\sin^{2}\alpha_{+}^{ \prime}=5\tfrac{p_{2}}{\rho_{2}}+v_{2}^{\prime}\sin^{2}\alpha_{2}^{\prime}\,\] \[v_{1}^{\prime}\cos\alpha_{+}^{\prime}=v_{2}^{\prime}\cos\alpha_{2}^{ \prime}\,\] \[v_{2}^{\prime}=\tfrac{5\,p_{2}}{\beta_{2}}\,\] which have the following simple solution: \[v_{p}=v_{0}^{\prime}=\tfrac{4}{\sqrt{3}}v_{1}=\sqrt{3}\,v_{s1}= \tfrac{4}{\sqrt{11}}v_{1}^{\prime}=2v_{2}^{\prime}=2v_{2}\,\] \[\rho_{2}=\tfrac{5}{2}\rho_{1}=10\rho_{0}\,\qquad p_{2}=6p_{1}=8\rho_{0}v_{1}^{2}\,\] \[\tfrac{u_{2}}{u_{1}}\to\tfrac{\beta_{2}}{\beta_{1}}=\tfrac{2}{ \sqrt{3}}\,\qquad\tfrac{u_{2}^{\prime}}{u_{1}^{\prime}}\to\tfrac{\beta_{2}^{ \prime}}{\beta_{1}}=\tfrac{2}{\sqrt{11}}\,\] \[\tfrac{u_{1z}}{u_{1}}\to\tfrac{\beta_{2}}{\beta_{1}}=1\,\qquad\tfrac{u_{2}}{u_{1}}\to\tfrac{\beta_{p}}{\beta_{1}}=\tfrac{4}{\sqrt{3}}\, \tag{19}\] \[\tan\alpha_{1}^{\prime}=\tan\alpha_{1}=\tan\alpha_{2}^{\prime}= \tan\alpha_{2}=\tfrac{1}{\sqrt{2}}\,\] \[\sin\alpha_{1}^{\prime}=\sin\alpha_{1}=\sin\alpha_{2}^{\prime}= 
\sin\alpha_{2}=\tfrac{1}{\sqrt{3}}\,\] \[\cos\alpha_{1}^{\prime}=\cos\alpha_{1}=\cos\alpha_{2}^{\prime}= \cos\alpha_{2}=\sqrt{\tfrac{2}{3}}\,\] \[\tan\chi^{\prime}=\tfrac{\sqrt{2}}{3}\,\qquad\sin\chi^{\prime}= \sqrt{\tfrac{2}{11}}\,\qquad\cos\chi^{\prime}=\tfrac{3}{\sqrt{11}}\,\] From Fig. 15 it can be seen that in the relativistic limit (\(u_{p}>u_{2}>u_{1}\gg 1\)) \(u_{12}\) is of order unity, such that while the first (incident) shock \(s1\) is ultra-relativistic (with a relative upstream to downstream proper speed of \(u_{1}\gg 1\)), the second (reflected) shock \(s2\) is only mildly relativistic. Therefore, while region 0 is cold, both regions 1 and 2 are relativistically hot, with an adiabatic index of \(\dot{\gamma}=4/3\) and \(p=e_{\rm int}/3\gg\rho c^{2}\). In this limit the
\frac{a\sqrt{8(1-2a^{2})}}{\sqrt{1+16a^{2}}}\approx 0.344798730926\,\] \[\cos\chi^{\prime} \rightarrow \frac{1+4a^{2}}{\sqrt{1+16a^{2}}}\approx 0.938676640357\,\] \[\tan\chi^{\prime} \rightarrow \frac{a\sqrt{8(1-2a^{2})}}{1+4a^{2}}\approx 0.367324290498\,\] \[\frac{\rho_{2}}{\rho_{1}} \rightarrow \frac{1-\sqrt{3}+4(4-\sqrt{3})a^{2}}{2(1-\sqrt{3}+4a^{2})\sqrt{1-2a ^{2}}}\approx 3.384471042815\,\] \[\frac{p_{2}}{p_{1}} \rightarrow \frac{\sqrt{3}-3+4(4\sqrt{3}-3)a^{2}}{4(1-\sqrt{3}+4a^{2})(1-2a^{2 })}\approx 5.549111674227\.\]
2305.20091
Humans in 4D: Reconstructing and Tracking Humans with Transformers
We present an approach to reconstruct humans and track them over time. At the core of our approach, we propose a fully "transformerized" version of a network for human mesh recovery. This network, HMR 2.0, advances the state of the art and shows the capability to analyze unusual poses that have in the past been difficult to reconstruct from single images. To analyze video, we use 3D reconstructions from HMR 2.0 as input to a tracking system that operates in 3D. This enables us to deal with multiple people and maintain identities through occlusion events. Our complete approach, 4DHumans, achieves state-of-the-art results for tracking people from monocular video. Furthermore, we demonstrate the effectiveness of HMR 2.0 on the downstream task of action recognition, achieving significant improvements over previous pose-based action recognition approaches. Our code and models are available on the project website: https://shubham-goel.github.io/4dhumans/.
Shubham Goel, Georgios Pavlakos, Jathushan Rajasegaran, Angjoo Kanazawa, Jitendra Malik
2023-05-31T17:59:52Z
http://arxiv.org/abs/2305.20091v3
# Humans in 4D: Reconstructing and Tracking Humans with Transformers ###### Abstract We present an approach to reconstruct humans and track them over time. At the core of our approach, we propose a fully "transformerized" version of a network for human mesh recovery. This network, HMR 2.0, advances the state of the art and shows the capability to analyze unusual poses that have in the past been difficult to reconstruct from single images. To analyze video, we use 3D reconstructions from HMR 2.0 as input to a tracking system that operates in 3D. This enables us to deal with multiple people and maintain identities through occlusion events. Our complete approach, 4DHumans, achieves state-of-the-art results for tracking people from monocular video. Furthermore, we demonstrate the effectiveness of HMR 2.0 on the downstream task of action recognition, achieving significant improvements over previous pose-based action recognition approaches. Our code and models are available on the project website: [https://shubham-goel.github.io/ddhumans/](https://shubham-goel.github.io/ddhumans/). ## 1 Introduction In this paper, we present a fully transformer-based approach for recovering 3D meshes of human bodies from single images, and tracking them over time in video. We obtain unprecedented accuracy in our single-image 3D reconstructions (see Figure 1) even for unusual poses where previous approaches struggle. In video, we link these reconstructions over time by 3D tracking, in the process bridging gaps due to occlusion or detection failures. These 4D reconstructions can be seen on the project webpage. Our problem formulation and approach can be conceived as the "transformerization" of previous work on human mesh recovery, HMR [29] and 3D tracking, PHALP [62]. Since the pioneering ViT paper [14], the process of "transformeration", _i.e_., converting models from CNNs or LSTMs to transformer backbones, has advanced rapidly across multiple computer vision tasks, _e.g_., [8, 15, 23, 39, 58, 74]. Specifically for 2D pose (2D body keypoints) this has already been done by ViTPose [78]. We take that as a starting point and through careful design and experimen tation, we develop a new version of HMR, which we call HMR 2.0 to acknowledge its antecedent. We use HMR 2.0 to build a system that can simultaneously reconstruct and track humans from videos. We rely on the recent 3D tracking system, PHALP [62], which we simplify and improve using our pose recovery. This system can reconstruct Humans in 4D, which gives the name to our method, 4DHumans. 4DHumans can be deployed on any video and can jointly track and reconstruct people in video. The functionality of creating a tracking entity for every person is fundamental towards analyzing and understanding humans in video. Besides achieving state-of-the-art results for tracking on the PoseTrack dataset [1], we also apply HMR 2.0 on the downstream application of action recognition. We follow the system design of recent work, [60], and we show that the use of HMR 2.0 can achieve impressive improvements upon the state of the art on action recognition on AVA v2.2 dataset. This paper is unabashedly a systems paper. We explored various choices and put together the best combination. Our model will be made publicly available. There is an emerging trend, in computer vision as in natural language processing, of large pretrained models (sometimes also called "foundation models") which find widespread downstream applications and thus justify the scaling effort. 
HMR 2.0 is such a large pre-trained model which could potentially be useful not just in computer vision, but also in robotics [52, 59, 70], computer graphics [73], bio-mechanics, and other fields where analysis of the human figure and its movement from images or videos is needed. Our contributions can be summarized as follows: 1. We propose an end-to-end "transformerized" architecture for human mesh recovery, HMR 2.0. Without relying on domain-specific designs, we outperform existing approaches for 3D body pose reconstruction. 2. Building on HMR 2.0, we design 4DHumans that can jointly reconstruct and track humans in video, achieving state-of-the-art results for tracking. 3. We show that better 3D poses from HMR 2.0 result in better performance on the downstream task of action recognition, finally contributing to the state-of-the-art result (42.3 mAP) on the AVA benchmark. ## 2 Related Work **Human Mesh Recovery from a Single Image.** Although, there have been many approaches that estimate 3D human pose and shape relying on iterative optimization, _e.g_., SMPLify [7] and variants [21, 37, 54, 69, 82], for this analysis we will focus on approaches that directly regress the body shape from a single image input. In this case, the canonical example is HMR [29], which uses a CNN to regress SMPL [44] parameters. Since its introduction, many improvements have been proposed for the original method. Notably, many works have proposed alternative methods for pseudo-ground truth generation, including using temporal information [3], multiple views [38], or iterative optimization [34, 28, 55]. SPIN [34] proposed an in-the-loop optimization that incorporated SMPLify [7] in the HMR training. Here, we also rely on pseudo-ground truth fits for training, and we use [36] for the offline fitting. More recently, there have been works that propose more specialized designs for the HMR architecture. PyMAF [86, 85] incorporates a mesh alignment module for the regression of the SMPL parameters. PARE [33] proposes a body-part-guided attention mechanism for better occlusion handling. HKMR [19] performs a prediction that is informed by the known hierarchical structure of SMPL. HoloPose [22] proposes a pooling strategy that follows the 2D locations of each body joints. Instead, we follow a design without any domain-specific decisions and we show that it outperforms all previous approaches. Many related approaches are making non-parametric predictions, _i.e_., instead of estimating the parameters of the SMPL model, they explicitly regress the vertices of the mesh. GraphCMR [35] uses a graph neural network for the prediction, METRO [41] and FastMETRO [9] use a transformer, while Mesh Graphormer [42] adopts a hybrid between the two. Since we regress the SMPL model parameters, instead of the locations of mesh vertices, we are not directly comparable to these. However, we show how we can use a fully "transformerized" design for HMR. **Human Mesh & Motion Recovery from Video.** To extend Human Mesh Recovery over time, most methods use the basic backbone of HMR [29] and propose designs for the temporal encoder that fuses the per-frame features. HMMR [30] uses a convolutional encoder on features extracted from HMR [29]. VIBE [32], MEVA [47] and TCMR [10] use a recurrent temporal encoder. DSD [68] combines convolutional and self-attention layers, while MAED [72] and t-HMMR [55] employ a transformer-based temporal encoder. Baradel _et al_. 
[5, 4] also used a transformer for temporal pose prediction, while operating directly on SMPL poses. One key limitation of these approaches is that they often operate in scenarios where tracking is simple [30, 87], _e.g_., videos with a single person or minimal occlusions. In contrast to that, our complete 4DHumans approach is also solving the tracking problem. **Tracking People in Video.** Recently, there have been approaches that demonstrate state-of-the-art performance for tracking by relying on 3D human reconstruction from HMR models, _i.e_., T3DP [61] and PHALP [62]. In these methods, every person detection is lifted to 3D using an HMR network [55] and then tracking is performed using the 3D representations from lifting [61] and prediction [62] to track people in video. Empirical results show that PHALP works very well on multiple tracking benchmarks (the main requirement is that the images have enough spatial resolution to permit lifting of the people to 3D). We use these tracking pipelines, and particularly PHALP, as a task to evaluate methods for human mesh recovery. **Action Recognition.** Action recognition is typically performed using appearance features from raw video input. Canonical examples in this category include SlowFast [17] and MViT [15]. Simultaneously, there are approaches that use features extracted from body pose information, _e.g_., Pion [11] and JMRN [65]. A recent approach [60] demonstrates state-of-the-art performance for action recognition by fusing video-based features with features from 3D human pose estimates. We use the pipeline of this approach and employ action recognition as a downstream task to evaluate human mesh recovery methods. ## 3 Reconstructing People ### Preliminaries **Body Model.** The SMPL model [45] is a low-dimensional parametric model of the human body. Given input parameters for pose (\(\theta\in\mathbb{R}^{24\times 3\times 3}\)) and shape (\(\beta\in\mathbb{R}^{10}\)), it outputs a mesh \(M\in\mathbb{R}^{3\times N}\) with \(N=6890\) vertices. The body joints \(X\in\mathbb{R}^{3\times k}\) are defined as a linear combination of the vertices and can be computed as \(X=MW\) with fixed weights \(W\in\mathbb{R}^{N\times k}\). Note that pose parameters \(\theta\) include the body pose parameters \(\theta_{b}\in\mathbb{R}^{23\times 3\times 3}\) and the global orientation \(\theta_{g}\in\mathbb{R}^{3\times 3}\). **Camera.** We use a perspective camera model with fixed focal length and intrinsics \(K\). Each camera \(\pi=(R,t)\) consists of a global orientation \(R\in\mathbb{R}^{3\times 3}\) and translation \(t\in\mathbb{R}^{3}\). Given these parameters, points in the SMPL space (_e.g_., joints \(X\)) can be projected to the image as \(x=\pi(X)=\Pi(K(RX+t))\), where \(\Pi\) is a perspective projection with camera intrinsics \(K\). Since \(\theta\) already includes a global orientation, in practice we assume \(R\) as identity and only predict camera translation \(t\). **HMR.** The goal of the human mesh reconstruction (HMR) task is to learn a predictor \(f(I)\) that given a single image I, reconstructs the person in the image by predicting their 3D pose and shape parameters. Following the typical parametric approaches [29, 34], we model \(f\) to predict \(\Theta=[\theta,\beta,\pi]=f(I)\) where \(\theta\) and \(\beta\) are the SMPL pose and shape parameters and \(\pi\) is the camera translation. ### Architecture We re-imagine HMR [29] as an end-to-end transformer architecture that uses no domain specific design choices. 
Yet, it outperforms all existing approaches that have heavily customized architectures and elaborate design decisions. As shown in Figure 2, we use (i) a ViT [14] to extract image tokens, and (ii) a standard transformer decoder that cross-attends to image tokens to output \(\Theta\). **ViT.** The Vision Transformer, or ViT [14] is a transformer [71] that has been modified to operate on an image. The input image is first patchified into input tokens and passed through the transformer to get output tokens. The output tokens are then passed to the transformer decoder. We use a ViT-H/16, the "Huge" variant with \(16\times 16\) input patch size. Please see SupMat for more detail. **Transformer decoder.** We use a standard transformer decoder [71] with multi-head self-attention. It processes a single (zero) input token by cross-attending to the output image tokens and ends with a linear readout of \(\Theta\). We follow [34] and regress 3D rotations using the representation of [88]. Figure 2: **Overview of our approach. Left: HMR 2.0 is a fully “transformerized” version of a network for Human Mesh Recovery. Right: We use HMR 2.0 as the backbone of our 4DHumans system, that builds on PHALP [62], to jointly reconstruct and track humans in 4D.** ### Losses Following best practices in the HMR literature [29, 34], we train our predictor \(f\) with a combination of 2D losses, 3D losses, and a discriminator. Since we train with a mixture of datasets, each having different kinds of annotations, we employ a subset of these losses for each image in a mini-batch. We use the same losses even with pseudo-ground truth annotations. Given an input image \(I\), the model predicts \(\Theta=[\theta,\beta,\pi]=f(I)\). Whenever we have access to the ground-truth SMPL pose parameters \(\theta^{*}\) and shape parameters \(\beta^{*}\), we bootstrap the model predictions using an MSE loss: \[\mathcal{L}_{\texttt{smpl}}=||\theta-\theta^{*}||_{2}^{2}+||\beta-\beta^{*}||_ {2}^{2}.\] When the image has accurate ground-truth 3D keypoint annotations \(X^{*}\), we additionally supervise the predicted 3D keypoints \(X\) with an L1 loss: \[\mathcal{L}_{\texttt{kp3D}}=||X-X^{*}||_{1}.\] When the image has 2D keypoints annotations \(x^{*}\), we supervise projections of predicted 3D keypoints \(\pi(X)\) using an L1 loss: \[\mathcal{L}_{\texttt{kp2D}}=||\pi(X)-x^{*}||_{1}.\] Furthermore, we want to ensure that our model predicts valid 3D poses and use the adversarial prior in HMR [29]. It factorizes the model parameters into: (i) body pose parameters \(\theta_{b}\), (ii) shape parameters \(\beta\), and (iii) per-part relative rotations \(\theta_{i}\), which is one 3D rotation for each of the 23 joints of the SMPL model. We train a discriminator \(D_{k}\) for each factor of the body model, and the generator loss can be expressed as: \[\mathcal{L}_{\texttt{adv}}=\sum_{k}(D_{k}(\theta_{b},\beta)-1)^{2}.\] ### Pseudo-Ground Truth fitting We scale to unlabelled datasets (_i.e_., InstaVariety [30], AVA [20], AI Challenger [75]) by computing pseudo-ground truth annotations. Given any image, we first use an off-the-shelf detector [39] and a body keypoints estimator [78] to get bounding boxes and corresponding 2D keypoints. We then fit a SMPL mesh to these 2D keypoints using ProHMR [36] to get pseudo-ground truth SMPL parameters \(\theta^{*}\) and \(\beta^{*}\) with camera \(\pi^{*}\). ## 4 Tracking People In videos with multiple people, we need the ability to associate people across time, _i.e_., perform tracking. 
For this we build upon PHALP [62], a state-of-the-art tracker based on features derived from HMR-style 3D reconstructions. The basic idea is to detect people in individual frames, and "lift" them to 3D, extracting their 3D pose, location in 3D space (derived from the estimated camera), and 3D appearance (derived from the texture map). A tracklet representation is incrementally built up for each individual person over time. The recursion step is to predict for each tracklet, the pose, location and appearance of the person in the next frame, all in 3D, and then find best matches between these top-down predictions and the bottom-up detections of people in that frame after lifting them to 3D. The state represented by each tracklet is then updated by the incoming observation, and the process is iterated. It is possible to track through occlusions because the 3D representation of a tracklet continues to be updated based on past history. We believe that a robust pose predictor should also perform well, when evaluated on this downstream task of tracking, so we use the tracking metrics as a proxy to evaluate the quality of 3D reconstructions. But first we needed to modify the PHALP framework to allow for fair comparison of different pose prediction models. Originally, PHALP used pose features based on the last layer of the HMR network, _i.e_., a 2048-dimensional embedding space. This limits the ability of PHALP to be used with different pose models (_e.g_., HMR 2.0, PARE, PyMAF etc.). To create a more generic version of PHALP, we perform the modification of representing pose in terms of SMPL pose parameters, and we accordingly optimize the PHALP cost function to utilize the new pose distance. Similarly, we adapt the pose predictor to operate on the space of SMPL parameters. More specifically, we train a vanilla transformer model [71] by masking random pose tokens as shown in the Fig 3. This allows us to predict future poses in time, as well as amodal completion of missing detections. With these modifications, we can plug in any mesh recovery methods and run them on any videos. We call this modified version PHALP\({}^{\prime}\). Figure 3: **Pose prediction: We train a BERT-style [12] transformer model on over 1 million tracks obtained from [60]. This allow us to make future predictions and amodal completion of missing detections using the same model. To predict future poses (\(t+1\), \(t+2\),...), we query the model with a mask-token using corresponding positional embeddings. Similarly for amodal completion, we replace missing detections with a masked token.** 4DHumans.To track people in videos, previous approaches relied on off-the-shelf tracking approaches and used their output to reconstruct humans in videos (, take the bounding boxes from tracking output and reconstruct people). For example, PHD [87], HMMR [30] can run on videos with only single person in the scene. In this work, we combine reconstruction and tracking into a single system and show that better pose reconstructions result in better tracking and this combined system can now run on any videos in the wild. ## 5 Experiments In this section, we evaluate our reconstruction and tracking system qualitatively and quantitatively. First, we show that HMR 2.0 outperforms previous methods on standard 2D and 3D pose accuracy metrics (Section 5.2). Second, we show 4DHumans is a versatile tracker, achieving state-of-the-art performance (Section 5.3). 
Finally, we further demonstrate the robustness and accuracy of our recovered poses via superior performance on the downstream application of action recognition (Section 5.4). ### Setup **Datasets.** Following previous work, we use the typical datasets for training,, Human3.6M [26], MPI-INF-3DHP [48], COCO [43] and MPII [2]. Additionally, we use InstaVariety [30], AVA [20] and AI Challenger [75] as extra data where we generate pseudo-ground truth fits. **Baselines.** We report performance on benchmarks that we can compare with many previous works (Section 5.2), but we also perform a more detailed comparison with recent state-of-the-art methods,, PyMAF [86], CLIFF [40], HMAR [62] PARE [33], and PyMAF-X [85]. For fairness, we only evaluate the body-only performance of PyMAF-X. ### Pose Accuracy **3D Metrics.** For 3D pose accuracy, we follow the typical protocols of prior work,, [34], and we present results on the 3DPW test split and on the Human3.6M val split, reporting MPJPE, and PA-MPJPE in Table 1. Please notice that we only compare with methods that do not use the training set of 3DPW for training, similar to us. We observe that with our HMR 2.0a model, which trains only on the typical datasets, we can outperform all previous baselines across all metrics. However, we believe that these benchmarks are very saturated and these smaller differences in pose metrics tend to not be very significant. In fact, we observe that by a small compromise of the performance on 3DPW, our HMR 2.0b model, which trains for longer on more data (AVA [20], AI Challenger [75], and InstaVariety [30]), achieves results that perform better on more unusual poses than what can be found in Human3.6M and 3DPW. We observe this qualitatively and from performance evaluated on 2D pose reprojection (Table 2). Furthermore, we observe that HMR 2.0b is a more robust model and use it for evaluation in the rest of the paper. **2D Metrics.** We evaluate 2D image alignment of the generated poses by reporting PCK of reprojected keypoints at different thresholds on LSP-Extended [27], COCO validation set [43], and Posetrack validation set [1]. Since PyMAF(-X) [85, 86] were trained using LSP-Extended, we do not report numbers for that part of the table. Notice in Table 2, that HMR 2.0-b consistently outperforms all previous approaches. On LSP-Extended, which contains unusual poses, HMR 2.0-b achieves [email protected] of 0.54, which is \(2\times\) better than the second best (PARE) with 0.27. 
For [email protected] on easier datasets like COCO and PoseTrack with less extreme poses, HMR 2.0b still outperforms the second-best approaches but by narrower margins of 8% and \begin{table} \begin{tabular}{l l|c c|c c} \hline \hline \multirow{3}{*}{Method} & \multicolumn{3}{c|}{3DPW} & \multicolumn{2}{c}{Human3.6M} \\ \cline{2-6} & \multicolumn{1}{c|}{MPJPE} & PA-MPJPE & MPJPE & PA-MPJPE \\ \hline \multirow{5}{*}{Datasets} & Kanazawa [30] & 116.5 & 72.6 & - & 56.9 \\ & Doersch [13] & - & 74.7 & - & - \\ & Arnab [3] & - & 72.2 & 77.8 & 54.3 \\ & DSD [68] & - & 69.5 & 59.1 & 42.4 \\ & VIBE [32] & 93.5 & 56.5 & 65.9 & 41.5 \\ \hline \multirow{5}{*}{Datasets} & Pavlakos [57] & - & - & - & 75.9 \\ & HMR [29] & 130.0 & 76.7 & 88.0 & 56.8 \\ & NBF [51] & - & - & - & 59.9 \\ & GraphCMR [35] & - & 70.2 & - & 50.1 \\ & HoloPose [22] & - & - & 60.3 & 46.5 \\ & DenseRaC [79] & - & - & 76.8 & 48.0 \\ & SPIN [34] & 96.9 & 59.2 & 62.5 & 41.1 \\ & DecoMR [83] & - & 61.7 & - & 39.3† \\ & DaNet [84] & - & 56.9 & 61.5 & 48.6 \\ & Song [66] & - & 55.9 & - & 56.4 \\ & I2L-MeshNet [50] & 100.0 & 60.0 & 55.7† & 41.1† \\ & HKMR [19] & - & - & 59.6 & 43.2 \\ & PyMAF [86] & 92.8 & 58.9 & 57.7 & 40.5 \\ & PARE [33] & 82.0 & 50.9 & 76.8 & 50.6 \\ & PyMAF-X [85] & 78.0 & 47.1 & 54.2 & 37.2 \\ & HMR 2.0a & 69.8 & 44.4 & 45.3 & 33.8 \\ & HMR 2.0b & 81.4 & 54.5 & 52.6 & 33.4 \\ \hline \hline \end{tabular} \end{table} Table 1: **Reconstructions evaluated in 3D:** Reconstruction errors (in mm) on the 3DPW and Human3.6M datasets. † † denotes the numbers evaluated on non-parametric results. Lower \(\downarrow\) is better. Please see the text for details. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{LSP-Extended} & \multicolumn{2}{c}{COCO} & \multicolumn{2}{c}{PoseTrack} \\ & @0.05 & @0.1 & @0.05 & @0.1 & @0.05 & @0.1 \\ \hline PyMAF [86] & - & - & 0.68 & 0.86 & 0.77 & 0.92 \\ CLIFF [40] & 0.30 & 0.64 & 0.63 & 0.88 & 0.75 & 0.93 \\ PARE [33] & 0.27 & 0.60 & 0.72 & 0.91 & 0.79 & 0.93 \\ PyMAF-X [85] & - & - & 0.79 & 0.93 & 0.85 & 0.95 \\ HMR 2.0a & 0.38 & 0.72 & 0.79 & 0.95 & 0.86 & 0.97 \\ HMR 2.0b & 0.54 & 0.84 & 0.85 & 0.96 & 0.90 & 0.98 \\ \hline \hline \end{tabular} \end{table} Table 2: **Reconstructions evaluated in 2D.** PCK scores of projected keypoints at different thresholds on the LSP-Extended, COCO, and PoseTrack datasets. Higher † is better. 6% respectively. HMR 2.0a also outperforms all baselines, but is worse than HMR 2.0b, especially on harder poses in LSP-Extended. **Qualitative Results.** We show qualitative results of HMR 2.0 in Figure 4. We are robust to extreme poses and partial occlusions. Our reconstructions are well-aligned with the image and are valid when seen from a novel view. Moreover, we compare with our closest competitors in Figure 5. We observe that PyMAF-X and particularly PARE often struggle with more unusual poses, while HMR 2.0 returns more faithful reconstructions. ### Tracking For tracking, we first demonstrate the versatility of the modifications introduced by PHALP\({}^{\prime}\), which allow us to evaluate 3D pose estimators on the downstream task of tracking. Then, we evaluate our complete system, 4DHumans, with respect to the state of the art. **Evaluation Setting.** Following previous work [61, 62], we report results based on IDs (ID switches), MOTA [31], IDF1 [64], and HOTA [46] on the Posetrack validation set using the protocol of [62], with detections from Mask R-CNN [24]. 
**Versatility of PHALP\({}^{\prime}\).** With the modifications of PHALP\({}^{\prime}\), we abandon the model-specific latent space of [62] and instead operate in the SMPL space, which is shared across most mesh recovery systems. This makes PHALP\({}^{\prime}\) more versatile and allows us to plug in different 3D pose estimators and compare them based on their performance on the downstream task of tracking. We perform this comparison in Table 3, where we use pose and location cues from state-of-the-art 3D pose estimators (while still using appearance from HMAR [62]). We observe that HMR 2.0, PARE [33] and PyMAF-X [85] perform the best on the Posetrack dataset, with minor differences between them. Note that tracking is often most susceptible to errors in predicted 3D locations, with body pose having a smaller effect on performance [62]. This means that good tracking performance can indicate robustness to occlusions, so it is helpful to consider this metric, but it is less helpful for distinguishing fine-grained differences in pose. As a result, the competitive results of PARE [33] and PyMAF-X [85] indicate that they handle occlusions gracefully, but their pose estimation might still be less accurate (as observed from Table 2). See also Figure 5 and SupMat for more qualitative comparisons. ### Action Recognition Following [60], we report results for a "pose-only" baseline that predicts action labels using only 3D pose and location estimates. We use this setting to compare our model with baselines on the downstream task of action recognition on the AVA dataset [20]. In [60], the authors train a transformer that takes SMPL poses as input and predicts action labels. Following their setup, we train a separate action classification transformer for each baseline. **Comparisons.** Comparing results in Table 5, we observe that HMR 2.0 outperforms baselines on the different class categories (OM, PI, PM) and overall. It achieves an mAP of 22.3 on the AVA test set, which is 14% better than the second-best baseline. Since accurate action recognition from poses needs fine-grained pose estimation, this is strong evidence that HMR 2.0 predicts more accurate poses than existing approaches. In fact, when combined with appearance features, [60] shows that HMR 2.0 achieves the state-of-the-art result of 42.3 mAP on AVA action recognition, which is 7% better than the second-best of 39.5 mAP. ## 6 Conclusion We study the problem of reconstructing and tracking humans from images and video. First, we propose HMR 2.0, a fully "transformerized" version of a network for the problem of Human Mesh Recovery [29]. HMR 2.0 achieves strong performance on the usual 2D/3D pose metrics, while also acting as the backbone for our improved video tracker. The full system, 4DHumans, jointly reconstructs and tracks people in video and achieves state-of-the-art results for tracking. To further illustrate the benefit of our 3D pose estimator, HMR 2.0, we apply it to the task of action recognition, where we demonstrate strong improvements upon previous pose-based baselines. Our work pushes the boundary of the videos that can be analyzed with techniques for 3D human reconstruction. At the same time, the improved results also demonstrate the type of limitations that need to be addressed in the future.
For example, the use of the SMPL model [44] creates certain limitations, and leveraging improved models would allow us to model hand pose and facial expressions [54], or even capture greater age variation, _e.g_., infants [25] and kids [53, 67]. Moreover, since we consider each person independently, our reconstruction are less successful at capturing the fine-grained nature of people in close proximity, _e.g_., contact [18]. Besides this, our reconstructions "live" in the camera frame, so for proper understanding of the action in a video, we need to consider everyone in a common world coordinate frame, by reasoning about the camera motion too [56, 80, 81]. Finally, lower input resolution can affect the quality of our reconstructions, which could be addressed by more extreme resolution augmentations [77]. Figure 5: **Qualitative comparison of state-of-the-art mesh recovery methods. HMR 2.0 returns more faithful reconstructions for unusual poses compared to the closest competitors, PyMAF-X [85] and PARE [33].** Figure 6: **Qualitative tracking results of 4DHumans. We use head masks (frame number is on the top left). First row: We track people skating on ice with challenging poses and heavy occlusions, in a minute long video without switching identities. Second row: The main person is tracked through multiple interactions with other players. Third row: The person of interest is tracked through long occlusions.** **Acknowledgements** We thank members of the BAIR community for helpful discussions and StabilityAI for their generous compute grant. This work was supported by BAIR/BDD sponsors, ONR MURI (N00014-21-1-2801), and the DARPA MCS program.
2309.12490
Bayesian improved cross entropy method with categorical mixture models
We employ the Bayesian improved cross entropy (BiCE) method for rare event estimation in static networks and choose the categorical mixture as the parametric family to capture the dependence among network components. At each iteration of the BiCE method, the mixture parameters are updated through the weighted maximum a posteriori (MAP) estimate, which mitigates the overfitting issue of the standard improved cross entropy (iCE) method through a novel balanced prior, and we propose a generalized version of the expectation-maximization (EM) algorithm to approximate this weighted MAP estimate. The resulting importance sampling distribution is proved to be unbiased. For choosing a proper number of components $K$ in the mixture, we compute the Bayesian information criterion (BIC) of each candidate $K$ as a by-product of the generalized EM algorithm. The performance of the proposed method is investigated through a simple illustration, a benchmark study, and a practical application. In all these numerical examples, the BiCE method results in an efficient and accurate estimator that significantly outperforms the standard iCE method and the BiCE method with the independent categorical distribution.
Jianpeng Chan, Iason Papaioannou, Daniel Straub
2023-09-21T21:18:32Z
http://arxiv.org/abs/2309.12490v1
# Bayesian improved cross entropy method with categorical mixture models ###### Abstract We employ the Bayesian improved cross entropy (BiCE) method for rare event estimation in static networks and choose the categorical mixture as the parametric family to capture the dependence among network components. At each iteration of the BiCE method, the mixture parameters are updated through the weighted maximum a posteriori (MAP) estimate, which mitigates the overfitting issue of the standard improved cross entropy (iCE) method through a novel balanced prior, and we propose a generalized version of the expectation-maximization (EM) algorithm to approximate this weighted MAP estimate. The resulting importance sampling distribution is proved to be unbiased. For choosing a proper number of components \(K\) in the mixture, we compute the Bayesian information criterion (BIC) of each candidate \(K\) as a by-product of the generalized EM algorithm. The performance of the proposed method is investigated through a simple illustration, a benchmark study, and a practical application. In all these numerical examples, the BiCE method results in an efficient and accurate estimator that significantly outperforms the standard iCE method and the BiCE method with the independent categorical distribution. keywords: network reliability assessment, Bayesian cross entropy method, categorical mixtures, Bayesian information criterion + Footnote †: journal: To be determined ## 1 Introduction In February 2021, three heavy winter storms swept over Texas and triggered one of the worst energy network failures in Texas state history, which soon led to a severe power, food, and water shortage. A conservative estimate of the property damage is over 195 billion US dollars and more than 246 (estimated) people died during this event. These devastating consequences highlight the need for understanding and managing the reliability of infrastructure networks. This requires an effective means for quantifying the probability of survival or, conversely, the probability of failure of network systems. In this context, the network is often simplified as a graph, whose edges or/and nodes are subjected to random failure. The network's performance is therefore a random variable and the probability that the network cannot deliver a certain level of performance is referred to as the failure probability \(p_{f}\). Mathematically, \(p_{f}\) is defined through a performance function, \(g(\cdot)\), which gives the safety margin of the network performance, and through a probabilistic input, \(p_{\mathbf{X}}(\cdot)\), that quantifies the uncertainty of the system state \(\mathbf{X}\triangleq[X_{1},...X_{d},...,X_{D}]^{T}\). \(X_{d}\) represents the state of the \(d\)-th component of the network, either edge or node, and \(D\) is the total number of components. In particular, \(p_{f}\) reads \[p_{f}\triangleq\Pr\{g(\mathbf{X})\leqslant 0\}=\sum_{\mathbf{x}\in\Omega_{\mathbf{X}}} \mathbb{I}\{g(\mathbf{X})\leqslant 0\}p_{\mathbf{X}}(\mathbf{x}), \tag{1}\] where \(\Omega_{\mathbf{X}}\) is the sample space of \(\mathbf{X}\), and \(\mathbb{I}\{\cdot\}\) represents the indicator function. Note that \(\mathbf{X}\) is often discrete in the context of network reliability assessment. Hence, in Eq. (1) the failure probability \(p_{f}\) is written as a summation of the input distribution \(p_{\mathbf{X}}(\cdot)\) over the failure domain \(F\triangleq\{\mathbf{x}\in\Omega_{\mathbf{X}}:g(\mathbf{x})\leqslant 0\}\). 
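To make the definition in Eq. (1) concrete, the following Python sketch evaluates \(p_{f}\) for a toy two-terminal connectivity problem on a five-edge 'bridge' network, both by exhaustive enumeration of all component states and by crude Monte Carlo simulation. The network layout and the component failure probabilities are purely illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy bridge network: nodes s, a, b, t; each edge fails independently.
edges = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
p_fail = np.array([0.1, 0.1, 0.05, 0.1, 0.1])  # illustrative failure probabilities

def connected(x):
    """System survives iff s and t are connected; x[d] = 1 means edge d works."""
    reach = {"s"}
    changed = True
    while changed:
        changed = False
        for d, (u, v) in enumerate(edges):
            if x[d] and ((u in reach) != (v in reach)):
                reach |= {u, v}
                changed = True
    return "t" in reach

# Exact p_f: the summation of Eq. (1) over all 2^5 component states.
p_f_exact = 0.0
for x in itertools.product([0, 1], repeat=len(edges)):
    prob_x = np.prod(np.where(np.array(x) == 1, 1.0 - p_fail, p_fail))
    if not connected(x):          # failure domain: g(x) <= 0
        p_f_exact += prob_x

# Crude Monte Carlo estimate of the same quantity.
N = 100_000
samples = rng.random((N, len(edges))) > p_fail    # True = component survives
p_f_mcs = np.mean([not connected(x) for x in samples])

print(f"exact p_f = {p_f_exact:.5f},   MCS estimate = {p_f_mcs:.5f}")
```

For genuinely rare events, the crude Monte Carlo estimator needs on the order of \(1/p_{f}\) samples to produce a useful estimate, which is what motivates the adaptive importance sampling schemes discussed in the remainder of the paper.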
The static(or time-independent) performance of networks can often be measured by either connectivity or 'flow' [1]. For computer and communication networks, the connection among different parts of the network is of major concern, resulting in three different types of connectivity-based problems, namely the two terminals, \(K\) terminals, and all terminals connectivity problems [2], while for road networks and food supply chains, one is primarily interested in the 'flow' that a network can deliver, e.g., the maximum flow that can be transported from A to B. These flow-based problems involve multi-state (even continuous) components or/and network performance and can often be regarded as an extension of the connectivity-based problems [3]. In Table 1, we summarize three state-of-art methods for solving connectivity/flow-based problems, where CB is short for the counting-based method [4; 5], SSD is for the state-space decomposition [6; 7; 8; 9; 10], and CP is for creation process embedded methods [11; 12; 13; 14; 15; 16; 17; 18]. Other widely used methods include, sum of disjoint products [19], binary decision diagram [20], domination theory [21], and various minimal-cutsets/pathsets-based methods, e.g., [22; 23; 24; 25; 26]. For power grids and water supply systems, the 'flow' is often driven by the physical law (e.g. Kirchhoff's law for power flow) and operation strategies, and the network is not necessarily coherent. Hence, approaches built on the coherency assumption are not directly applicable. A set of methods have been proposed to solve such problems, among which sampling-based methods feature prominently. These include crude Monte Carlo simulation (MCS) [27; 28], subset simulation [1; 29; 30; 31; 32], adaptive importance sampling (IS) [33; 34; 35], and active learning methods [36; 37]. We mainly focus on the static rare event estimation for network performance in this paper, and therefore, methods for time-dependent network reliability estimation such as the probability density evolution method (PDEM) [38] and modern stochastic process methods [3] are not included here. Recently, the authors employed the improved cross entropy method (iCE) for solving network reliability problems and introduced a Bayesian approach to circumvent the overfitting issue of the standard iCE. The proposed method is termed Bayesian iCE (BiCE) [35]. Therein, the parametric model for approximating the optimal IS distribution is an independent categorical distribution and hence does not account for the dependence among components in the optimal IS distribution. This motivates the idea of employing a more flexible categorical mixture as the parametric model within the BiCE method. This parametric model can be updated at each iteration of the BiCE method by the generalized EM algorithm, which is introduced in this paper to approximate the maximum a posteriori (MAP) estimate of the mixture parameters given weighed samples. Note that the EM algorithm for estimating the MAP of a mixture model is well known [39]; herein we develop a modified \begin{table} \begin{tabular}{l l l l} \hline & CB & SSD & CP \\ \hline introduction & [1,2] & [3-7] & [8-13] \\ not suitable for & small comp. failure prob. 
& large scale network & costly \(g(\cdot)\) \\ multi-state extension & unknown & possible & possible \\ coherent system & needed & needed & needed \\ error estimate & user-specific & reliability bound & relative error \\ \hline \end{tabular} \end{table} Table 1: Comparison of different methods for connectivity-based problems version that accounts for the sample weights. The major contribution of this paper is to combine this generalized EM algorithm with the BiCE method for handling a more flexible mixture parametric family. We find that the proposed method, termed BiCE-CM, clearly outperforms the BiCE method with a single independent categorical distribution and provides better results than the standard iCE method. The key ingredient of the proposed method is a balanced Dirichlet prior that does not dominate but can still correct the potentially overfitted weighted MLE in the iCE. A number of components \(K\) in the categorical mixture is chosen adaptively through the Bayesian information criterion (BIC). The paper is organized as follows: In Sec. 2, we summarize the basic ideas of iCE, followed by a brief introduction of the categorical mixture model and its approximated inference techniques in Sec. 3. The BiCE method with a categorical mixture parametric family (BiCE-CM) is introduced in Sec. 4. The efficiency and accuracy of the proposed method are demonstrated by a set of numerical examples in Sec. 5. ## 2 Cross-entropy-based importance sampling In this section, we give a brief introduction to CE-based IS [40]. The basic idea is to choose the IS distribution from a predefined parametric family \(h(\cdot;\mathbf{v})\) that best resembles the optimal IS distribution \[p_{\mathbf{X}}^{*}(\mathbf{x})=\frac{p_{\mathbf{X}}(\mathbf{x})\mathbb{I}\{g(\mathbf{x})\leq 0\}}{p _{f}}=p_{\mathbf{X}}(\mathbf{x}|F). \tag{2}\] The similarity between \(p_{\mathbf{X}}^{*}(\cdot)\) and \(h(\cdot;\mathbf{v})\) is measured by the Kullback-Leibler (KL) divergence that is defined as follows: \[D(p_{\mathbf{X}}^{*}(\cdot),h(\cdot;\mathbf{v})) =\mathbb{E}_{p_{\mathbf{X}}^{*}}\left[\ln\left(\frac{p_{\mathbf{X}}^{*}( \mathbf{X})}{h(\mathbf{X};\mathbf{v})}\right)\right]\] \[=\mathbb{E}_{p_{\mathbf{X}}^{*}}[\ln(p_{\mathbf{X}}^{*}(\mathbf{X}))]- \mathbb{E}_{p_{\mathbf{X}}^{*}}[\ln(h(\mathbf{X};\mathbf{v}))]. \tag{3}\] In other words, the CE method determines the optimal parameter vector \(\mathbf{v}^{*}\) in \(h(\cdot;\mathbf{v})\) through minimizing the KL divergence in Eq. (3), i.e., through solving \[\mathbf{v}^{*} =\operatorname*{arg\,min}_{\mathbf{v}\in\mathcal{V}}D(p_{\mathbf{X}}^{*}( \cdot),h(\cdot;\mathbf{v}))\] \[=\operatorname*{arg\,min}_{\mathbf{v}\in\mathcal{V}}-\mathbb{E}_{p_{ \mathbf{X}}^{*}}[\ln(h(\mathbf{X};\mathbf{v}))]\] \[=\operatorname*{arg\,max}_{\mathbf{v}\in\mathcal{V}}\mathbb{E}_{p_{ \mathbf{X}}}[\mathbb{I}\{g(\mathbf{X})\leq 0\}\ln(h(\mathbf{X};\mathbf{v}))]. \tag{4}\] The problem in Eq. (4) cannot be solved in closed form due to the indicator function inside the expectation, so instead we estimate \(\mathbf{v}^{*}\) through optimizing an alternative objective function that substitutes the expectation in Eq. (4) with an IS estimator. That is, we solve \[\widehat{\mathbf{v}}=\operatorname*{arg\,max}_{\mathbf{v}\in\mathcal{V}}\frac{1}{N} \sum_{i=1}^{N}\frac{p_{\mathbf{X}}(\mathbf{x}_{i})\mathbb{I}\{g(\mathbf{x}_{i})\leq 0\}}{p_{ ref}(\mathbf{x}_{i})}\ln(h(\mathbf{x}_{i};\mathbf{v})),\qquad\mathbf{x}_{i}\sim p_{ref}(\cdot). 
\tag{5}\] \(\{\mathbf{x}_{i}\}_{i=1}^{N}\) are samples from \(p_{ref}(\cdot)\), the IS distribution for estimating the expectation in Eq. (4), which is also known as the reference distribution [40]. Note that \(\widehat{\mathbf{v}}\) can be interpreted as the weighted MLE of the parametric family with weights \(\{w_{i}\propto\frac{p_{\mathbf{X}}(\mathbf{x}_{i})\mathbb{I}\{g(\mathbf{x}_{i})\leq 0\}}{p_{ ref}(\mathbf{x}_{i})}\}_{i=1}^{N}\)[35, 41]. As discussed in [35, 42], one should distinguish the sub-optimal IS distribution \(h(\cdot;\mathbf{v}^{*})\) from the chosen IS distribution \(h(\cdot;\widehat{\mathbf{v}})\) in the CE method. \(h(\cdot;\mathbf{v}^{*})\) is conditional on the predefined parametric family while \(h(\cdot;\widehat{\mathbf{v}})\) additionally depends on the CE procedure, in particular, the choice of the reference distribution \(p_{ref}(\cdot)\) and the number of samples. An appropriate reference distribution leads to an IS distribution \(h(\mathbf{x};\widehat{\mathbf{v}})\) close to \(h(\mathbf{x};\mathbf{v}^{*})\), which is the optimal choice within the given parametric family. For rare event estimation, the reference distribution is chosen in an adaptive way. Let \(p_{\mathbf{X}}^{(t)}(\cdot),t=1,...,T\) denote a sequence of intermediate target distributions that gradually approach the optimal IS distribution \(p_{\mathbf{X}}^{*}(\cdot)\). The CE optimization problem is then solved iteratively for finding a good approximation to each \(t\)-th \(p_{\mathbf{X}}^{(t)}(\cdot)\), and this results in a sequence of CE parameter vectors \(\{\widehat{\mathbf{v}}^{(t)},t=1,...,T\}\) and distributions \(\{h(\cdot;\widehat{\mathbf{v}}^{(t)}),t=1,...,T\}\). The distribution we obtain in the \(t\)-th iteration, i.e., \(h(\cdot;\widehat{\mathbf{v}}^{(t)})\), is used as the reference distribution \(p_{ref}(\cdot)\) for the CE procedure in iteration \(t+1\). In this way, one takes \(h(\cdot;\widehat{\mathbf{v}}^{(T-1)})\) as the reference distribution for Eq. (5), and \(h(\cdot;\widehat{\mathbf{v}}^{(T)})\) as the final IS distribution. For the first iteration, the input distribution \(p_{\mathbf{X}}(\cdot)\) is used as the reference distribution. There are many different ways of designing the intermediate target distributions [40, 43, 44]. For instance, in the iCE method [43], the intermediate target distribution reads \[p_{\mathbf{X}}^{(t)}(\mathbf{x})\triangleq\frac{1}{Z^{(t)}}p_{\mathbf{X}}(\mathbf{x})\Phi\left(- \frac{g(\mathbf{x})}{\sigma^{(t)}}\right),t=1,...,T \tag{6}\] where \(Z^{(t)}\) is the normalizing constant and \(\Phi\) is the cumulative distribution function (CDF) of the standard normal distribution. The distribution sequence is driven by the parameter \(\sigma^{(t)}>0\), and gradually approaches the optimal IS distribution with decreasing \(\sigma^{(t)}\). The CE optimization problem for Eq. (6) reads \[\mathbf{v}^{(t,*)}=\operatorname*{arg\,max}_{\mathbf{v}\in\mathcal{V}}\mathbb{E}_{p_{ \mathbf{X}}}[\Phi(-g(\mathbf{X})/\sigma^{(t)})\ln(h(\mathbf{X};\mathbf{v}))], \tag{7}\] and the sample counterpart of Eq. (7) can be written as \[\widehat{\mathbf{v}}^{(t)}=\operatorname*{arg\,max}_{\mathbf{v}\in \mathcal{V}}\frac{1}{N}\sum_{i=1}^{N}W(\mathbf{x}_{i})\ln(h(\mathbf{x}_{i};\mathbf{v})), \mathbf{x}_{i}\sim h(\cdot;\widehat{\mathbf{v}}^{(t-1)}) \tag{8}\] \[W(\mathbf{x}_{i})\triangleq \frac{p_{\mathbf{X}}(\mathbf{x}_{i})\Phi(-g(\mathbf{x}_{i})/\sigma^{(t)})}{ h(\mathbf{x}_{i};\widehat{\mathbf{v}}^{(t-1)})}. 
\tag{9}\] Note that \(\widehat{\mathbf{v}}^{(t)}\) is the weighted maximum likelihood estimation (MLE) of \(\mathbf{v}^{(t,*)}\), and for a properly reparameterized exponential family, \(\widehat{\mathbf{v}}^{(t)}\) is also the self-normalized IS estimator of \(\mathbf{v}^{(t,*)}\)[35]. The accuracy of \(\widehat{\mathbf{v}}^{(t)}\) can be measured by the effective sample size (ESS), which is defined as the equivalent sample size required by MCS with the current target distribution to achieve the same variance as the self-normalized IS. The ESS of \(\widehat{\mathbf{v}}^{(t)}\) in Eq. (8) can be approximated by [45] \[ESS\approx\frac{N}{1+\widehat{\delta}^{2}(\{W(\mathbf{x}_{i})\}_{i=1}^{N})},\quad \mathbf{x}_{i}\sim h(\cdot;\widehat{\mathbf{v}}^{(t-1)}) \tag{10}\] where \(\widehat{\delta}(\{W(\mathbf{x}_{i})\}_{i=1}^{N})\) represents the sample coefficient of variation (c.o.v.) of the weights vector \(\{W(\mathbf{x}_{i})\}_{i=1}^{N}\). Although the categorical mixture employed in this paper does not belong to the exponential family, we still expect that a large ESS will generally lead to a more accurate \(\widehat{\mathbf{v}}^{(t)}\). Given the reference distribution \(h(\mathbf{x}_{i};\widehat{\mathbf{v}}^{(t-1)})\), the iCE method fixes \(N\) and changes \(\sigma^{(t)}\) for achieving a constant ESS, and hence an accurate \(\widehat{\mathbf{v}}^{(t)}\). Specifically, the intermediate target distribution in the iCE method is adapted at each \(t\)-th iteration by solving \[\sigma^{(t)}=\operatorname*{arg\,min}_{\sigma\in(0,\sigma^{(t-1)})}|\widehat{ \delta}\left(\{W(\mathbf{x}_{i};\sigma)\}_{i=1}^{N}\right)-\delta_{tar}|,\qquad\mathbf{ x}_{i}\sim h(\cdot;\widehat{\mathbf{v}}^{(t-1)}), \tag{11}\] where \(\widehat{\delta}(\cdot)\) represents the sample c.o.v. of a vector and \(\delta_{tar}\) is the hyperparameter that influences the convergence rate of the intermediate target distributions. A common choice is \(\delta_{tar}=1.5\). The above procedure is iterated until \[\widehat{\delta}\left(\left\{\frac{\mathbb{I}\{g(\mathbf{x}_{i})\leq 0\}}{\Phi(-g( \mathbf{x}_{i})/\sigma^{(t)})}\right\}_{i=1}^{N}\right)\leq\delta_{\epsilon}, \qquad\mathbf{x}_{i}\sim h(\cdot;\widehat{\mathbf{v}}^{(t)}). \tag{12}\] where \(\delta_{\epsilon}\) is another hyperparameter and is often chosen to be the same as \(\delta_{tar}\)[43]. It should be stressed that the standard iCE method may suffer from overfitting when the sample size is small. To mitigate this issue, the BiCE method [35] substitutes the weighted MLE with its Bayesian counterpart; therein the posterior predictive distribution is employed to update a single categorical parametric model in the context of network reliability assessment. In addition, the BiCE method employs an alternative weight function for solving \(\sigma^{(t)}\) through Eq. (11), which is defined as \[W^{(alt)}(\mathbf{x};\sigma)\triangleq\frac{\Phi(-g(\mathbf{x})/\sigma)}{\Phi(-g(\mathbf{x })/\sigma^{(t-1)})}. \tag{13}\] For a more detailed discussion and theoretical justification of Eq. (13), we refer to [46] and [35]. In this paper, we consider a more flexible parametric model, the categorical mixture, in the BiCE method. Before introducing the proposed CE approach, we first give an introduction to the categorical mixture model and its associated inference techniques in the following section. 
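To illustrate the adaptive choice of \(\sigma^{(t)}\), the sketch below shows one way to implement the ESS approximation of Eq. (10) and the smoothing-parameter update of Eq. (11) using the alternative weights of Eq. (13). The bounded scalar optimizer and the use of log-CDFs for numerical stability are implementation choices and not part of the method itself.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def effective_sample_size(w):
    """ESS approximation of Eq. (10) from a vector of weights."""
    cov = np.std(w, ddof=1) / np.mean(w)
    return len(w) / (1.0 + cov ** 2)

def choose_sigma(g_vals, sigma_prev, delta_tar=1.5):
    """Solve Eq. (11): find sigma in (0, sigma_prev) whose sample c.o.v. of the
    alternative weights W_alt = Phi(-g/sigma) / Phi(-g/sigma_prev) (Eq. (13))
    is closest to the target delta_tar."""
    log_den = norm.logcdf(-g_vals / sigma_prev)

    def objective(sigma):
        w = np.exp(norm.logcdf(-g_vals / sigma) - log_den)
        cov = np.std(w, ddof=1) / np.mean(w)
        return abs(cov - delta_tar)

    res = minimize_scalar(objective, bounds=(1e-6, sigma_prev), method="bounded")
    return res.x
```

In the first iteration, where the samples come from the input distribution itself, \(\sigma^{(0)}\) can be taken very large, in which case the denominator of the alternative weight is (nearly) the same constant for all samples and does not affect the coefficient of variation.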
## 3 The categorical mixture model The categorical mixture model can be defined as: \[h_{cm}(\mathbf{x};\mathbf{\eta})=\sum_{k=1}^{K}\alpha_{k}h_{c}(\mathbf{x};\mathbf{\theta}_{k}) =\sum_{k=1}^{K}\alpha_{k}\prod_{d=1}^{D}\prod_{j=1}^{n_{d}}\theta_{k,d,j}^{ \mathbb{I}\{x_{d}=s_{d,j}\}}. \tag{14}\] The probability distribution \(h_{cm}(\cdot;\mathbf{\eta})\) is modelled as a linear combination of \(K\) independent categorical components, denoted here as \(h_{c}(\cdot;\mathbf{\theta})\). In this paper, \(h_{c}(\cdot;\mathbf{\theta})\) denotes the independent categorical distribution with parameters \(\mathbf{\theta}\). Specifically, in the \(k\)-th mixture component, the probability that the \(d\)-th component \(X_{d}\) takes the \(j\)-th state \(s_{d,j}\) is \(\theta_{k,d,j}\), where \(k=1,...,K;d=1,...,D;j=1,...,n_{d}\). \(D\) and \(n_{d}\) denote the number of input random variables \(X_{d}\) and the number of states for each \(X_{d}\). \(\alpha_{k},k=1,...K\), are the non-negative mixture weights that sum to one. All model parameters are collected in the vector \(\mathbf{\eta}\), i.e., \(\mathbf{\eta}\triangleq\{\alpha_{k},\mathbf{\theta}_{k})\}_{k=1}^{K}\). The mixture model described in Eq. (14) is invariant with respect to the permutation of the component labels. As a result, the parameter estimation is unidentifiable [47]. Additionally, Eq. (14) remains invariant also (1) when adding a mixture component with zero weight, or (2) when replicating any of the mixture components and splitting the associated weight [47], which leads to a broader class of unidentifiability of the model parameters [48]. ### MLE of the categorical mixture and EM algorithm Suppose we want to fit a categorical mixture described in Eq. (14) with \(N\) samples, \(\mathcal{X}\triangleq\{\mathbf{x}_{i}\}_{i=1}^{N}\), and consider the case where the number of mixture components is known to be \(K\). The most common approach is through MLE. The log-likelihood is \[\ln\mathcal{L}(\mathbf{\eta};\mathcal{X})\triangleq\ln\left(\prod_{i=1}^{N}h_{cm}( \mathbf{x}_{i};\mathbf{\eta})\right)=\sum_{i=1}^{N}\ln\left(\sum_{k=1}^{K}\alpha_{k}h_ {c}(\mathbf{x}_{i};\mathbf{\theta}_{k})\right). \tag{15}\] The MLE for the categorical mixture cannot be obtained in closed form. If one observes the allocation variable \(z_{i}\) for each \(i\)-th sample \(\mathbf{x}_{i}\), the log-likelihood function in Eq. (15) takes the following form: \[\ln\mathcal{L}^{(c)}(\mathbf{\eta};\mathcal{X})=\sum_{i=1}^{N}\ln\left(\alpha_{z_ {i}}h_{c}(\mathbf{x}_{i};\mathbf{\theta}_{z_{i}})\right)=\sum_{k=1}^{K}\sum_{i\in \mathcal{C}_{k}}\ln\left(\alpha_{k}h_{c}(\mathbf{x}_{i};\mathbf{\theta}_{k})\right). \tag{16}\] The allocation variable \(z_{i}\) specifies which mixture component generates \(\mathbf{x}_{i}\), and \(\mathcal{C}_{k}\triangleq\{i:i=1,...,N,z_{i}=k\}\) collects the indexes of all the samples generated by the \(k\)-th component of the mixture. Eq. (16) is often termed the complete data log-likelihood in the context of MLE to differentiate it from the log-likelihood in Eq. (15). Maximizing Eq. (16), is equivalent to fitting a categorical distribution \(h_{c}(\cdot;\mathbf{\theta}_{k})\) for each \(\mathcal{C}_{k}\) and letting the associated weight \(\alpha_{k}\) be proportional to \(|\mathcal{C}_{k}|\), the number of samples in \(\mathcal{C}_{k}\). Note that the closed-form solution to the MLE is well known for the single categorical distribution. 
However, the allocation variables \(\{z_{i}\}_{i=1}^{N}\) are not observed; they are latent variables. One approach is to estimate the latent variables through a clustering algorithm. However, clustering of categorical data is usually not straightforward, especially in the high-dimensional sample space. For finding a mode of the log-likelihood function shown in Eq. (15), one usually resorts to the EM algorithm, which iteratively updates and optimizes the so-called \(Q\) function, an auxiliary function that computes the expectation of the complete data log-likelihood in Eq. (16). That is, \[Q(\boldsymbol{\eta};\{p_{Z_{i}}(\cdot)\}_{i=1}^{N}) =\sum_{i=1}^{N}\mathbb{E}_{Z_{i}\sim p_{Z_{i}}(\cdot)}\left[\ln \left(\alpha_{Z_{i}}h_{c}(\boldsymbol{x}_{i};\boldsymbol{\theta}_{Z_{i}}) \right)\right]\] \[=\sum_{i=1}^{N}\sum_{k=1}^{K}p_{Z_{i}}(k)\ln\left(\alpha_{k}h_{c} (\boldsymbol{x}_{i};\boldsymbol{\theta}_{k})\right), \tag{17}\] where \(p_{Z_{i}}(\cdot)\) is a customary distribution for the \(i\)-th allocation variable \(Z_{i}\). \(p_{Z_{i}}(k)\) represents the probability that the \(i\)-th sample is generated by the \(k\)-th component of the mixture. Note that \(p_{Z_{i}}(\cdot)\) can be an arbitrary distribution without necessarily being related to \(\boldsymbol{\eta}\). According to Jensen's inequality, the log-likelihood function \(\ln\mathcal{L}(\boldsymbol{\eta};\mathcal{X})\) in Eq. (15) is bounded from below by the \(Q\) function plus a constant [39]. That is \[\ln\mathcal{L}(\boldsymbol{\eta};\mathcal{X}) =\sum_{i=1}^{N}\ln\left(\sum_{k=1}^{K}p_{Z_{i}}(k)\frac{\alpha_{ k}h_{c}(\boldsymbol{x}_{i};\boldsymbol{\theta}_{k})}{p_{Z_{i}}(k)}\right)\] \[\geqslant\sum_{i=1}^{N}\left(\sum_{k=1}^{K}p_{Z_{i}}(k)\ln\frac{ \alpha_{k}h_{c}(\boldsymbol{x}_{i};\boldsymbol{\theta}_{k})}{p_{Z_{i}}(k)}\right)\] \[=Q(\boldsymbol{\eta};\{p_{Z_{i}}(\cdot)\}_{i=1}^{N})+\sum_{i=1}^ {N}\mathbb{H}(p_{Z_{i}}(\cdot)). \tag{18}\] \(\mathbb{H}(p_{Z_{i}}(\cdot))\triangleq\sum_{k=1}^{K}-p_{Z_{i}}(k)\ln(p_{Z_{i} }(k))\geqslant 0\) is the entropy of the distribution \(p_{Z_{i}}(\cdot)\) and is a constant with respect to \(\boldsymbol{\eta}\). The inequality (18) takes the equal sign if \[p_{Z_{i}}(k)=\frac{\alpha_{k}h_{c}(\boldsymbol{x}_{i};\boldsymbol{\theta}_{k} )}{\sum_{k^{\prime}=1}^{N}\alpha_{k^{\prime}}h_{c}(\boldsymbol{x}_{i}; \boldsymbol{\theta}_{k^{\prime}})}\triangleq\gamma_{i,k}(\boldsymbol{\eta}) \tag{19}\] holds for each \(k=1,...,K\) and \(i=1,...,N\). \([\gamma_{i,k}(\mathbf{\eta})]_{N\times K}\) is also termed the responsibility matrix in the literature [39]. Eq. (19) indicates that, for any given \(\mathbf{\eta}\) denoted as \(\mathbf{\eta}^{(cur)}\), one can choose \(p_{Z_{i}}(\cdot)=\gamma_{i,\cdot}(\mathbf{\eta}^{(cur)})\) for each \(Z_{i}\), such that \(\ln\mathcal{L}(\mathbf{\eta}^{(cur)};\mathcal{X})=Q\left(\mathbf{\eta}^{(cur)};\mathbf{ \eta}^{(cur)}\right)+C(\mathbf{\eta}^{(cur)})\), where \(Q\left(\mathbf{\eta}^{(cur)};\mathbf{\eta}^{(cur)}\right)\) is short for \(Q\left(\mathbf{\eta}^{(cur)};\{\gamma_{i,\cdot}(\mathbf{\eta}^{(cur)})\}_{i=1}^{N}\right)\) and \(C(\mathbf{\eta}^{(cur)})\triangleq\sum_{i=1}^{N}\mathbb{H}(\gamma_{i,\cdot}(\mathbf{ \eta}^{(cur)}))\). This is also known as the expectation step (E step) of the EM algorithm, in which we compute the responsibility matrix \([\gamma_{i,k}(\mathbf{\eta}^{(cur)})]_{N\times K}\) via Eq. (19) and formulate the \(Q\) function. 
In the next step, the maximization step or the M step for short, the EM algorithm fixes \(p_{Z_{i}}(k)=\gamma_{i,k}(\mathbf{\eta}^{(cur)})\) for each \(i\) and \(k\) and maximizes the \(Q\) function over \(\mathbf{\eta}\) to find a new \(\mathbf{\eta}\) denoted as \(\mathbf{\eta}^{(nxt)}\) whose \(Q\) function is larger than that of \(\mathbf{\eta}^{(cur)}\), i.e., \(Q\left(\mathbf{\eta}^{(nxt)};\mathbf{\eta}^{(cur)}\right)\geqslant Q\left(\mathbf{\eta}^{ (cur)};\mathbf{\eta}^{(cur)}\right)\). Since the \(Q\) function (plus a constant) is a lower bound of the log-likelihood as shown in Inequality(18), the log-likelihood of \(\mathbf{\eta}^{(nxt)}\) is also larger than that of \(\mathbf{\eta}^{(cur)}\). In fact, we have \(\ln\mathcal{L}(\mathbf{\eta}^{(nxt)};\mathcal{X})\geqslant Q\left(\mathbf{\eta}^{(nxt )};\mathbf{\eta}^{(cur)}\right)+C(\mathbf{\eta}^{(cur)})\geqslant Q\left(\mathbf{\eta}^{ (cur)};\mathbf{\eta}^{(cur)}\right)+C(\mathbf{\eta}^{(cur)})=\ln\mathcal{L}(\mathbf{\eta}^ {(cur)};\mathcal{X})\). The point here is that optimizing the \(Q\) function is much easier than optimizing the log-likelihood function in Eq. (15). Specifically, the M step solves the following optimization problem: \[\mathbf{\eta}^{(nxt)} =\operatorname*{arg\,max}_{\mathbf{\eta}}Q(\mathbf{\eta};\mathbf{\eta}^{(cur)})\] \[=\operatorname*{arg\,max}_{\mathbf{\eta}}\sum_{i=1}^{N}\sum_{k=1}^{K} \gamma_{i,k}(\mathbf{\eta}^{(cur)})\ln\left(\alpha_{k}h_{c}(\mathbf{x}_{i};\mathbf{\theta }_{k})\right). \tag{20}\] For the categorical mixture shown in Eq. (14), the closed-form solution \(\mathbf{\eta}^{(nxt)}=\{\alpha_{k}^{(nxt)},\mathbf{\theta}_{k}^{(nxt)}\}_{k=1}^{K}\) to the optimization problem in Eq. (20) exists and is given by: \[\alpha_{k}^{(nxt)} =\frac{\sum_{i=1}^{N}\gamma_{i,k}(\mathbf{\eta}^{(cur)})}{\sum_{k=1}^ {K}\sum_{i=1}^{N}\gamma_{i,k}(\mathbf{\eta}^{(cur)})}, \tag{21}\] \[\theta_{k,d,j}^{(nxt)} =\frac{\sum_{i=1}^{N}\gamma_{i,k}(\mathbf{\eta}^{(cur)})\mathbb{I}\{ x_{i,d}=s_{d,j}\}}{\sum_{i=1}^{N}\gamma_{i,k}(\mathbf{\eta}^{(cur)})}. \tag{22}\] Note that if there is no sample equal to \(s_{d,j}\), the probability assigned to that state, i.e., \(\theta_{k,d,j}^{(t+1)}\), will become zero in each \(k\)-th mixture component, and this can lead to overfitting, as will be shown later in Sec. 4.1. Through iterating the above two steps by setting \(\mathbf{\eta}^{(cur)}=\mathbf{\eta}^{(nxt)}\), one ends up with a sequence of model parameters, \(\mathbf{\eta}^{(0)},\mathbf{\eta}^{(1)},...,\mathbf{\eta}^{(T)}\), that gradually improves the log-likelihood function. Although this does not strictly imply the convergence of the EM algorithm to a local maximum, usually this is the case. \(\mathbf{\eta}^{(0)}\) represents an initial guess of the model parameters. Given the sample set and the stopping criteria, the final estimate of the model parameters only relates to the choice of \(\mathbf{\eta}^{(0)}\). A common strategy for getting an appropriate starting point is to first launch several short pilot runs of the EM algorithm, each with a different initialization, and then to choose the starting point for which the log-likelihood is the largest. It is noted that the EM algorithm can also start from the M step instead of the E step, which requires an initial guess of the \(p_{Z_{i}}(\cdot)\) for each \(Z_{i}\). 
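For concreteness, one EM iteration combining the E step of Eq. (19) with the closed-form M step of Eqs. (21) and (22) can be sketched as follows. This is an illustrative implementation we add here, not the authors' reference code; it assumes integer-coded samples and, for brevity, that all \(D\) variables share the same number of states \(n\).

```python
import numpy as np

def em_step(X, alpha, theta):
    """One EM iteration for the categorical mixture (Eqs. 19, 21, 22).

    X     : (N, D) integer array, X[i, d] in {0, ..., n-1}
    alpha : (K,) mixture weights
    theta : (K, D, n) array, theta[k, d, j] = P(X_d = j | component k)
    """
    N, D = X.shape
    K, _, n = theta.shape
    # E step: responsibilities gamma_{i,k} of Eq. (19), computed in log space
    log_resp = np.tile(np.log(alpha), (N, 1))                # (N, K)
    for d in range(D):
        log_resp += np.log(theta[:, d, :][:, X[:, d]]).T     # add log theta_{k,d,x_{i,d}}
    log_resp -= log_resp.max(axis=1, keepdims=True)
    gamma = np.exp(log_resp)
    gamma /= gamma.sum(axis=1, keepdims=True)
    # M step: closed-form updates of Eqs. (21) and (22)
    Nk = gamma.sum(axis=0)                                   # effective count per component
    alpha_new = Nk / Nk.sum()
    theta_new = np.zeros_like(theta)
    for d in range(D):
        onehot = np.eye(n)[X[:, d]]                          # (N, n), I{x_{i,d} = s_{d,j}}
        theta_new[:, d, :] = (gamma.T @ onehot) / Nk[:, None]
    return alpha_new, theta_new, gamma
```

Iterating this step from an initial guess reproduces the parameter sequence \(\mathbf{\eta}^{(0)},\mathbf{\eta}^{(1)},...,\mathbf{\eta}^{(T)}\) discussed above; the zero count issue mentioned in the text shows up whenever a column of `onehot` sums to zero.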
### Bayesian inference In the following, we adopt the Bayesian viewpoint to the inference of mixture models with \(K\) components and interpret the model parameters as random variables, \(\mathbf{E}\), whose prior distribution is denoted as \(p_{\mathbf{E}}(\mathbf{\eta})\). The posterior distribution of parameters \(\mathbf{E}\) given \(\mathcal{X}\) is given by Bayes' rule as \[p_{\mathbf{E}|\mathcal{X}}(\mathbf{\eta}|\mathcal{X})=\frac{\mathcal{L}(\mathbf{\eta}| \mathcal{X})\cdot p_{\mathbf{E}}(\mathbf{\eta})}{p_{\mathcal{X}}(\mathcal{X})}. \tag{23}\] The resulting predictive distribution reads \[p_{\mathbf{X}|\mathcal{X}}(\mathbf{x}|\mathcal{X})=\int_{\Omega_{\mathbf{E}}}h_{cm}(\mathbf{x} |\mathbf{\eta})\cdot p_{\mathbf{E}|\mathcal{X}}(\mathbf{\eta}|\mathcal{X})d\mathbf{\eta}, \tag{24}\] which is an expectation of the mixture model with respect to the posterior distribution of model parameters. \(\Omega_{\mathbf{E}}\) represents the sample space of \(\mathbf{E}\). The posterior distribution, and hence also the predictive distribution, is not analytically tractable. Instead, the posterior distribution can be approximated through MCMC sampling, \[p_{\mathbf{E}|\mathcal{X}}(\mathbf{\eta}|\mathcal{X})\approx\frac{1}{N_{p}}\sum_{i=1} ^{N_{p}}\delta(\mathbf{\eta}-\mathbf{\eta}_{i}), \tag{25}\] where \(\delta(\cdot)\) is the Dirac delta function and \(\{\mathbf{\eta}_{i}\}_{i=1}^{N_{p}}\) denotes the posterior samples. In this way, the predictive distribution is a mixture of mixtures consisting of a total of \(N_{p}\cdot K\) mixture components. The computational cost of computing and sampling from this approximate predictive distribution is roughly \(N_{p}\) times the cost for a \(K\)-component mixture, and \(N_{p}\) is often large, say thousands. Therefore in this paper, we resort to a single point estimate of the model parameter, namely the MAP estimate \(\widetilde{\mathbf{\eta}}\), for which the posterior distribution \(p_{\mathbf{E}|\mathcal{X}}(\mathbf{\eta}|\mathcal{X})\) is maximized. Another benefit of using the MAP is that it can be obtained directly from the EM algorithm [39], which is significantly cheaper than running an MCMC algorithm. The derivative of the EM algorithm for computing the MAP estimate follows the same lines as for the MLE, with a minor modification to account for the prior. Specifically, a log-prior distribution \(\ln(p_{\mathbf{E}}(\mathbf{\eta}))\) is added to the original \(Q\) function in Eq. (17), and the EM algorithm proceeds iteratively with the following two steps: (1) E step: compute the distribution of the allocation variables \(\mathbf{Z}\) through Eq. (19). (2) M step: update the model parameters through maximizing a modified \(Q\) function, i.e., \[\mathbf{\eta}^{(nxt)}=\operatorname*{arg\,max}_{\mathbf{\eta}}\sum_{i=1}^{N}\sum_{k=1} ^{K}\gamma_{i,k}(\mathbf{\eta}^{(cur)})\ln\left(\alpha_{k}h_{c}(\mathbf{x}_{i}|\mathbf{ \theta}_{k})\right)+\ln(p_{\mathbf{E}}(\mathbf{\eta})). \tag{26}\] In particular, for a conjugate prior distribution \(p_{\mathbf{E}}(\mathbf{\eta})\), a closed-form updating scheme can be derived for the categorical mixture parameters. ### Model selection and BIC In this subsection, we discuss how to select the number of components \(K\) in the mixture model \(h_{cm}(\cdot;\mathbf{\eta})\) using the information provided by the samples \(\mathcal{X}\triangleq\{\mathbf{x}_{i}\}_{i=1}^{N}\). 
Let the initial pool of candidate models be \(\{\mathcal{M}_{K}\}_{k=1}^{K_{max}}\) where \(\mathcal{M}_{K}\) refers to a mixture of \(K\) independent categorical components and \(K_{max}\) is a hyperparameter representing the maximum number of mixture components. From a Bayesian perspective, we favor the model \(\mathcal{M}_{\widetilde{K}}\) with the highest posterior probability, or equivalently with the highest log-posterior. That is \[\widetilde{K} =\operatorname*{arg\,max}_{K}\;\ln p_{\mathcal{M}|\mathcal{X}}( \mathcal{M}_{K}|\mathcal{X})\] \[=\operatorname*{arg\,max}_{K}\;\ln\mathcal{L}(\mathcal{M}_{K}| \mathcal{X})+\ln p_{\mathcal{M}}(\mathcal{M}_{K})\] \[=\operatorname*{arg\,max}_{K}\;\ln\left(\int_{\Omega_{\mathbf{E}}} \mathcal{L}(\mathbf{\eta}|\mathcal{X},\mathcal{M}_{K})p_{\mathbf{E}|\mathcal{M}}(\bm {\eta}|\mathcal{M}_{K})d\mathbf{\eta}\right)+\ln p_{\mathcal{M}}(\mathcal{M}_{K}). \tag{27}\] Here, \(p_{\mathcal{M}}(\mathcal{M}_{K})\) represents the prior probability for each \(k\)-th candidate model, and it is often assumed to be uniformly distributed among all candidates. \(\mathcal{L}(\mathcal{M}_{K}|\mathcal{X})\) denotes the integrated likelihood, or the marginal likelihood, and is the integral of the likelihood function \(\mathcal{L}(\mathbf{\eta}|\mathcal{X},\mathcal{M}_{K})\) multiplied by the parameter prior distribution \(p_{\mathbf{E}|\mathcal{M}}(\mathbf{\eta}|\mathcal{M}_{K})\) over the whole sample space of the parameters \(\Omega_{\mathbf{E}}\). Note that this is actually the normalizing constant of the posterior distribution of the parameters in \(\mathcal{M}_{K}\), i.e., \(p_{\mathbf{E}|\mathcal{X},\mathcal{M}}(\mathbf{\eta}|\mathcal{X},\mathcal{M}_{K})\). Computing the integrated likelihood involves a high dimensional integration whose closed-form solution is not available. Nevertheless, it can be approximated through various sampling-based methods [49; 50; 51]. These methods often rely on computationally expensive MCMC algorithms and are limited to a small \(K\), for example, up to 6 [47]. The Bayesian information criterion (BIC) serves as a crude but computationally cheap proxy of the log-posterior probability when \(p_{\mathcal{M}}(\mathcal{M}_{K})\propto 1\). BIC was first introduced by Schwarz [52] for asymptotically approximating the log-posterior probability of a linear model given observations \(\mathcal{X}\) from a regular exponential family (see the definition in [52]); therein the BIC is defined as \(\ln\mathcal{L}(\widehat{\mathbf{\eta}}|\mathcal{X},\mathcal{M}))-\frac{\dim( \mathcal{M})\ln(N)}{2}\), where \(\ln\mathcal{L}(\widehat{\mathbf{\eta}}|\mathcal{X},\mathcal{M})\) represents the mode of the log-likelihood function evaluated at the MLE point \(\widehat{\mathbf{\eta}}\), and \(\dim(\mathcal{M})\) denotes the number of free parameters in \(\mathcal{M}\). Another commonly used definition is given by \[\text{BIC}(\mathcal{M})\triangleq-2\ln\mathcal{L}(\widehat{\mathbf{\eta}}| \mathcal{X},\mathcal{M})+\dim(\mathcal{M})\ln(N). \tag{28}\] Note that under the definition of Eq. (28), the model with the smallest BIC is favored. The derivation of the BIC relies on the Laplace approximation to the likelihood function \(\mathcal{L}(\mathbf{\eta}|\mathcal{X},\mathcal{M})\), which does not apply to multi-modal posterior distributions, and thus BIC cannot be interpreted as a meaningful approximation to the log-posterior of a mixture model. 
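To make Eq. (28) concrete for the categorical mixture, the BIC can be evaluated as in the sketch below (our own illustration; the parameter count assumes the usual parameterization with \(K-1\) free mixture weights and \(n_{d}-1\) free probabilities per variable and component).

```python
import numpy as np

def bic_categorical_mixture(loglik_hat, K, n_states, N):
    """BIC of Eq. (28) for a K-component mixture of independent categoricals.

    loglik_hat : maximized log-likelihood ln L(eta_hat | X, M)
    K          : number of mixture components
    n_states   : list [n_1, ..., n_D] of state counts per variable
    N          : sample size
    """
    # free parameters: (K - 1) weights + K * sum_d (n_d - 1) categorical probabilities
    dim_M = (K - 1) + K * sum(n_d - 1 for n_d in n_states)
    return -2.0 * loglik_hat + dim_M * np.log(N)
```

The limitation discussed above, namely that the Laplace-type argument behind the BIC does not apply to multi-modal mixture posteriors, of course carries over to this computation.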
In spite of this, BIC remains one of the state-of-art techniques for selecting the number of mixture components in practice[53; 54; 47; 55]. Additionally, BIC can be computed directly as a by-product of the EM algorithm without employing any computationally expensive MCMC algorithm. Therefore, throughout this paper, we adopt the BIC as the model selection technique. ## 4 Bayesian improved cross entropy method with the categorical mixture model In this section, we introduce the Bayesian iCE method with the categorical mixture model for network reliability analysis. With slight abuse of notation, we omit the subscript for all prior and posterior distributions, and use, e.g., \(p(\mathbf{\eta})\) to represent \(p_{\mathbf{E}}(\mathbf{\eta})\). ### Motivation As mentioned in Sec. 2, the 'distance' between the optimal IS distribution and the suboptimal IS distribution is only related to the chosen parametric model. For a fixed parametric model, the 'distance' remains fixed assuming that the CE optimization problem is solved exactly. An inappropriate parametric model will lead to an IS estimator with large variance in the final level of CE-based methods. In particular, this can happen when approximating an optimal IS distribution that implies a strong dependence between component states with the independent categorical model. To account for the dependence between the component states, one could use a dependent categorical distribution. However, it is not straightforward to choose an appropriate dependence structure that is both easy to sample from and convenient to update. Instead, we consider the mixture of independent categorical distributions as the parametric model. The flexibility of this mixture model enables capturing arbitrary dependencies between variables in the optimal IS distribution. In the CE-based IS, the parametric model is updated by maximizing a weighted log-likelihood function as shown in Eq. (8). Therefore, techniques for MLE can also be used in the CE-based methods with minor modifications to account for the weights. For instance, Geyer et.al., [41] used the EM algorithm for updating a Gaussian mixture model in the CE method. They found that the Gaussian mixture model performs consistently worse than the single Gaussian model especially when the sample size is small. The reason is that the EM algorithm tends to overfit the weighted samples and hence it is more sensitive to sample sets that misrepresent the target distribution. This can happen when the geometry/shape of the intermediate target distributions changes significantly in CE-based methods, which results in one or more modes of the target distribution being missing or cannot be sufficiently reflected by the weighted samples. The overfitting issue is even more severe for updating the categorical mixture in CE methods. If there is no sample falling into a certain category during the adaptive process, the probability assigned to this category will be zero for all mixture components, resulting in a potentially biased estimate of the final IS estimator. This is also known as the zero count problem in the context of MLE with categorical data [39]. A detailed discussion of the zero count problem for the CE method with the independent categorical parametric model can be found in [35]. ### Bayesian updating for cross-entropy-based methods #### 4.2.1 The basic idea To circumvent the overfitting issue of the weighted MLE, we propose to use the Bayesian approach for updating the categorical mixture in the CE method. 
At each level, we approximate the weighted MAP of a \(K\)-component mixture, denoted as \(\widetilde{\mathbf{\eta}}|\mathcal{M}_{K}\), through a generalized version of the EM algorithm that works with weighted samples. Here, we use 'approximate' to indicate that the algorithm is prone to get stuck in a local maximum, but this limitation can be alleviated by launching short pilot runs as mentioned in Subsec. 3.1. Model selection is performed for estimating the optimal number of components \(\widetilde{K}\) in the categorical mixture, whereby the number of mixture components leading to the smallest BIC is selected. Next, we employ the \(\widetilde{K}\)-component categorical mixture with its parameters fixed at \(\widetilde{\mathbf{\eta}}|\mathcal{M}_{\widetilde{K}}\) as the reference/sampling distribution at the \((t+1)\)-th level in the CE method. We term the proposed method BiCE-CM. #### 4.2.2 The generalized EM algorithm In this subsection, we introduce a generalized version of the EM algorithm and demonstrate its properties. To this end, we first attach a Dirichlet prior, which is the conjugate prior for categorical distributions, to each model parameter, i.e., \[\mathbf{\alpha} \triangleq\{\alpha_{k}\}_{k=1}^{K}\sim\text{Dir}(\cdot|\mathbf{a})\] \[\mathbf{\theta}_{k,d} \triangleq\left\{\theta_{k,d,j}\right\}_{j=1}^{n_{d}}\sim\text{ Dir}(\cdot|\mathbf{b}_{k,d})\] \[p(\mathbf{\eta}|\mathcal{M}_{K})=\text{Dir}(\mathbf{\alpha}|\mathbf{a}) \prod_{k=1}^{K}\prod_{d=1}^{D}\text{Dir}(\mathbf{\theta}_{k,d}|\mathbf{b}_{k,d}), \tag{29}\] where \(\mathbf{a}=(a_{1},...,a_{k})\) and \(\mathbf{b}_{k,d}=(b_{k,d,1},...,b_{k,d,n_{d}})\) are predefined concentration parameters. We obtain an MAP estimate of the model parameters \(\mathbf{\eta}\) through maximizing the weighted log-posterior distribution \(\ln\left(p^{(w)}(\mathbf{\eta}|\mathcal{X},\mathcal{M}_{K})\right)\), which reads: \[\ln\left(p^{(w)}(\mathbf{\eta}|\mathcal{X},\mathcal{M}_{K})\right) =\ln\mathcal{L}^{(w)}(\mathbf{\eta}|\mathcal{X},\mathcal{M}_{K}))+ \ln(p(\mathbf{\eta}|\mathcal{M}_{K}))\] \[=\sum_{i=1}^{N}w_{i}\ln\left(h_{cm}(\mathbf{x}_{i}|\mathbf{\eta})\right) +\ln(p(\mathbf{\eta}|\mathcal{M}_{K})), \tag{30}\] where \(\mathcal{L}^{(w)}(\mathbf{\eta}|\mathcal{X},\mathcal{M}_{K})\) is the weighted likelihood with \(w_{i}\triangleq\frac{NW(\mathbf{x}_{i})}{\sum_{j=1}^{N}W(\mathbf{x}_{j})}\) representing the normalized weight of the \(i\)-th sample \(\mathbf{x}_{i}\); herein, the weight function \(W(\cdot)\) defined in Eq. (9) is normalized such that the sum of the weights is equal to \(N\). Note that normalizing the weights \(\{W(\mathbf{x}_{i})\}_{i=1}^{N}\) does not change the solution to the original CE optimization problem in Eq. (8), i.e., \(\widehat{\mathbf{v}}^{(t)}\), but can modify the relative strength between the log-prior and the weighted log-likelihood term in Eq. (30). As the sample size \(N\) increases, the log-prior term will be dominated by the weighted log-likelihood, and hence, the solution to Eq. (30) coincides with the results obtained from Eq. (8) in large sample settings. On the other hand, when the sample size is small/moderate, the prior term serves as a regularizer that penalizes the weighted log-likelihood. Different kinds of prior distributions or regularizers can be applied depending on the problems at hand, but a detailed investigation is left for future work. In this paper, we focus on the Dirichlet prior as shown in Eq. (29). A generalized version of the EM algorithm is employed to maximize Eq. 
(30), which iteratively updates the following weighted \(Q\) function \[Q^{(w)}(\mathbf{\eta};\{p_{Z_{i}}(\cdot)\}_{i=1}^{N})\triangleq\sum_{i=1}^{N}w_{i}\mathbb{E}_{Z_{i}\sim p_{Z_{i}}(\cdot)}\left[\ln\left(\alpha_{Z_{i}}h_{c}(\mathbf{x}_{i};\mathbf{\theta}_{Z_{i}})\right)\right]+\ln(p(\mathbf{\eta}|\mathcal{M}_{K}))\\ =\sum_{i=1}^{N}w_{i}\sum_{k=1}^{K}p_{Z_{i}}(k)\ln\left(\alpha_{k}h_{c}(\mathbf{x}_{i};\mathbf{\theta}_{k})\right)+\ln(p(\mathbf{\eta}|\mathcal{M}_{K})). \tag{31}\] In the E step, we compute the responsibility matrix \([\gamma_{i,k}(\mathbf{\eta}^{(cur)})]_{N\times K}\) via Eq. (19) and formulate \(Q^{(w)}(\mathbf{\eta};\mathbf{\eta}^{(cur)})\triangleq Q^{(w)}(\mathbf{\eta};\{\gamma_{i,\cdot}(\mathbf{\eta}^{(cur)})\}_{i=1}^{N})\); in the M step, we maximize \(Q^{(w)}(\mathbf{\eta};\mathbf{\eta}^{(cur)})\) over \(\mathbf{\eta}\), resulting in the following updating scheme for the categorical mixture: \[\alpha_{k}^{(nxt)} =\frac{\sum_{i=1}^{N}w_{i}\gamma_{i,k}(\mathbf{\eta}^{(cur)})+a_{k}-1}{\sum_{k=1}^{K}\sum_{i=1}^{N}w_{i}\gamma_{i,k}(\mathbf{\eta}^{(cur)})+\sum_{k=1}^{K}a_{k}-K}, \tag{32}\] \[\theta_{k,d,j}^{(nxt)} =\frac{\sum_{i=1}^{N}w_{i}\gamma_{i,k}(\mathbf{\eta}^{(cur)})\mathbb{I}\{x_{i,d}=s_{d,j}\}+b_{k,d,j}-1}{\sum_{i=1}^{N}w_{i}\gamma_{i,k}(\mathbf{\eta}^{(cur)})+\sum_{j=1}^{n_{d}}b_{k,d,j}-n_{d}}. \tag{33}\] Similarly to the original EM algorithm, it holds that \[\ln\left(p^{(w)}(\mathbf{\eta}^{(nxt)}|\mathcal{X},\mathcal{M}_{K})\right)\geqslant Q^{(w)}(\mathbf{\eta}^{(nxt)};\mathbf{\eta}^{(cur)})+C^{(w)}(\mathbf{\eta}^{(cur)})\\ \geqslant Q^{(w)}(\mathbf{\eta}^{(cur)};\mathbf{\eta}^{(cur)})+C^{(w)}(\mathbf{\eta}^{(cur)})=\ln\left(p^{(w)}(\mathbf{\eta}^{(cur)}|\mathcal{X},\mathcal{M}_{K})\right), \tag{34}\] where \(C^{(w)}(\mathbf{\eta}^{(cur)})\triangleq\sum_{i=1}^{N}w_{i}\mathbb{H}(\gamma_{i,\cdot}(\mathbf{\eta}^{(cur)}))\). We end up with a sequence of parameters \(\mathbf{\eta}^{(0)},...,\mathbf{\eta}^{(T)}\) that converges to one of the modes (or saddle points) of the weighted log-posterior distribution, and \(\mathbf{\eta}^{(T)}\) is regarded as an approximate weighted MAP, \(\widetilde{\mathbf{\eta}}|\mathcal{M}_{K}\).

#### 4.2.3 The weighted MAP mitigates the overfitting and is unbiased

\(\mathbf{\eta}^{(T)}\) can be written as a linear combination of a data-dependent estimate \(\mathbf{\eta}^{(T;\mathrm{D})}\), which exploits the current data, and a user-defined prior estimate \(\mathbf{\eta}^{(\mathrm{pri})}\), which can be designed to explore a wider part of the sample space and thus is capable of finding potentially missing modes. Taking \(\theta^{(T)}_{k,d,j}\) as an example, let \(nxt=T,cur=T-1\) and rearrange Eq. (33) as follows: \[\theta^{(T)}_{k,d,j}=\lambda_{k,d}(\mathbf{\eta}^{(T-1)})\theta^{(T;\mathrm{D})}_{k,d,j}+(1-\lambda_{k,d}(\mathbf{\eta}^{(T-1)}))\theta^{(\mathrm{pri})}_{k,d,j}, \tag{35}\] where \(\theta^{(T;\mathrm{D})}_{k,d,j}\triangleq\frac{\sum_{i=1}^{N}w_{i}\gamma_{i,k}(\mathbf{\eta}^{(T-1)})\mathbb{I}\{x_{i,d}=s_{d,j}\}}{\sum_{i=1}^{N}w_{i}\gamma_{i,k}(\mathbf{\eta}^{(T-1)})}\), and \(\theta^{(\mathrm{pri})}_{k,d,j}\triangleq\frac{b_{k,d,j}-1}{\sum_{j=1}^{n_{d}}b_{k,d,j}-n_{d}}\).
\(\theta^{(T;\mathrm{D})}_{k,d,j}\) and \(\theta^{(\mathrm{pri})}_{k,d,j}\) are combined via \[\lambda_{k,d}(\mathbf{\eta}^{(T-1)})\triangleq\frac{\sum_{i=1}^{N}w_{i}\gamma_{i, k}(\mathbf{\eta}^{(T-1)})}{\sum_{i=1}^{N}w_{i}\gamma_{i,k}(\mathbf{\eta}^{(T-1)})+\sum_{j=1} ^{n_{d}}b_{k,d,j}-n_{d}}, \tag{36}\] which is a factor indicating the relative strength of the data with respect to the combined information from the data and prior. \(\lambda_{k,d}(\mathbf{\eta}^{(T-1)})\) tunes the exploitation and exploration behaviour of \(\theta^{(T)}_{k,d,j}\); the larger \(\lambda_{k,d}(\mathbf{\eta}^{(T-1)})\) is, the more dominant is \(\theta^{(T;\mathrm{D})}_{k,d,j}\) in Eq. (35). A similar interpretation also applies to \(\alpha^{(T)}_{k}\). Moreover, if we set \(b_{k,d,j}>1\) for each \(k\),\(d\) and \(j\), \(\theta^{(T)}_{k,d,j}\) is always positive even when no samples fall into the category \(s_{d,j}\), i.e., the zero count issue is mitigated in small sample settings. As a result, the sample space of the reference distribution at each intermediate level will no longer shrink even with a small number of samples, which ensures an **unbiased** IS estimator at the final CE level. #### 4.2.4 Implementation details InitializationTo initialize the generalized EM algorithm, we launch several short pilot runs, each from a random realization of the responsibility matrix \([\gamma^{(0)}_{i,k}]_{N\times K}\). The \(i\)-th row of the responsibility matrix is a \(K\)-component vector generated uniformly and independently over the standard \((K-1)\)-simplex, i.e., the vector follows the symmetric Dirichlet distribution \(\text{Dir}(\cdot|[1,...,1])\) The responsibility matrix that achieves the highest weighted log-posterior is chosen as the starting point from which we iteratively perform the M step and E step until convergence. The prior distributionFor selecting an appropriate Dirichlet prior distribution in the BiCE-CM, we rearrange Eq. (36) as follows: \[\sum_{j=1}^{n_{d}}b_{k,d,j}-n_{d}=\left(1-\lambda_{k,d}(\boldsymbol{\eta}^{T-1}) \right)\cdot\sum_{i=1}^{N}w_{i}\gamma_{i,k}\left(\boldsymbol{\eta}^{T-1} \right). \tag{37}\] For simplicity, let \(\gamma_{i,k}\left(\boldsymbol{\eta}^{T-1}\right)=1/K\) for each \(i\) and \(k\), and assume a symmetric Dirichlet prior for each \(\boldsymbol{\theta}_{k,d}\), i.e., \(b_{k,d,j_{1}}=b_{k,d,j_{2}}\) for \(1\leqslant j_{1}\neq j_{2}\leqslant n_{d}\) and \(1\leqslant k\leqslant K,1\leqslant d\leqslant D\). \(\boldsymbol{\theta}_{k,d}\) represents the PMF of \(X_{d}\) implied by the \(k\)-th component of the mixture. As a consequence, Eq. (37) can be written as \[b_{k,d,j}=1+\frac{\left(1-\lambda_{k,d}(\boldsymbol{\eta}^{T-1})\right)\cdot \sum_{i=1}^{N}w_{i}}{K\cdot n_{d}};\qquad j=1,...,n_{d}. \tag{38}\] In general, both the relative strength of the data, \(\lambda_{k,d}(\boldsymbol{\eta}^{T-1})\), and the sum of the weights, \(\sum_{i=1}^{N}w_{i}\), increase with the sample size \(N\), and we replace \(\left(1-\lambda_{k,d}(\boldsymbol{\eta}^{T-1})\right)\cdot\sum_{i=1}^{N}w_{i}\) by a constant \(C\) in Eq. (38), which gives \[b_{k,d,j}=1+\frac{C}{K\cdot n_{d}};\qquad\quad j=1,...,n_{d} \tag{39}\] for each \(\boldsymbol{\theta}_{k,d}\). We will compare different choices of \(C\) in the numerical examples. As for the mixture weights \(\boldsymbol{\alpha}\), we choose \[a_{k}=1+\epsilon;\qquad\quad k=1,...,K, \tag{40}\] where \(\epsilon\) is typically set as a small value, e.g. \(10^{-8}\). 
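A compact sketch of the prior construction in Eqs. (39)-(40) and of the resulting weighted M step of Eqs. (32)-(33) is given below. It is added for illustration only; the array layout and function names are our own choices, not part of the original formulation.

```python
import numpy as np

def dirichlet_prior(K, n_states, C, eps=1e-8):
    """Concentration parameters of Eqs. (39) and (40)."""
    a = np.full(K, 1.0 + eps)                               # prior on mixture weights
    b = [np.full((K, n_d), 1.0 + C / (K * n_d)) for n_d in n_states]
    return a, b

def weighted_map_m_step(X, w, gamma, a, b, n_states):
    """M step of the generalized EM algorithm, Eqs. (32) and (33).

    X     : (N, D) integer-coded samples, w : (N,) normalized weights
    gamma : (N, K) responsibilities from the E step (Eq. 19)
    a, b  : Dirichlet concentration parameters from Eqs. (39)-(40)
    """
    K = gamma.shape[1]
    wg = w[:, None] * gamma                                 # w_i * gamma_{i,k}
    wNk = wg.sum(axis=0)                                    # sum_i w_i gamma_{i,k}
    alpha = (wNk + a - 1.0) / (wNk.sum() + a.sum() - K)     # Eq. (32)
    theta = []
    for d, n_d in enumerate(n_states):
        onehot = np.eye(n_d)[X[:, d]]                       # I{x_{i,d} = s_{d,j}}
        num = wg.T @ onehot + b[d] - 1.0                    # Eq. (33), numerator
        den = wNk[:, None] + b[d].sum(axis=1, keepdims=True) - n_d
        theta.append(num / den)
    return alpha, theta
```

With \(C>0\), every entry of `b[d]` exceeds one, so the numerator in Eq. (33) stays positive even for empty categories, which is exactly the mechanism that mitigates the zero count problem.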
In fact, we penalize the weighted log-likelihood with the following log-prior term \[\ln\left(p(\boldsymbol{\eta}|\mathcal{M}_{k})\right)=\ln\text{Dir}( \boldsymbol{\alpha}|\boldsymbol{a})+\sum_{k=1}^{K}\sum_{d=1}^{D}\ln\text{Dir} (\boldsymbol{\theta}_{k,d}|\boldsymbol{b}_{k,d}). \tag{41}\] For symmetric Dirichlet distributions \(\text{Dir}(\boldsymbol{\alpha}|\boldsymbol{a})\) and \(\text{Dir}(\boldsymbol{\theta}_{k,d})\) defined in Eq. (39) and (40), the probability mode is attained when \(\alpha_{k}=1/K,k=1,...,K\) and \(\theta_{k,d,j}=1/n_{d},j=1,...,n_{d}\). In other words, we favor a uniform vector for each \(\boldsymbol{\theta}_{k,d}\), and a larger \(C\) implies a stronger preference. Note that by selecting a small \(\epsilon\), the penalization of non-uniform \(\boldsymbol{\alpha}\) vanishes, so the redundant mixture components can be assigned a small weight. Model selection or not.To discuss whether it is necessary to perform model selection, we consider two categorical mixtures \(f_{m1}(\cdot|\mathbf{\eta},\mathcal{M}_{K1})\) and \(f_{m2}(\cdot|\mathbf{\eta},\mathcal{M}_{K2})\). Let \(K1>K2\) and we refer to \(f_{m1}\), \(f_{m2}\) as the larger mixture, and the smaller mixture, respectively. Through, for example, adding \(K1-K2\) redundant mixture components, each of zero weight to \(f_{m2}\), any distribution that can be represented by the smaller mixture \(f_{m2}\) can also be represented by the larger one \(f_{m1}\). Therefore, the minimum KL divergence between the optimal IS distribution and the larger mixture will be less or equal to that of the smaller mixture, and if we can always find the optimal parameters \(\mathbf{\eta}^{*}\) defined in Eq. (7), the BiCE-CM with a larger mixture will perform better or at least equally well than using a smaller mixture. If the sample size approaches infinity, the distribution implied by either the weighted MLE \(\widehat{\mathbf{\eta}}\) or the weighted MAP \(\widetilde{\mathbf{\eta}}\) converges to the distribution implied by the optimal parameters \(\mathbf{\eta}^{*}\), and if we can always find the weighted MLE or weighted MAP through the generalized EM algorithm, there is no need to perform model selection, since the larger the \(K\), the closer the chosen IS distribution is to the optimal IS distribution, and thus the better the performance of the CE method. In practical settings, the sample size is limited, and the weighted MLE \(\widehat{\mathbf{\eta}}\) can be far away from the optimal parameter \(\mathbf{\eta}^{*}\). Although by introducing the prior information, the overfitting issue of the weighted MLE is mitigated, there is still no guarantee that the distribution implied by the weighted MAP \(\widetilde{\mathbf{\eta}}\) is close to that of \(\mathbf{\eta}^{*}\). Even if the weighted MAP of a mixture can be found, it does not necessarily lead to a closer distribution to the optimal IS distribution than using the weighted MAP of a smaller mixture, especially when an inappropriate prior distribution is chosen, and hence, we cannot simply employ a large \(K\). Another major issue is that in practice the generalized EM algorithm almost always gets stuck at a local maximum and fails to identify the weighted MAP. Note that there are in total \(K^{n}\) terms (usually uni-modal) in the likelihood function. 
Although some of these terms can be merged, a large sample size \(n\) or number of mixture components \(K\) generally indicates a more complicated and jagged posterior surface, whereby our generalized EM optimizer is more likely to get stuck at a point far from optimal. In such cases, a higher effort is required to find a good local maximum, e.g., by launching more pilot runs or designing a special prior that eliminates some of the modes. In summary, it is challenging to make a general decision on whether or not to perform the model selection, and we select the \(K\) with the highest posterior probability among a set of \(K_{max}\) candidates. The posterior probability can be roughly approximated by twice the negative BIC in Eq. (28). Although such an approximation suffers from major limitations, it remains one of the state-of-art techniques for selecting the number of components in a mixture model. For more details, we refer to Sec. 3.3. The algorithmThe proposed generalized EM algorithm for inference of the categorical mixture is summarized in Algorithm 1. ### Bayesian improved cross entropy method with the categorical mixture model The BiCE method [35] substitutes the weighted MLE of model parameters in the original iCE method with a Bayesian counterpart. In [35], the posterior predictive distribution is derived for updating the independent categorical distribution. However, for the categorical mixture, a closed-form expression of the posterior predictive distribution does not exist, and we use the weighted MAP estimator instead, which can be approximated through a generalized EM algorithm described in Subsec. 4.2. The proposed BiCE method with the categorical mixture model (BiCE-CM) is summarized in Algorithm 2. ### Component importance measures from the BiCE-CM algorithm In the field of network reliability assessment, component importance (CI) measures are employed for ranking components based on their influence on the system failure probability. Commonly used CI measures for binary systems include among others Birnbaum's measure, critical importance factor, risk achievement worth, and Fussel-Vesely measure [56]. These measures can be extended to multi-state or continuous systems [57], e.g., after introducing a performance function \(g_{i}(\cdot)\) at the component level [58], i.e., the \(i\)-th component fails when \(g_{i}(x_{i})\leqslant 0\). The samples from the BiCE-CM method can be used for calculating these CI measures. Taking Birnbaum's measure (BM) as an example, it is defined as the partial derivative of the system failure probability \(p_{f}\triangleq\Pr(g(\mathbf{X})\leqslant 0)\) ``` MainFunc: Input:\(\{\mathbf{x}_{i},W_{i}\triangleq W(\mathbf{x}_{i})\}_{i=1}^{N}\), \(C\), \(\epsilon\), \(K\), \(\Omega_{\mathbf{X}}\triangleq\{s_{d,1},...,s_{d,n_{d}}\}_{d=1}^{D}\) % \(\Omega_{\mathbf{X}}\) is the sample space of \(\mathbf{X}\), \(W(\cdot)\) is defined by Eq. 
(9) \(w_{i}\gets N\cdot\frac{W_{i}}{\sum_{i=1}^{N}W_{i}}\) for each \(i=1,...,N\) % normalizing the weights \(n_{p}\gets 20\) % the number of the pilot runs \(l_{p}\gets 20\) % the maximum iteration of the pilot run \(l_{o}\gets 500\) % the maximum iteration of the official run \(\mathcal{LP}_{max}\leftarrow-\infty\) % the maximum weighted log-posterior of the pilot runs \(it\gets 1\) % the counter for the pilot run while\(it\leqslant n_{p}\)do for\(i=1,...,N\)do Generate \(\{\gamma_{i,k}^{(0,it)}\}_{k=1}^{K}\) uniformly over the standard \((K-1)\) simplex \((\sim,\mathcal{LP},\sim)=\textbf{Subroutine}\left(\{\mathbf{x}_{i},w_{i}\}_{i=1}^{N},[\gamma_{i,k}^{(0,it)}]_{N\times K},\Omega_{\mathbf{X}},C,\epsilon,l_{p}\right)\) if\(\mathcal{LP}\geqslant\mathcal{LP}_{max}\)then \(\gamma_{i,k}^{(0)}\leftarrow\gamma_{i,k}^{(0,it)}\) for each \(i\) and \(k\), \(\mathcal{LP}_{max}\leftarrow\mathcal{LP}\) \(it=it+1\) \((\mathcal{CL},\mathcal{LP},\widetilde{\mathbf{\mu}}_{K})=\textbf{Subroutine}\left(\{\mathbf{x}_{i},w_{i}\}_{i=1}^{N},[\gamma_{i,k}^{(0)}]_{N\times K},\Omega_{\mathbf{X}},C,\epsilon,l_{o}\right)\) Compute \(BIC_{K}\) through Eq. (28) Output:\(\widetilde{\mathbf{\mu}}_{K}\), \(BIC_{K}\) ``` **Algorithm 1** The generalized EM algorithm ``` Input:\(N\), \(\delta_{tar}\), \(\delta_{\epsilon}\), \(C\), \(\epsilon\), the maximum number of mixture components \(K_{max}\), performance function \(g(\mathbf{x})\), input distribution \(p_{\mathbf{X}}(\mathbf{x})\), \(\mathbf{x}\in\Omega_{\mathbf{X}}\) 1\(t\gets 1\), \(t_{max}\gets 50\), \(\sigma_{0}\leftarrow\infty\) 2\(h(\mathbf{x};\widetilde{\mathbf{\mu}}^{(t-1)})\gets p_{\mathbf{X}}(\mathbf{x})\)whiletruedo 3 Generate \(N\) samples \(\{\mathbf{x}_{k}\}_{k=1}^{N}\) from \(h(\mathbf{x};\widetilde{\mathbf{\mu}}^{(t-1)})\) and calculate the corresponding performance \(\{g(\mathbf{x}_{k})\}_{k=1}^{N}\) Compute the sample c.o.v.
\(\widehat{\delta}\) of \(\left\{\frac{\mathbb{I}\{g(\mathbf{x}_{k})\leq 0\}}{\Phi(-g(\mathbf{x}_{k})/\sigma^{(t-1)})}\right\}_{k=1}^{N}\) 4if\(t>t_{max}\) or \(\widehat{\delta}\leq\delta_{\epsilon}\)then 5 Break 6 Determine \(\sigma^{(t)}\) through solving Eq. (11) using the alternative weight function \(W^{(alt)}(\cdot)\) defined in Eq. (13) Calculate \(W(\mathbf{x}_{i})\) for each \(i=1,...,N\) through Eq. (9) 7for\(K=1,...,K_{max}\)do 8 Compute \(\widetilde{\mathbf{\mu}}_{K}\) and \(BIC_{K}\) through Algorithm 1 9\(\widetilde{K}=\arg\min_{K}BIC_{K}\) 10\(\widetilde{\mathbf{\mu}}^{(t)}\leftarrow\widetilde{\mathbf{\mu}}_{\widetilde{K}}\) 11\(t\gets t+1\) 12\(T\gets t-1\) 13 Use \(h(\mathbf{x};\widetilde{\mathbf{v}}^{(T)})\) as the IS distribution and calculate the IS estimator \(\widehat{p}_{f}\) Output:\(\widehat{p}_{f}\) ``` **Algorithm 2** Bayesian improved cross entropy method with the categorical mixture parametric family

Specifically, \(BM_{i}\) is the partial derivative of the system failure probability \(p_{f}\) with respect to the component failure probability \(p_{fi}\triangleq\Pr(g_{i}(X_{i})\leqslant 0)\): \[BM_{i} \triangleq\frac{\partial p_{f}}{\partial p_{fi}}=\Pr(g(\mathbf{X})\leqslant 0|g_{i}(X_{i})\leqslant 0)-\Pr(g(\mathbf{X})\leqslant 0|g_{i}(X_{i})>0)\] \[=\frac{\Pr(g(\mathbf{X})\leqslant 0,g_{i}(X_{i})\leqslant 0)}{\Pr(g_{i}(X_{i})\leqslant 0)}-\frac{\Pr(g(\mathbf{X})\leqslant 0,g_{i}(X_{i})>0)}{\Pr(g_{i}(X_{i})>0)}\] \[=\frac{\mathbb{E}_{p_{\mathbf{X}}}\left[\mathbb{I}\{g(\mathbf{X})\leqslant 0\}\mathbb{I}\{g_{i}(X_{i})\leqslant 0\}\right]}{p_{fi}}-\frac{\mathbb{E}_{p_{\mathbf{X}}}\left[\mathbb{I}\{g(\mathbf{X})\leqslant 0\}\mathbb{I}\{g_{i}(X_{i})>0\}\right]}{1-p_{fi}}. \tag{42}\] The expectation in Eq. (42) can be estimated through IS using the samples from the final level of the BiCE-CM method, and \(p_{fi}\) can be estimated by crude MCS with \(g_{i}(X_{i})\), which is usually cheap to evaluate. According to the definition, the larger the \(BM_{i}\), the more sensitive the failure probability \(p_{f}\) is to the \(i\)-th component, and hence the higher priority the component will have when allocating the system redundancy.

## 5 Numerical examples

### Illustration: a toy connectivity problem

We consider a small network consisting of five components. Its configuration is shown in Fig. 1. Each component can either fail or not fail and hence is modeled by a Bernoulli distributed random variable. The topologically most important component, component \(3\), is assigned a failure probability of \(10^{-3}\), while for all other components, the failure probability is set to \(3\cdot 10^{-2}\). The connectivity between points A and B is of interest, and we have three major modes in the failure domain: \((0,0,1,1,1)\), \((1,1,0,1,1)\), and \((1,1,1,0,0)\), corresponding to three minimal cut sets: \((1,2),(3)\), and \((4,5)\), respectively. The probability of each mode equals \(8.46\cdot 10^{-4},8.85\cdot 10^{-4},8.46\cdot 10^{-4}\), respectively, and the total failure probability equals \(2.80\cdot 10^{-3}\).

Figure 1: Topology of a five-component network in Example 5.1.

#### 5.1.1 The zero count problem for the iCE

To illustrate the overfitting issue of the standard iCE method when solving this example, we run it 500 times with the setting \(K=3,\delta_{tar}=\delta_{\epsilon}=1,N=1000\) and plot the histogram of the 500 failure probability estimates in Fig. 2. The figure illustrates a highly skewed but also multi-modal distribution of the iCE estimator.
The three peaks reflect the number of cases where zero, one, or two modes are missing in the final IS distribution. A 'missing' mode here means that the mode is assigned a small (even zero) probability by the IS distribution. Any sample that coincides with such a mode will be assigned a large weight, leading to an outlier that significantly overestimates the failure probability. By contrast, if no sample is generated from this mode, there will be a significant negative bias. Note that the number of samples from the nominal distribution whose third component is safe follows a binomial distribution and therefore its properties can be calculated theoretically. For instance, the probability that the third component is safe for all samples generated at the first level is equal to \((1-10^{-3})^{1000}\approx 0.368\). In such a case, the iCE method will definitely miss the mode (3) in all subsequent intermediate levels (see Eq. (22)), and the corresponding failure probability estimates will underestimate the true value, which is demonstrated in Fig. 2. In Fig. 2(b), we show the results for the BiCE-CM method. A balanced Dirichlet prior in Eq. (39) is chosen for the mixture parameters, with \(C=200\) and \(\epsilon=10^{-8}\). The remaining settings are the same as those of the iCE method.

Figure 2: Histogram of the failure probability estimates via the iCE or the BiCE-CM method. \((a)\) results of the iCE. \((b)\) results of the BiCE-CM.

We can see that by introducing an appropriate prior, all three modes are found in most of the 500 estimates. A negligible relative bias (0.45%) and a small coefficient of variation (0.1) are achieved with an average of 4050 evaluations of \(g(\cdot)\). To investigate the reason for the significant difference between the performance of the two algorithms, we keep track of the reference distributions of all intermediate levels of the iCE method. The results are shown in Fig. 3. Fig. 3 demonstrates whether the distribution chosen at each level of the iCE method, i.e., the reference distribution, resembles the target distribution well. Apparently, the iCE method misses one of the three modes in the optimal IS distribution starting from the second level and produces a biased estimate.

Figure 3: The PMF of the target distribution and of the reference distribution at each iteration of the iCE method.

#### 5.1.2 Model selection or not: an empirical perspective

Next, we use the BIC to adaptively choose the number of mixture components \(K\) at each level of the BiCE-CM. The maximum number of mixture components \(K_{max}\) is equal to 10. For comparison, we also perform the BiCE-CM method with a fixed number of \(K\) ranging from 1 to 100. Overall, 8 scenarios are considered as listed in Table 2. In all cases, a Dirichlet distribution is employed as the prior with \(C=200\) and \(\epsilon=10^{-8}\), and \(\delta_{tar}=\delta_{\epsilon}\) is set to 1. We first consider a large sample setting, where \(N=10^{5}\), and check the estimated KL divergence between the intractable target distribution and its mixture approximation, the reference distribution, at each level of the BiCE-CM method. The results are illustrated in Fig. 4. We can see from the figure that the estimated KL divergence at each intermediate level decreases as \(K\) increases, and reaches a constant minimum value at \(K=3\). This result is expected since the optimal IS distribution has three major modes and can \begin{table} \begin{tabular}{l c c c c c c c c} \hline Case No.
& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline number of mixture components, \(K\) & 1 & 2 & 3 & 5 & 10 & 20 & 100 & BIC \\ \hline \end{tabular} \end{table} Table 2: Case description for example 5.1.2. Figure 4: KL divergence between the intermediate target distribution and the reference distribution at each level of the BiCE-CM method (a large sample setting). be approximated sufficiently well by a three-component categorical mixture. Hence, additional flexibility from adding mixture components is not required. However, for \(K<3\), the model capacity is inadequate, and increasing \(K\) will lead to an IS distribution significantly closer to the optimal one thus clearly improving the performance of the BiCE-CM. Fig. 4 also demonstrates that selecting the \(K\) adaptively via BIC will not improve the results of a fixed \(K\) that is larger than \(3\), so the model selection is not needed in large sample settings for this example. Next, we consider small sample settings, in which the weighted MLE tends to overfit the data. Although introducing a prior distribution mitigates the overfitting issue for an appropriate choice of the prior parameters, such a choice is not always straightforward. That is, a poor parameter choice of the prior for a model with higher \(K\) could potentially result in a worse estimator. Such situations can be avoided by performing model selection. This is demonstrated by the numerical experiment, where for each scenario we run \(500\) times the BiCE-CM algorithm with \(1,000\) samples and we set \(C=200,\epsilon=10^{-8}\). The results are summarized through a box plot in Fig. 5(b). Figure 5: Boxplot of the estimates obtained from the BiCE-CM method. (a) \(C=0,\epsilon=10^{-8}\), (b) \(C=200,\epsilon=10^{-8}\), (c) \(C=500,\epsilon=10^{-8}\), (d) \(C=5000,\epsilon=10^{-8}\). To measure the quality of the failure probability estimator \(\widehat{p}_{f}\), we borrow the definition of the 'efficiency' in statistics [59], which is defined as follows \[\text{Eff}(\widehat{p}_{f})\triangleq\frac{1}{\text{MSE}(\widehat{p}_{f})\times \text{Cost}(\widehat{p}_{f})}, \tag{43}\] where \(\text{MSE}(\widehat{p}_{f})\) represents the mean square error of the estimator \(\widehat{p}_{f}\) and \(\text{Cost}(\widehat{p}_{f})\) is the average computational cost of getting \(\widehat{p}_{f}\), which is measured by the average number of evaluations of \(g(\cdot)\) throughout all numerical examples in this paper. Note that the efficiency of the MCS equals \(\frac{1}{p_{f}\cdot(1-p_{f})}\), which is independent of the sample size. Hence, the efficiency improvement over MCS can be measured through the following relative efficiency \[\text{relEff}(\widehat{p}_{f})\triangleq\frac{p_{f}\cdot(1-p_{f})}{\text{MSE}( \widehat{p}_{f})\times\text{Cost}(\widehat{p}_{f})}. \tag{44}\] The relative efficiency of different choices of \(K\) is illustrated in Fig. 5(b). The optimal choice, as expected, is \(K=3\). If guessing an appropriate \(K\) is not possible, adaptively selecting \(K\) via the BIC can be a good alternative. Note that this comes at a price of a significant overhead, since at each iteration, the generalized EM algorithm is performed \(K_{max}=10\) times, while for a fixed \(K\), we only perform one single run of the algorithm. Nevertheless, for a computationally demanding performance function \(g(\cdot)\), the computational cost is dominated by the evaluation of \(g(\cdot)\) and the overhead resulting from the adaptive selection of \(K\) via the BIC should not be critical. 
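For reference, the relative efficiency of Eq. (44) reported in the figures can be computed from repeated runs as follows (a small helper added here for illustration; the inputs are the collected estimates and the average number of \(g(\cdot)\) evaluations per run).

```python
import numpy as np

def relative_efficiency(p_f_true, estimates, avg_cost):
    """Relative efficiency of Eq. (44) with respect to crude MCS.

    p_f_true  : reference failure probability
    estimates : array of repeated failure probability estimates
    avg_cost  : average number of g(.) evaluations per run
    """
    mse = np.mean((np.asarray(estimates) - p_f_true) ** 2)  # MSE of the estimator
    return p_f_true * (1.0 - p_f_true) / (mse * avg_cost)   # Eq. (44)
```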
#### 5.1.3 Impact of the prior distribution In this subsection, we study the influence of the prior distribution on the performance of the BiCE-CM method. We consider 4 different values of \(C\), namely \(0,200,500\) and \(5,000\). \(\epsilon\) is fixed at \(10^{-8}\) for all 4 cases. The results are summarized in Fig. 5. When \(C=0\), the BiCE-CM method degenerates to the standard iCE method that employs the weighted MLE to update the mixture model. Due to overfitting, the relative efficiency is poor. When \(C=5000\), the weighted log-likelihood function is over-penalized, and the prior estimate dominates the data-related estimate in Eq. (35). Owing to the symmetric Dirichlet prior, the resulting IS distribution is close to an independent uniform distribution, and the BiCE-CM with different \(K\) performs similarly. For this 5-component toy example, an independent uniform distribution works well, however, as will be shown later, this is not generally the case. When \(C\) is appropriately large, the performance of the BiCE-CM method is shown in Fig. 5(b-c), and has been discussed in Subsec. 5.1.2. ### Comparison: a benchmark study In this subsection, we consider the multi-state two-terminal reliability problems [60], in which we compute the probability that a specified amount of 'flow' can (or cannot) be delivered from the source to the sink. This problem has been extensively studied in operations research [16; 61; 62; 17; 60], from which we borrow two benchmark problems, namely the Fishman network and the Dodecahedron network, to test the performance of the BiCE-CM method. The results are further compared with the creation-process-based splitting (CP-splitting) [17], which is a state-of-art technique for solving multi-state two-terminal reliability problems, especially when the failure probability \(p_{f}\) is small. The network topology of the two benchmarks is illustrated in Fig 6, and we employ the same problem settings as in [17]. We consider only the edge capacities, each following an independently and identically distributed categorical distribution. Following this distribution the probability of each edge capacity being \(0,100,200\) equals \(p_{0},\frac{1-p_{0}}{2},\frac{1-p_{0}}{2}\) respectively. We are interested in the probability that the maximum flow from the source node \(s\) to the sink node \(t\) is less or equal to the threshold \(thr\), i.e., \(\Pr(\mathrm{mf}(s,t)\leqslant thr)\). We estimate this probability for each combination of \(p_{0}\in\{10^{-3},10^{-4}\}\) and \(thr\in\{0,100\}\), and for each of the two benchmarks. The reference failure probability \(p_{ref}\) in each scenario is calculated using the CP-splitting method with \(10^{6}\) trajectories. The results are summarized in Table 3 and 4. For the BiCE-CM method, we set \(N=2000,\delta_{tar}=\delta_{\epsilon}=1.5,C=200,\epsilon=10^{-8}\), and compute the mean value, c.o.v., the average number of evaluations of \(g(\cdot)\), and the relative efficiency through \(500\) independent repetitions of the algorithm. For the CP-splitting method, we report the results from Tables 3 and 4 in [17]. Therein, the c.o.v. is computed for the mean value of \(1000\) repetitions. To obtain the c.o.v. of a single repetition, which guarantees a fair comparison between the two methods, the c.o.v. reported in [17] is multiplied by \(\sqrt{1,000}\). In addition, the number of \(g(\cdot)\) evaluations in CP-splitting is computed by multiplying the number of levels by the number of trajectories, without considering the pilot run. 
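As an illustration of the performance function used in these benchmarks (a sketch we add here, not the authors' implementation; it assumes the topology is given as an edge list and relies on networkx for the maximum-flow computation), the event \(\mathrm{mf}(s,t)\leqslant thr\) can be evaluated as:

```python
import networkx as nx

def two_terminal_g(capacities, edges, s, t, thr):
    """Performance function g(x) = mf(s, t) - thr for the multi-state
    two-terminal problem; failure corresponds to g(x) <= 0.

    capacities : sequence mapping each edge index to its sampled capacity
                 (e.g. 0, 100 or 200 in the benchmarks of Sec. 5.2)
    edges      : list of (u, v) node pairs defining the network topology
    """
    # model each undirected edge as two opposite arcs of equal capacity
    G = nx.DiGraph()
    for e, (u, v) in enumerate(edges):
        G.add_edge(u, v, capacity=capacities[e])
        G.add_edge(v, u, capacity=capacities[e])
    mf = nx.maximum_flow_value(G, s, t, capacity="capacity")
    return mf - thr
```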
The performance of the BiCE-CM method for the two benchmarks is demonstrated in Table 3 and 4, in which the results of the CP-splitting method are enclosed in the parentheses for comparison. From these two tables, we observe a clear variance reduction in the BiCE-CM estimator without increasing the computational cost compared to the CP-splitting method. The standard iCE performs poorly for these two benchmarks due to the choice of a small \(p_{0}\). Fig. 7 illustrates the impact of different prior parameters \(C\) and of different \(K\) on the performance of the BiCE-CM method. We herein consider the Dodecahedron network with \(thr=0\) and \(p_{0}=10^{-3}\). When \(C=5000\), the prior estimate dominates the data-related estimate in Eq. (33) and results in a near uniform IS distribution. In such cases, the performance of the BiCE-CM is poor. On the contrary, when \(C=200\), which is a minor proportion of \begin{table} \begin{tabular}{l l l l l l} \hline & \(p_{ref}\) & mean & c.o.v. & cost & relEff \\ \hline \(p_{0}:10^{-3},thr:100\) & \(3.05\cdot 10^{-6}\) & \(3.04(3.03^{*})\cdot 10^{-6}\) & \(0.06(0.20)\) & \(1.11(0.90)\cdot 10^{4}\) & \(8.2(0.92)\cdot 10^{3}\) \\ \(p_{0}:10^{-4},thr:100\) & \(3.08\cdot 10^{-8}\) & \(3.00(2.99)\cdot 10^{-8}\) & \(0.06(0.23)\) & \(1.40(1.30)\cdot 10^{4}\) & \(7.6(0.49)\cdot 10^{5}\) \\ \(p_{0}:10^{-3},thr:0\) & \(2.06\cdot 10^{-9}\) & \(2.01(2.03)\cdot 10^{-9}\) & \(0.05(0.26)\) & \(1.41(1.30)\cdot 10^{4}\) & \(1.2(0.057)\cdot 10^{7}\) \\ \(p_{0}:10^{-4},thr:0\) & \(2.02\cdot 10^{-12}\) & \(1.99(1.97)\cdot 10^{-12}\) & \(0.06(0.27)\) & \(1.80(2.10)\cdot 10^{4}\) & \(7.4(0.34)\cdot 10^{9}\) \\ \hline \end{tabular} * The number in the parentheses shows the result of the CP-splitting method. \end{table} Table 4: Performance of the BiCE method for the Dodecahedron network in example 5.2. Figure 6: Topology of the two benchmarks in Example 5.2. the \(N\), the BiCE-CM works well for \(K\) equal to 5 or 10 or when employing BIC. ### Application: the IEEE 30 benchmark model with common cause failure In this subsection, we consider the IEEE 30 power transmission network [63] illustrated in Fig. 8. The network consists of 6 power generators, 24 substations, and 41 transmission lines, which we assume to be subjected to earthquakes. The hypocenter of the earthquake is assumed to be fixed and the earthquake magnitude is described by a truncated exponential distribution \(p_{M}\propto\exp(-0.85m),\quad 5\leqslant m\leqslant 8\). The failures of the network components are dependent as they occur due to the earthquake, but it is often assumed that they are conditional independent given the earthquake [64]. Such conditional independence is depicted in Fig. 9[1], where \(r_{i}\) represents the hypocentral distance of the \(i\)-th component, and \(im_{i}\) is the intensity measure of \(i\). In the present example, \(im_{i}\) is a deterministic function of \(r_{i}\) described by the ground motion predictive equation (GMPE) given in [65]. \(S_{i}\) denotes the state of the component \(i\), whose distribution is indicated by the fragility curves in [66]. For each of the 6 generators, we consider 5 damage states, namely negligible, minor, moderate, extensive, and complete damage, which correspond to 0%, 20%, 60%, 80%, and 100% reduction of power production, respectively. The remaining 24 non-generator buses and all 41 transmission branches have 2 damage states, either safe or complete failure. The distribution of different network components is summarized in Table 5. 
Figure 7: Boxplot of the BiCE-CM estimates for the Dodecahedron network with \(thr=0\) and \(p_{0}=10^{-3}\). (a) \(C=200,\epsilon=10^{-8}\), (b) \(C=5000,\epsilon=10^{-8}\). Figure 8: Network topology of the IEEE30 benchmark. Figure 9: Dependence structure for the IEEE30 benchmark subjected to earthquakes. The purple nodes represent the random variables.

We measure the network performance by the load shedding based on a direct current optimal power flow (DC-OPF) analysis using MATPOWER v7.1 [63]. The system failure is defined as over \(50\%\) of the total power demand being shed after the earthquake, which gives the following performance function: \[g(\mathbf{x})\triangleq 50\%-\frac{LS(\mathbf{x})}{D_{tot}}, \tag{45}\] where \(LS(\mathbf{x})\) represents the load shedding with the network configuration, or state, \(\mathbf{x}\), and \(D_{tot}\) is the total power demand. The failure probability approximated by one single crude MCS with \(10^{6}\) samples is equal to \(0.0013\), which is then employed as the reference for validating the proposed BiCE-CM algorithm. For the BiCE-CM, \(200\) independent runs with \(N=2,000,\delta_{tar}=\delta_{\epsilon}=1.5\) are launched, based on which we calculate the mean, c.o.v. and the relative efficiency of the BiCE-CM estimator. The number of mixture components \(K\) is adaptively chosen via the BIC, and we investigate \(4\) different prior distributions with \(C\in\{0,200,400,5000\}\) and \(\epsilon=10^{-8}\). The results are depicted in Fig. 10, where it is shown that the BiCE-CM with \(C=400\) performs the best among the four investigated cases. In particular, it significantly outperforms the \(C=0\) case, which represents the standard iCE method. The relative efficiency of the BiCE-CM with \(C=400\) is about \(6\), meaning the efficiency is around \(6\) times higher than that of the crude MCS. The average CPU time of the BiCE-CM is \(371.23\) seconds on a \(3.50\)GHz Intel Xeon E3-1270v3 computer. As a comparison, crude MCS needs \(46,161\) samples to achieve the same coefficient of variation as the BiCE-CM, which takes \(1741.68\) seconds on the same computer. Hence, the overhead of BiCE-CM does not strongly affect the overall computation time. The BM averaged over \(200\) repetitions of the BiCE-CM algorithm is depicted in Fig. 11 for different components of the IEEE30 benchmark model. For multi-state generators, the failure is defined as the power production being reduced by \(80\%\) or more. We can see from the figure that except for components \(3\), \(4\) and \(8\), the BM evaluated with the BiCE-CM method is consistent with that evaluated by crude MCS.

\begin{table} \begin{tabular}{l l l l} \hline & generators & non-generator buses & transmission lines \\ \hline \# components & 6 & 24 & 41 \\ distribution & categorical & Bernoulli & Bernoulli \\ reference & Table 6.6 in [59] & Table 6.9 in [59] & \(p_{f}=5\cdot 10^{-2}\) \\ \hline \end{tabular} \end{table} Table 5: The distribution of different components for the IEEE30 benchmark.

## 6 Conclusion

To capture the dependence between the component states and improve the performance of the estimate, we employ the categorical mixture, instead of the independent categorical distribution, as the parametric family of the BiCE. The parameters of the mixture model are updated through the weighted maximum a posteriori (MAP) estimate. In this way, the overfitting issue encountered in the standard improved cross entropy (iCE) method, which employs the weighted maximum likelihood estimate (MLE), is mitigated. The proposed algorithm is termed the BiCE-CM method.
We approximate the weighted MAP through the expectation maximization(EM) algorithm with a minor modification to account for the weights and the prior. The algorithm results in a monotonically increasing weighted posterior and converges to a local maximum, a saddle point, or a boundary point depending on the starting point of the generalized EM algorithm. Moreover, the Bayesian information criterion (BIC) can be computed as a by-product of the generalized EM algorithm and is employed as model selection technique for choosing the optimal number of components in the mixture when the sample size is moderate. The model selection technique is unnecessary in a large sample setting in which case a large number of mixture components is suggested. A set of numerical examples demonstrates that the proposed algorithm outperforms the standard iCE and the BiCE with the independent categorical distribution. Note that there is no guarantee that the BiCE-CM can find all major failure modes. The accuracy and efficiency of the BiCE-CM depend highly on the choice of the prior distribution. In this paper, we suggest a balanced prior that works well in all our numerical examples. A detailed investigation of alternative choices of the prior should be carried out. In addition, the BiCE-CM method does not directly apply to high dimensional problems due to the degeneration of the IS weights, and hence, dimensionality reduction techniques should be employed in such cases. These two aspects will be addressed in future work. ## 7 Acknowledgment The first author gratefully acknowledges the financial support of the China Scholarship Council.
2310.20227
Achieving Scalable Capacity in Wireless Mesh Networks
Wireless mesh networks play a critical role in enabling key networking scenarios in beyond-5G (B5G) and 6G networks, including integrated access and backhaul (IAB), multi-hop sidelinks, and V2X. However, it still poses a challenge to deliver scalable per-node throughput via mesh networking, which significantly limits the potential of large-scale deployment of wireless mesh networks. Existing research has achieved $O(1)$ per-node throughput in a dense network, but how to achieve scalability remains an unresolved issue for an extended wireless network where the network size increases with a constant node density. This issue prevents a wireless mesh network from large-scale deployment. To this end, this paper aims to develop a theoretical approach to achieving scalable per-node throughput in wireless mesh networks. First, the key factors that limit the per-node throughput of wireless mesh networks are analyzed, through which two major ones are identified, i.e., link sharing and interference. Next, a multi-tier hierarchical architecture is proposed to overcome the link-sharing issue. The inter-tier interference under this architecture is then mitigated by utilizing orthogonal frequency allocation between adjacent tiers, while the intra-tier interference is reduced by considering two specific transmission schemes, one is MIMO spatial multiplexing with time-division, the other is MIMO beamforming. Theoretical analysis shows that the multi-tier mesh networking architecture can achieve a per-node throughput of $\Theta(1)$ in both schemes, as long as certain conditions on network parameters including bandwidth, antenna numbers, and node numbers of each tier are satisfied. A case study on a realistic deployment of 10,000 nodes is then carried out, which demonstrates that a scalable throughput of $\Theta(1)$ is achievable with a reasonable assumption on bandwidth and antenna numbers.
Lei Lei, Aimin Tang, Xudong Wang
2023-10-31T07:12:31Z
http://arxiv.org/abs/2310.20227v1
# Achieving Scalable Capacity in Wireless Mesh Networks ###### Abstract Wireless mesh networks play a critical role in enabling key networking scenarios in beyond-5G (B5G) and 6G networks, including integrated access and backhaul (IAB), multi-hop sidelinks, and V2X. However, it still poses a challenge to deliver scalable per-node throughput via mesh networking. As shown in Gupta and Kumar's seminal research [1], multi-hop transmission results in a per-node throughput of \(\Theta(1/\sqrt{n\log n})\) in a wireless network with \(n\) nodes, significantly limiting the potential of large-scale deployment of wireless mesh networks. Follow-up research has achieved \(O(1)\) per-node throughput in a dense network, but how to achieve scalability remains an unresolved issue for an extended wireless network where the network size increases with a constant node density. This issue prevents a wireless mesh network from large-scale deployment. To this end, this paper aims to develop a theoretical approach to achieving scalable per-node throughput in wireless mesh networks. First, the key factors that limit the per-node throughput of wireless mesh networks are analyzed, through which two major ones are identified, i.e., link sharing and interference. Next, a multi-tier hierarchical architecture is proposed to overcome the link-sharing issue. The inter-tier interference under this architecture is then mitigated by utilizing orthogonal frequency allocation between adjacent tiers, while the intra-tier interference is reduced by considering two specific transmission schemes, one is MIMO spatial multiplexing with time-division, the other is MIMO beamforming. Theoretical analysis shows that, the multi-tier mesh networking architecture can achieve a per-node throughput of \(\Theta(1)\) in both schemes, as long as certain conditions on network parameters including bandwidth, antenna numbers, and node numbers of each tier are satisfied. A case study on a realistic deployment of 10,000 nodes is then carried out, which demonstrates that a scalable throughput of \(\Theta(1)\) is achievable with a reasonable assumption on bandwidth and antenna numbers. Mesh networks, per-node throughput, scalability, MIMO, realistic deployment. ## I Introduction Wireless mesh networking has recently emerged as a key technology in many wireless communication systems, where data is transmitted from the source to the destination in a multi-hop way, offering several prominent advantages such as flexibility, cost efficiency, and low complexity [2]. One potential application of wireless mesh networking is to support backhauling of 6G networks and provide a more flexible service [3]. Furthermore, mesh networking is commonly used in vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-everything (V2X) networks as it can improve signal propagation and extend network coverage [4, 5]. It will be an enabling technology for multi-hop sidelinks [6]. A primary concern in wireless mesh networking is how to achieve scalable end-to-end throughput in large-scale deployments. Pioneering work by P. Gupta and P. R. Kumar [1] established a theory to analyze the capacity scaling law of wireless networks. Their results reveal that in a wireless network with \(n\) nodes independently and uniformly distributed within a unit disk, each node can attain a per-node throughput of \(\Theta(1/\sqrt{n\log n})\) via a multi-hop transmission strategy. 
However, even under optimal conditions, the per-node throughput cannot exceed \(\Theta(1/\sqrt{n})\), leading to a decline in the per-node throughput of wireless networks by at least an order of \(1/\sqrt{n}\) when using multi-hop transmission. This decline indicates that the throughput of multi-hop networks is not scalable as \(n\) increases. In [7], the \(1/\sqrt{\log n}\) factor in the achievable per-node throughput is removed with the aid of paths that percolate across the network, i.e., a throughput of \(O(1/\sqrt{n})\) is achievable. In a dense network where the network size maintains unchanged while node density keeps increasing, the degradation in performance using multi-hop transmission is mainly caused by the interference due to concurrent wireless transmissions, as reported in [1]. Nevertheless, by employing a hierarchical cooperation (HC) scheme [8] that facilitates joint transmission and reception among nodes or exploiting node mobility in a wireless mobile network [9], a scalable throughput of \(O(1)\) can be achieved. A more general conclusion is drawn in [10], where four different operating regimes are identified based on the node density and path loss exponent. Besides, each of these regimes corresponds to a specific order-optimal transmission scheme. In an extended network where the network size scales linearly with the number of nodes while maintaining a constant node density, the scaling law differs. Typically, in such networks, each node is capable of transmitting data only to its neighboring nodes, thereby generating negligible interference to far-off receiving nodes. It is revealed in [10] that multi-hop transmission is an optimal scheme in the extended network, which typically operates in the power-limited region and can achieve a throughput of \(O(1/\sqrt{n})\). In [11], a similar theoretic bound of \(\Theta(1/\sqrt{n})\) is established for extended networks with nodes location satisfying a minimum distance constraint and a power-law path loss model with exponent \(\alpha>6\), or an exponential attenuation. An upper bound of \(O(1/n^{1/2-1/\alpha})\) is derived in [12]. In general, a scalable throughput of \(O(1)\) is not achievable in an extended network. Consequently, how to achieve scalable per-node throughput remains an open issue for wireless mesh networks. This paper aims to develop a method to resolve this issue. To this end, the throughput of an extended mesh network is first analyzed by taking a similar approach of existing work, but from the perspective of identifying the key factors that restrict the scalability of extended mesh networks. The analysis reveals two main factors that limit scalability: link-sharing and interference. While the interference caused by concurrent nearby transmissions is a well-known issue in wireless networks, the link-sharing issue arises in a wireless mesh network because one node needs to help relay multiple data flows of other source-destination (S-D) pairs. As \(n\) increases, the number of data flows one node needs to relay also increases, which reduces the per-node throughput and limits the scalability. To resolve the link-sharing issue of extended networks, the key approach is to reduce the number of data flows each node needs to relay. A simple way is to enlarge the transmission range of all wireless nodes to be comparable with the network radius, i.e., the transmit power of each node should also grow as \(n\) increases. 
Besides, a sophisticated design of the interference management scheme is required to reduce excessive interference. Hence, the above method is not feasible for practical deployment. To address this issue, a multi-tier hierarchical architecture is proposed for extended mesh networks in this paper. It includes multiple types of relay nodes, being organized into multiple tiers and supporting data transfer for regular mesh nodes. Data packets of S-D pairs are routed in the multi-tier mesh network using a routing policy named \(D\)-hop maximum routing. To improve the capacity scaling laws of a multi-tier mesh network, multi-input multi-output (MIMO) technology is utilized for data transmission of each link. In particular, two transmission schemes are considered to increase the per-node throughput. One is the spatial multiplexing scheme where the DoF gain of MIMO is exploited to enhance link capacity, and the other is the beamforming scheme where the power gain of MIMO is utilized to increase the transmission range of wireless nodes and thus further alleviate the link-sharing issue. To resolve the remaining issue of interference, orthogonal frequency bands are allocated to adjacent tiers to avoid inter-tier interference, while time division or beamforming is used to reduce intra-tier interference. More specifically, under the spatial multiplexing scheme, time division is used to separate data transmissions of adjacent nodes to reduce the excessive interference of concurrent data transmissions, while under the beamforming scheme, MIMO beamforming technology is utilized to transmit data with thin beams, without causing too much interference to nearby nodes. Theoretical analysis shows that, a scalable throughput of \(\Theta(1)\) can be achieved in both spatial multiplexing and beamforming schemes, as long as the scaling orders of bandwidth, antenna number, and node number at each tier satisfy certain requirements. A case study is carried out subsequently, which reveals that the aforementioned requirements on network parameters are reasonable and achievable in realistic deployment, by leveraging high-frequency communications to support both large bandwidth and multiple antennas. The key contributions of this paper are summarized as follows: * Two critical factors that limit the scalability of wireless networks are revealed by deriving and analyzing the achievable per-node throughput of a single-tier mesh network. * A multi-tier hierarchical network architecture is proposed to tackle the link-sharing issue for extended mesh networks. * Various physical layer technologies are incorporated into the multi-tier network architecture, and then the achievable per-node throughputs in different cases are derived. * Theoretical analysis shows that scalable per-node throughput can be achieved via the multi-tier mesh network architecture along with certain physical layer technologies. The rest of this paper is organized as follows. A single-tier hexagonal mesh network is analyzed in Section II, where two main factors that limit the scalability of mesh networks are identified. To resolve the scalability issue, a multi-tier hierarchical architecture for mesh network is proposed in Section III. The achievable per-node throughput for a multi-tier mesh network considering MIMO technologies is derived in Section IV. In Section V, the conditions to achieve scalability are discussed. A case study on realistic deployment is carried out to demonstrate the feasibility of the multi-tier mesh architecture. 
Finally, the paper is concluded in Section VI. ## II Factors that Limit the Scalability of Single-Tier Mesh Networks Deriving scaling laws for wireless mesh networks has been the subject of extensive research. However, most existing works fail to explicitly explain why wireless mesh networks suffer from scalability issues. Addressing this issue is the primary objective of this section. By deriving and analyzing the scaling laws for the per-node throughput of wireless mesh networks, interference and link-sharing are identified as the two key factors that limit the scalability of wireless mesh networks. ### _Network Model_ Consider a wireless mesh network comprised of \(n\) mesh nodes, as illustrated in Fig. 1. The network is divided into \(n\) hexagonal cells, where each hexagonal cell contains exactly one mesh node. Each mesh node can be placed either exactly in the center of the hexagonal cell (i.e., regular mesh network) or randomly distributed around the center within a small perturbation range (i.e., randomly perturbed network). Notably, different from the random network in [1], where the cell size has to be at least \(\log n\) to make sure that each cell contains at least one node with high probability (w.h.p.), the node placement considered in this paper is more like a regular lattice. Consequently, the \(1/\sqrt{\log n}\) factor that occurred in the achievable per-node throughput in [1] can be removed, thus eliminating the effect of nodes' locations on the scalability. Define the \(l\)-th outer "ring" of a cell as the set containing cells that are \(l\) hops away from the given cell. For the central cell, there are \(6l\) cells in the \(l\)-th outer ring. Suppose there are \(L_{r}\) outer rings around the central cell in total. The total number of nodes is then \(n=3L_{r}(L_{r}+1)+1\). The distance between the centers of two neighboring cells is \(\sqrt{3}a\), where \(a\) denotes the side-length of the hexagonal cell, and the radius of the mesh network can be expressed as \(\sqrt{3}L_{r}a\). For the extended single-tier mesh network, when \(n\) increases, the cell size remains unchanged, i.e., \(a\) is a fixed constant, while the network size grows linearly with \(n\). Thus, it can be derived that \(L_{r}=\Theta(\sqrt{n})\). In terms of data traffic, each node randomly and independently selects another node as its destination, forming a source-destination (S-D) pair. The number of S-D pairs \(N_{S-D}\) is equal to \(n\), i.e., \(N_{S-D}=n\). There are \(N_{S-D}\) data flows across the mesh network in total. ### _Transmission Model_ Let \(d_{ij}\) denote the distance between nodes \(i\) and \(j\), and assume equal transmit power \(P\) for all mesh nodes. The received signal power \(P_{ij}\) at node \(j\) from node \(i\) can be expressed as \[P_{ij}=CPd_{ij}^{-\alpha}, \tag{1}\] where \(\alpha\) is the path loss exponent with typical values in outdoor environments of \(2\leq\alpha\leq 4\), and \(C\) is the effective antenna gain. To obtain a feasible transmission rate, the received signal power should be no less than a certain threshold, denoted by \(P^{0}\), i.e., \(P_{ij}=CPd_{ij}^{-\alpha}\geq P^{0}\), or equivalently, \[d_{ij}\leq\left(\frac{CP}{P^{0}}\right)^{1/\alpha}, \tag{2}\] which provides a threshold on the maximum allowed distance for two mesh nodes to communicate. In mesh networks, data transmissions are accomplished in multiple hops. We assume that all mesh nodes have the same transmission range \(r_{0}\).
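Before moving on to the hop-count analysis, the following minimal Python sketch makes the hexagonal-mesh bookkeeping above concrete: the node count from the number of outer rings, the inter-cell spacing, and the maximum feasible link distance implied by the received-power threshold in (2). The numeric values of \(C\), \(P\), \(P^{0}\), \(a\), and \(\alpha\) are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the hexagonal-mesh geometry and link budget (Section II).
# All numeric parameter values below are illustrative assumptions.

def node_count(L_r: int) -> int:
    """Total nodes in a hexagonal mesh with L_r outer rings: n = 3*L_r*(L_r+1) + 1."""
    return 3 * L_r * (L_r + 1) + 1

def max_link_distance(C: float, P: float, P0: float, alpha: float) -> float:
    """Largest distance d_ij with C*P*d^-alpha >= P0, i.e. d <= (C*P/P0)^(1/alpha)."""
    return (C * P / P0) ** (1.0 / alpha)

if __name__ == "__main__":
    L_r = 57                                  # rings; n = 3*57*58 + 1 = 9919, roughly 10^4 nodes
    a = 50.0                                  # hexagon side length in meters (assumed)
    n = node_count(L_r)
    neighbor_spacing = 3 ** 0.5 * a           # distance between neighboring cell centers
    network_radius = 3 ** 0.5 * L_r * a       # radius of the whole mesh
    d_max = max_link_distance(C=1e-4, P=0.1, P0=1e-13, alpha=3.0)
    print(f"n = {n}, neighbor spacing = {neighbor_spacing:.1f} m, "
          f"network radius = {network_radius / 1000:.2f} km, max link range = {d_max:.1f} m")
```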
As the transmission range increases, the number of hops required decreases. Let \(d_{l,max}\) and \(d_{l,min}\) be the maximum and minimum distances from a node to another node located in the \(l\)-th outer ring around itself, whose concrete values can be found in [13]. A node can communicate with another node inside the \(r\)-th outer ring around itself if \[d_{r,max}\leq r_{0}<d_{r+1,min},\quad r=1,...,L_{r}.\] This condition for transmission range can be equivalently transformed into requirements on transmit power. Here we only consider the minimum required power to provide a feasible communication rate, i.e., \(CPd_{r,max}^{-\alpha}=P^{0}\). Since the network radius \(a\) remains constant in the extended network, \(d_{r,max}\) grows in the same order with \(r\). Thus, the required transmit power can be derived as \[P=P^{0}d_{r,max}^{\alpha}/C=\Theta\left(r^{\alpha}\right). \tag{3}\] Denote by \(R_{ij}\) the transmission rate from node \(i\) to node \(j\). \(R_{ij}\) can be obtained by Shannon's formula, i.e., \[R_{ij}=W\log_{2}\left(1+\gamma_{ij}\right), \tag{4}\] where \(W\) is the allocated channel bandwidth, \(\gamma_{ij}\) represents the Signal-to-Interference-plus-Noise Ratio (SINR) of the received signal at node \(j\) from node \(i\). ### _Throughput Analysis of Single-Tier Mesh Networks_ In this paper, we adopt the same definition of per-node throughput as [1]. An achievable per-node throughput of a single-tier mesh network, denoted by \(R(n)\), indicates that each node can transmit data to its chosen destination at a rate of \(R(n)\). To address the factors that limit the scalability of mesh networks, an investigation into the scaling law of per-node throughput for single-tier mesh networks, i.e., how \(R\left(n\right)\) changes with \(n\), is essential. In particular, two cases characterizing two different transmission range \(r\) are analyzed in this paper to examine the per-node throughput of single-tier mesh networks, namely \(r=\Theta\left(1\right)\) and \(r=\Theta\left(\sqrt{n}\right)\). In the former case, nodes can only access other nodes in their neighboring cells, and multi-hop transmission is required for data communication with distant nodes. On the other hand, in the latter case, data transmission can be completed within several hops, since the transmission range is in the same order as the network radius. Thus, we term these two specific transmission schemes as Short-Hop (SH) scheme and Long-Hop (LH) scheme, respectively. According to (3), the required transmit power under these two schemes should scale as \[P=\begin{cases}\Theta(1),&\text{for SH scheme,}\\ \Theta\left(n^{\alpha/2}\right),&\text{for LH scheme.}\end{cases} \tag{5}\] The per-node throughputs of the SH and LH transmission schemes are stated in the following theorem. **Theorem 1**.: _(Single-Tier) For both regular and randomly perturbed hexagonal mesh networks with \(n\) nodes, the achievable per-node throughput is_ \[R\left(n\right)=\begin{cases}\Theta\left(W/\sqrt{n}\right),&r=\Theta\left(1 \right),\\ \Theta\left(W/n\right),&r=\Theta\left(\sqrt{n}\right),\end{cases}\] _where \(W\) is the bandwidth allocated to wireless transmission, \(r=\Theta\left(1\right)\) and \(r=\Theta\left(\sqrt{n}\right)\) correspond to SH and LH transmission schemes, respectively._ In general, the procedure of obtaining per-node throughput of single-tier mesh networks can be partitioned into the following steps. 
First, a lower bound on the transmission rate between two communicating mesh nodes, denoted by \(R^{\left(L\right)}\), is derived. Next, the maximum number of interfering neighboring cells, denoted by \(\Delta c\), is obtained. Utilizing time division to separate these interfering transmissions, each cell is active in every \(\left(1+\Delta c\right)\) time slots [1].
Fig. 1: Hexagonal mesh network model.
An upper bound on the number of data flows a node needs to relay is then derived as \(Z^{\left(U\right)}\). Last but not least, assuming these no more than \(Z^{\left(U\right)}\) data flows are also separated using time division, an achievable per-node throughput can be obtained as \[R\left(n\right)=\frac{R^{\left(L\right)}}{\left(1+\Delta c\right)Z^{\left(U \right)}}. \tag{6}\] In the sequel, the scaling laws on the per-node throughput of single-tier mesh networks will be derived in detail. #### Ii-C1 Lower bound on transmission rate To derive a lower bound on the transmission rate, we need to derive the minimum received power and maximum interference. As for the data transmission from node \(i\) to node \(j\), the cumulative interference suffered by node \(j\) comes from all the other concurrent transmissions sharing the same frequency bandwidth. The maximum interference occurs when node \(j\) is located at the center cell of the network and all other nodes transmit concurrently. Using the time division method, the SINR of data transmission is \(\Omega(1)\) in a single-tier mesh network, as shown in [14] and [13]. Thus, the transmission rate between node \(i\) and node \(j\) is \(R_{ij}=W\log_{2}(1+\gamma_{ij})=\Omega(W)\). A lower bound on the transmission rate between two communicating nodes for the single-tier mesh network can be obtained as \[R^{\left(L\right)}=\Theta(W). \tag{7}\] #### Ii-C2 Number of Interfering Cells Recall the network model shown in Fig. 1. Since each node can communicate with another node inside the \(r\)-th outer ring around itself, in the worst case, all other nodes inside the \(r\)-th outer ring around the receiving node except the transmitting node are considered to be interfering nodes. The number of interfering cells for a receiving node scales in the order of \(r^{2}\), and hence \[\Delta c=\Theta\left(r^{2}\right)=\begin{cases}\Theta\left(1\right),&r=\Theta \left(1\right),\\ \Theta\left(n\right),&r=\Theta\left(\sqrt{n}\right).\end{cases} \tag{8}\] #### Ii-C3 Number of Data Flows Each Node Relays The number of hops needed for data transmission depends on the transmission range \(r_{0}\). A larger transmission range leads to fewer hops needed for data transmission. Let \(Z_{i}\) denote the number of data flows node \(i\) participates in; it has been derived in [13] that \[\mathbb{E}\left[Z_{i}\right]=\begin{cases}\Theta\left(\sqrt{n}\right),&r= \Theta\left(1\right),\\ \Theta\left(1\right),&r=\Theta\left(\sqrt{n}\right).\end{cases}\] Intuitively, when each node can only access another node in a neighboring cell, the expected number of hops needed for each data flow scales in the same order as the number of outer rings of the central cell, i.e., \(\Theta(\sqrt{n})\). The total number of hops across the network is \(\Theta(n\sqrt{n})\), and thus each node needs to relay \(\Theta(\sqrt{n})\) data flows on average. For the SH transmission scheme where \(r=\Theta\left(1\right)\), an upper bound on \(Z_{i}\) can be obtained by applying the Chernoff upper tail bound [15] to \(\mathbb{E}\left[Z_{i}\right]\).
The result shows that \(Z_{i}\leq\left(1+\delta\right)\mathbb{E}\left[Z_{i}\right]\) for every node \(i\) with probability no smaller than \(1-1/n^{2}\), where \(\delta=\sqrt{6\log n/\mathbb{E}\left[Z_{i}\right]}\). Thus, the upper bound on \(Z^{\left(U\right)}\) under the SH transmission scheme can be obtained as \[Z^{\left(U,SH\right)}=\left(1+\delta\right)\mathbb{E}\left[Z_{i}\right]=\Theta \left(\sqrt{n}\right). \tag{9}\] As for the LH scheme where \(r=\Theta\left(\sqrt{n}\right)\), since most data transmissions are finished within a few long-range hops, the number of data flows each node needs to relay is upper bounded by a constant, i.e., \[Z^{\left(U,LH\right)}=\Theta\left(1\right). \tag{10}\] Combining (9) and (10) yields that \[Z^{\left(U\right)}=\begin{cases}\Theta\left(\sqrt{n}\right),&r=\Theta\left(1 \right),\\ \Theta\left(1\right),&r=\Theta\left(\sqrt{n}\right).\end{cases} \tag{11}\] #### Ii-C4 Capacity scaling law Plugging (7), (8), and (11) into (6), the achievable per-node throughput for the single-tier hexagonal mesh network can be obtained as \[R\left(n\right) \geq\frac{R^{\left(L\right)}}{\left(1+\Delta c\right)Z^{\left(U \right)}} \tag{12}\] \[=\begin{cases}\Theta\left(W/\sqrt{n}\right),&r=\Theta\left(1 \right),\\ \Theta\left(W/n\right),&r=\Theta\left(\sqrt{n}\right).\end{cases}\] This completes the proof of Theorem 1. ### _Key Factors that Limit the Scalability_ As shown in (12), the per-node throughput decreases in the order of \(1/\sqrt{n}\) and \(1/n\) under the SH and LH schemes, respectively. By analyzing the derivation procedure, the following two factors that limit the scalability are obtained. **Link-Sharing:** From (12), the per-node throughput of mesh networks decays in the order of \(1/\sqrt{n}\) in the SH scheme where \(r=\Theta(1)\). In this scenario, each node can only access a few nearby nodes and most data flows are delivered hop by hop. Thus, many nodes need to help relay data of other S-D pairs. Since every data flow is expected to be finished within \(\Theta(\sqrt{n})\) hops, the total number of disjoint hops needed is \(\Theta(n\sqrt{n})\). However, the single-tier mesh network can only support \(\Theta(n)\) links, thus leading to the link-sharing issue, i.e., each node needs to help relay \(\Theta(\sqrt{n})\) data flows of other S-D pairs. On the other hand, in the LH scheme where \(r=\Theta(\sqrt{n})\), data transmissions can be accomplished in several hops, and each node needs to relay \(O(1)\) data flows. Thus, the link-sharing issue is no longer a key factor that limits the per-node throughput. **Interference:** Since time division is used to reduce excessive interference, the number of interfering cells has a direct impact on the per-node throughput. Specifically, in the SH scheme where \(r=\Theta\left(1\right)\), data transmissions take place between neighboring cells. The number of interfering cells is \(O(1)\), which has little effect on the throughput of the SH scheme. On the contrary, in the LH scheme where \(r=\Theta(\sqrt{n})\), there are \(\Delta c^{\left(LH\right)}=\Theta(n)\) interfering cells which need to keep silent for successful long-range transmission, which leads to the \(1/n\) factor in the scaling law of throughput in the LH scheme. It can be seen that there exists a trade-off between link-sharing and interference, which is dependent on the transmission range.
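The order-level bookkeeping behind (6) and (12) is easy to replay numerically. The following minimal Python sketch plugs the SH and LH orders of \(R^{(L)}\), \(\Delta c\), and \(Z^{(U)}\) into (6); the bandwidth \(W\) and the values of \(n\) are illustrative assumptions, and constants hidden by the \(\Theta\)-notation are simply set to one.

```python
# Minimal sketch of the single-tier throughput bookkeeping in (6)/(12).
# W and n are illustrative assumptions; Theta-constants are taken as 1.
import math

def per_node_throughput(n: int, W: float, scheme: str) -> float:
    """Order-level estimate of R(n) = R^(L) / ((1 + dc) * Z^(U)) for the SH and LH schemes."""
    R_link = W                                   # R^(L) = Theta(W)
    if scheme == "SH":                           # r = Theta(1)
        dc, Z = 1.0, math.sqrt(n)                # Delta c = Theta(1), Z^(U) = Theta(sqrt(n))
    elif scheme == "LH":                         # r = Theta(sqrt(n))
        dc, Z = float(n), 1.0                    # Delta c = Theta(n), Z^(U) = Theta(1)
    else:
        raise ValueError("scheme must be 'SH' or 'LH'")
    return R_link / ((1.0 + dc) * Z)

if __name__ == "__main__":
    W = 100e6                                    # 100 MHz, assumed
    for n in (100, 10_000, 1_000_000):
        sh = per_node_throughput(n, W, "SH")
        lh = per_node_throughput(n, W, "LH")
        print(f"n={n:>9}: SH ~ {sh / 1e6:8.3f} Mbps-order, LH ~ {lh / 1e6:8.3f} Mbps-order")
```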
As the transmission range increases, the number of data flows that each node needs to relay may decrease, i.e., the link-sharing issue is alleviated, but it comes at the cost of a larger number of interfering cells. ## III Architecture Design for Mesh Networks As discussed in the previous section, link-sharing is the primary factor that limits the scalability of wireless mesh networks using the short-hop transmission scheme, corresponding to the extended network. As a result, reducing the number of data flows each node needs to relay is the key approach to improving per-node throughput. While increasing transmit power to enlarge the transmission range can decrease the number of hops needed for transmission of each S-D pair, the required power for each node grows polynomially with \(n\), as \(\Theta(n^{\alpha/2})\), according to (5). Besides, a sophisticated interference cancellation scheme like hierarchical cooperation [8] is required to mitigate the excessive interference due to the increase of transmission range. Both of the above two factors make it impractical to solely increase transmit power in realistic deployments. Thus, a new efficient architecture is required for wireless mesh networks to resolve both the link-sharing and interference issues. ### _Insights to Architecture Design_ In the previous analysis of per-node throughput in single-tier mesh networks, it has been established that link-sharing is the main cause of the decline in the per-node throughput of extended networks where the multi-hop transmission scheme is employed. To achieve scalability in extended mesh networks, alleviating the link-sharing issue is essential. As shown before, the link-sharing issue arises when a node is required to relay data of other S-D pairs. When the SH transmission scheme is used, the number of data flows that each node needs to relay is in the order of \(\sqrt{n}\). On the other hand, when the long-hop (LH) transmission scheme is used, each node only needs to relay \(O(1)\) data flows, but the number of interfering cells becomes \(\Theta(n)\), which also limits the per-node throughput. Moreover, it has been proven in [11] and [10] that the multi-hop transmission scheme is order-optimal in extended networks. Thus, a new architecture is required to address both the link-sharing issue and interference to achieve scalability in extended networks. To address the link-sharing issue in an extended mesh network, the key is to limit the number of data flows each node needs to relay. To achieve this goal, additional relay nodes can be exploited to establish more wireless links and facilitate data transmission. In this way, a two-tier mesh network is constructed where the first tier contains all data nodes and the second tier is comprised of relay nodes. To alleviate the link-sharing issue in the data tier, the following routing policy can be utilized. Specifically, for a specific S-D pair, if the required hop count using a pure multi-hop scheme in the first tier exceeds a certain predetermined threshold, then the data will be transmitted to the relay tier in a multi-hop manner. After reaching the relay node near the destination, the data will be sent to the destination node using multiple hops. The link-sharing issue in the data tier can be resolved since each node only needs to relay \(O(1)\) data flows regardless of the number of data nodes. To achieve this target, the density of relay nodes should be kept constant, independent of \(n\), such that each node can access a relay node within a certain number of hops.
The number of required relay nodes grows in the order of \(\Theta(n)\). The relay tier can be regarded as a separate mesh network. The link-sharing issue in the relay tier now becomes the bottleneck of throughput if the transmission range of relay nodes is fixed independent of \(n\). One way of resolving this issue is keeping the number of relay nodes constant while increasing their transmission range and transmission rate in accordance with \(n\), which can be achieved by increasing transmit power, allocating more bandwidth, and employing more antennas. Nevertheless, as \(n\) increases, the required transmit power will become impractically large in this two-tier mesh network. The link-sharing issue will also manifest in the relay tier of the aforementioned two-tier network, which curtails the scalability of this type of two-tier mesh network. Thus, a multi-tier hierarchical architecture for mesh networks is developed in this paper. As depicted in Fig. 2, several types of relay nodes with varying transmission ranges are overlaid with the data tier. Nodes at higher tiers have larger transmission ranges and higher transmission rates than lower-tier nodes, which is achieved by allocating more communication resources including bandwidth, antenna numbers, and transmit power. The specific system model, scaling relations of network parameters, and the routing policy are presented in detail in the next section. ### _Architecture Design_ #### Iii-B1 Node Distribution We consider a multi-tier wireless network, illustrated in Fig. 2, where a hierarchical architecture is constructed with multiple relay tiers overlaid with the first tier which contains \(n\) data nodes. These relay nodes are deployed solely to aid in the transmission of data and do not generate any traffic data themselves. Suppose there are \(L\) tiers of nodes in total, with \(n_{l}\) denoting the number of nodes in the \(l\)-th tier. It follows that \(n_{1}=n\). At each tier, the network is divided into \(n_{l}\) hexagonal cells, where each cell contains exactly one \(l\)-th tier node, i.e., the same setting as the aforementioned single-tier hexagonal mesh network.
Fig. 2: Illustration of the multi-tier hierarchical architecture.
Each node is located around the center of the cell with an allowed small random perturbation. Communication of the \(l\)-th tier takes place over a bandwidth of \(W_{l}\). In addition, each node at the \(l\)-th tier is equipped with \(M_{l}\) antennas. The transmit power of each antenna element is denoted as \(P_{l}\). As for data traffic, it is still assumed that each data node in the first tier randomly and independently chooses another data node as its destination, i.e., an S-D pair. Thus, there are still \(N_{S-D}=n\) data flows. #### Iii-C2 Scaling of Network Parameters In this paper, we consider the scaling relations for network parameters in terms of node number \(n_{l}\), allocated bandwidth \(W_{l}\), and antenna number \(M_{l}\) with respect to the tier index \(l\), as illustrated in Table I. To ensure that the total number of nodes in the network is finite, \(n_{l}\) should decrease at least in the square order of \(l\). For general investigation, we assume \(n_{l}=n/l^{k}\) where \(k\geq 2\). Consequently, the cell size of the \(l\)-th tier, denoted by \(A_{l}\), is \(A_{l}=n/n_{l}=l^{k}\). Then, the cell radius of the \(l\)-th tier, denoted by \(a_{l}\), is \[a_{l}=\Theta\left(\sqrt{A_{l}}\right)=\Theta\left(l^{k/2}\right).
\tag{13}\] To avoid the link-sharing issue in the highest tier, data transmission is supposed to be finished within a few hops. Thus, it should be satisfied that \(n_{L}=n/L^{k}=\Theta(1)\), and the number of total tiers \(L\) can be obtained as \[L=\Theta\left(\sqrt[k]{n}\right), \tag{14}\] which is dependent both on \(n\) and \(k\). As explained before, the allocated bandwidth and antenna number should also increase as \(n\) grows to maintain a non-decreasing throughput. Here, we also assume bandwidth and antenna number are increasing in the polynomial order with respect to \(l\), i.e., \(W_{l}=W_{1}l^{\psi}\) and \(M_{l}=M_{1}l^{\nu}\), where \(\psi\) and \(\nu\) are both constants no smaller than \(1\). Thus, the total required bandwidth is \[W_{\text{tot}}=\sum_{l=1}^{L}W_{l}=\Theta\left(W_{1}L^{\psi+1}\right)=\Theta \left(n^{(\psi+1)/k}\right).\] The maximum required antenna number is \[M_{\text{max}}=M_{L}=\Theta\left(M_{1}L^{\nu}\right)=\Theta\left(n^{\nu/k} \right).\] As can be seen, the required total bandwidth and maximum antenna number grow to infinity as \(n\) increases. Therefore, to obtain a reasonable and achievable setting for network parameters, the values of the three scaling orders, \(\psi\), \(\nu\), and \(k\), need to be carefully chosen in realistic deployment, which will be discussed later in Section V. Different from other parameters, \(P_{l}\) should be chosen to maintain a feasible link rate between two communicating nodes, as in the case of single-tier mesh networks discussed in Section II. Based on (3), to guarantee a feasible link rate between two neighboring nodes in the \(l\)-th tier, the received signal power should be no less than a certain threshold, denoted by \(P_{l}^{0}\), i.e., \(C_{l}P_{l}d_{l}^{-\alpha_{l}}\geq P_{l}^{0}\), or equivalently, \[P_{l}\geq P_{l}^{0}d_{l}^{\alpha_{l}}/C_{l}, \tag{15}\] where \(d_{l}\) represents the distance between two neighboring nodes in the \(l\)-th tier, \(\alpha_{l}\) and \(C_{l}\) are the path-loss exponent and the constant determined by frequency, antenna profile, etc. of the \(l\)-th tier, respectively. From (13), we obtain that \(d_{l}=\Theta(\sqrt{n/n_{l}})=\Theta(l^{k/2})\), and hence \[d_{l+1}/d_{l}=\left(1+1/l\right)^{k/2}. \tag{16}\] Substituting \(d_{L}=\Theta(L^{k/2})=\Theta(\sqrt{n})\) into (15), the scaling law for the required transmit power of the highest relay nodes is \[P_{L}=\Omega\left(n^{\alpha_{L}/2}\right). \tag{17}\] #### Iii-C3 D-hop Maximum Routing Policy As previously noted, the crux of resolving the link-sharing issue entails a reduction in the number of data flows that each node assists in relaying. To leverage the transport capacity furnished by newly added multiple types of relay nodes, a tier-by-tier routing strategy is adopted to convey data from the source to the destination. The overarching principle of this routing policy is to effectuate data transmission for a given S-D pair in a tier-wise fashion across the multi-tier mesh network. Considering each tier as an individual mesh network, it is feasible for each node to access its neighboring nodes within a single hop, allowing for data communication between nodes using multi-hop transmission at each tier. To prevent the link-sharing issue observed in the single-tier mesh network, it is crucial to restrict the number of hops for data transmission at each tier. Therefore, we propose a \(D\)-hop maximum routing policy for the multi-tier mesh network based on the \(L\)-maximum routing policy discussed in [16, 17].
At each tier, the source node sends data to its corresponding destination node using the multi-hop scheme if it can be reached within \(D_{l}\) hops. The specific values for \(D_{l}\) will be obtained later. Otherwise, the source node sends the data to its nearest next higher-tier node using the multi-hop transmission scheme. A similar decision is made for each subsequent tier until the data reaches the specific tier where data transmission can be finished within certain hops. Next, the data will be transmitted to the first tier in a similar manner and finally arrive at the destination mesh node. The corresponding destination node for the above "new" source node at each relay tier is chosen as the same-tier relay node nearest to the next-lower-tier destination node.
\begin{table} \begin{tabular}{c|c|c} \hline \hline & Scaling & Parameter range \\ \hline Node number & \(n_{l}=n/l^{k}\) & \(k\in[2,\infty]\) \\ Bandwidth & \(W_{l}=W_{1}l^{\psi}\) & \(\psi\in[1,\infty]\) \\ Antenna number & \(M_{l}=M_{1}l^{\nu}\) & \(\nu\in[1,\infty]\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Scaling Relation of Network Parameters
Fig. 3: \(D\)-hop maximum routing policy.
Fig. 3 illustrates an example of how to achieve data transmission for an S-D pair under this tier-by-tier routing policy. In addition, Fig. 4 depicts the three types of data transmission and reception for a node in the \(l\)-th tier (\(2\leq l\leq L-1\)). Data transmission of the \(l\)-th tier comprises same-tier transmission, upstream transmission to the \((l+1)\)-th tier, and downstream transmission from the \((l+1)\)-th tier. These transmissions consume the same bandwidth allocated to the \(l\)-th tier. As for the \(L\)-th tier, data transmission contains only the same-tier transmission.
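To illustrate the scaling relations of Table I, the following minimal Python sketch computes the per-tier node numbers, bandwidths, antenna counts, the number of tiers, and the implied total bandwidth and maximum antenna number. The concrete values of \(n\), \(k\), \(\psi\), and \(\nu\) (and the units \(W_{1}\), \(M_{1}\)) are illustrative assumptions, not recommendations from the paper.

```python
# Minimal sketch of the Table I scaling relations (all numeric values are assumptions).

def tier_parameters(n: int, k: float, psi: float, nu: float, W1: float = 1.0, M1: float = 1.0):
    L = max(1, round(n ** (1.0 / k)))           # number of tiers: n_L = n / L^k = Theta(1)
    tiers = []
    for l in range(1, L + 1):
        n_l = n / l ** k                        # nodes in tier l
        W_l = W1 * l ** psi                     # bandwidth of tier l
        M_l = M1 * l ** nu                      # antennas per node in tier l
        tiers.append((l, n_l, W_l, M_l))
    W_tot = sum(t[2] for t in tiers)            # ~ Theta(n^((psi+1)/k))
    M_max = tiers[-1][3]                        # ~ Theta(n^(nu/k))
    return tiers, W_tot, M_max

if __name__ == "__main__":
    tiers, W_tot, M_max = tier_parameters(n=10_000, k=4, psi=2, nu=2)
    print(f"number of tiers L = {len(tiers)}")
    print(f"total bandwidth (units of W_1) = {W_tot:.1f}, max antennas (units of M_1) = {M_max:.0f}")
    for l, n_l, W_l, M_l in tiers:
        print(f"tier {l}: n_l ~ {n_l:8.1f}  W_l = {W_l:6.1f}  M_l = {M_l:6.1f}")
```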
Let \(\Delta c_{l}\) and \(Z_{l}^{(U)}\) denote the maximum number of interfering neighbors for a node in the \(l\)-th tier and the maximum number of data flows each node in the \(l\)-th tier needs to relay, respectively. Since time division is utilized to reduce excessive intra-tier interference and separate data transfers of multiple data flows at one node, the achievable end-to-end rate between two nodes at the \(l\)-th tier, denoted by \(R_{l}(n)\), can be obtained as \[R_{l}(n)\geq\frac{R_{l}^{(L)}}{(1+\Delta c_{l})Z_{l}^{(U)}}. \tag{18}\] Under the \(D\)-hop maximum routing policy, every data flow is completed in a tier-by-tier manner. Hence, given the achievable end-to-end rate \(R_{l}(n)\) of each tier in the multi-tier mesh network, the achievable end-to-end rate for every S-D data flow, i.e., the per-node throughput of the multi-tier mesh network \(R(n)\), can be obtained by taking the minimum over all \(R_{l}(n)\) from \(l=1\) to \(L\), i.e., \[R(n)=\min_{1\leq l\leq L}R_{l}(n). \tag{19}\] ### _Transmission Scheme_ Here, the transmission scheme is mainly focused on physical technologies and interference management at the link level. At each tier, data is transmitted using a multi-hop strategy from the source to the destination. Based on the aforementioned \(D\)-hop maximum routing policy, data is forwarded from one \(l\)-th tier node to its nearest \((l+1)\)-th tier node via multi-hop communication in the upstream transmission stage, and vice versa in the downstream transmission with the \((l+1)\)-th tier. Moreover, we assume that one \((l+1)\)-th tier node is capable of utilizing Multi-User MIMO (MU-MIMO) to start \(M_{l+1}/M_{l}\) routing paths in the same time slot for upstream and downstream transmission. For the same-tier transmission, one \(l\)-th tier node can only forward data to another neighboring node per transmission slot. All three types of data transmission at the \(l\)-th tier, under the \(D\)-hop maximum routing policy, take place between two neighboring cells, or in the same cell when an \(l\)-th tier source node and its corresponding \((l+1)\)-th tier destination node happen to be located in the same cell for upstream transmission or vice versa for downstream transmission. In this way, time or frequency division can be utilized to reduce excessive interference or strong collisions and satisfy the half-duplex (HD) constraint [18]. To be specific, supposing that the maximum number of interfering cells at the \(l\)-th tier is \(\Delta c_{l}\), then a \((1+\Delta c_{l})\)-TDMA can be used, where the \(l\)-th tier cells alternate in becoming active in every one of \((1+\Delta c_{l})\) time slots [1]. Nowadays, MIMO has become a key technology to increase the performance of wireless networks by exploiting the degree-of-freedom (DoF) gain, power gain, and diversity gain [19]. Notably, diversity gain is aimed at improving the reliability of a transmitted signal by spreading the same signal across two or more uncorrelated communication channels, resulting in a better decoding bit error probability (BER), but it has little effect on the increase of channel capacity. Thus, we consider the following two MIMO transmission schemes. **Spatial Multiplexing:** When spatial multiplexing is utilized, multiple independent data streams can be transmitted at the same time. It has been shown in [20] and [21] that, utilizing spatial multiplexing for data transmission, a DoF gain of \(M\) can be obtained for a single-tier mesh network with nodes each equipped with \(M\) antennas.
Fig. 4: Data transmission and reception for a node at tier \(l\).
In the general case, a maximum DoF gain of \(N_{\min}=\min\{N_{t},N_{r}\}\) can be obtained, where \(N_{t}\) and \(N_{r}\) are the numbers of antennas equipped at the transmitter and the receiver, respectively [19]. To reduce the excessive inter-tier interference coming from concurrent transmissions of different tiers, we assume that frequency division is used in this scheme, i.e., orthogonal frequency bands are allocated to data transmission of different tiers. For data transmission of each tier, time division or frequency division is utilized to reduce intra-tier interference and satisfy the half-duplex constraint. In summary, using the spatial multiplexing scheme can improve the per-node throughput of wireless mesh networks by increasing the transmission rate of each link, such that the throughput degradation due to link-sharing can be compensated to some extent. **Beamforming:** In this scenario, at each hop, the source node transmits a single stream of data to its destination. Under the beamforming scheme, it is possible for each pair of communicating nodes to transmit with thin beams, such that the transmission energy is concentrated in the direction of the receiving node and excessive interference to nearby concurrent receiving nodes can be reduced [20]. Thus, both intra-tier and inter-tier interference can be regarded as upper bounded by a constant, i.e., \(O(1)\). In this way, frequency reuse among multiple tiers is possible. It has been shown in [20] that, for a single-tier mesh network with nodes each equipped with \(M\) antennas, the power gain which can be obtained from beamforming is equal to \(M^{2}\). In general, an \(N_{t}\times N_{r}\) MIMO system can provide a power gain of \(N_{t}N_{r}\) [19]. Moreover, by utilizing the power gains provided by beamforming, the transmission range of wireless nodes can be increased such that data transmission of each S-D pair in the mesh network can be finished within fewer hops. Consequently, the link-sharing issue is alleviated and fewer relay nodes are required than under the spatial multiplexing scheme. To sum up, using beamforming can not only reduce interference but also alleviate the link-sharing issue by enlarging the transmission range. ### _Achievable Per-Node Throughput_ This section is mainly focused on deriving the achievable per-node throughput of the spatial multiplexing and beamforming schemes in the multi-tier mesh network under the \(D\)-hop maximum routing policy, respectively. The related theorems are stated as follows. **Theorem 2**.: _(Spatial Multiplexing) For the proposed multi-tier hierarchical architecture for a mesh network of \(n\) data nodes with network parameters scaling according to Table I, the per-node throughput under the \(D\)-hop maximum routing policy using the spatial multiplexing scheme is_ \[R^{(SM)}(n)=\min_{1\leq l\leq L}\left\{\Theta\left(l^{\psi+\nu-k}\right) \right\}.\] **Theorem 3**.: _(Beamforming) For the proposed multi-tier hierarchical architecture for a mesh network of \(n\) data nodes with network parameters scaling according to Table I, the per-node throughput under the \(D\)-hop maximum routing policy using the beamforming scheme is_ \[R^{(BF)}(n)=\min_{1\leq l\leq L}\left\{\Theta\left(\nu l^{\psi+2\nu/\alpha_{l}-k}\log l \right)\right\}.\] The detailed processes of deriving \(R^{(SM)}(n)\) and \(R^{(BF)}(n)\) are demonstrated in the following.
#### Iii-C1 Spatial Multiplexing Based on the aforementioned procedure of deriving the per-node throughput of the multi-tier mesh network, the first thing to do is to determine the number of data flows that pass through each tier. For simplicity, we consider the data flow of a randomly chosen S-D pair and analyze the probability that the data flow of this randomly chosen S-D pair passes through the \(l\)-th tier, denoted by \(Q_{l}\), under the \(D\)-hop maximum routing policy. The average number of data flows going across the \(l\)-th tier, denoted by \(N_{l}\), can be then obtained as \[N_{l}=Q_{l}N_{S-D}. \tag{20}\] Under the \(D\)-hop maximum routing policy, a data flow will not consume the \(l\)-th tier's resources if it doesn't go through the \(l\)-th tier. The induction method is adopted to derive \(Q_{l}\). Apparently, \(Q_{1}=1\) since the first hop of every data flow is always carried out in the first tier. Under the \(D\)-hop maximum routing policy, data will be transmitted to the next higher tier by upstream transmission if the number of hops between the source and destination at the \(l\)-th tier is larger than \(D_{l}\). Let \(P_{h}(h_{l}=x)\) denote the probability that the hop distance between two communicating nodes at the \(l\)-th tier is \(x\). From Fig. 1, it can be observed that there are \(6x\) nodes that are \(x\) hops away from each node at the \(l\)-th tier without consideration of edge effect [22]. The edge effect can be ignored when the number of nodes is very large, which holds valid for the lower tiers. Nevertheless, the node number for a higher tier is a finite integer, which calls the requirement to consider the edge effect when deriving \(P_{h}(h_{l}=x)\) for large \(l\). Consider the data transmission in the first tier with \(n\) data nodes. Let \(b_{l}(n,x)\) denote the number of nodes that are \(x\) hops away from node \(i\) in a single-tier mesh network with \(n\) nodes. Under the assumption of random and uniform distribution of data flows, i.e., each source node randomly chooses another node as its destination in the \(l\)-th tier, it can be obtained that \[P_{h}(h_{l}=x)=\frac{1}{n_{l}}\sum_{i=1}^{n_{l}}\frac{b_{i}(n_{l},x)}{n_{l}-1},\] which is a constant determined by \(x\) and \(n\). The probability that a randomly chosen data flow going across the \(l\)-th tier can be finished in the \(l\) tier under \(D\)-hop maximum routing policy, denoted by \(\xi_{l}(n_{l},D_{l})\), is equal to the probability that the hop distance between two communicating nodes is no more than \(D_{l}\), i.e., \[\xi_{l}\left(n_{l},D_{l}\right)=\sum_{x=1}^{D_{l}}P_{h}(h_{l}=x)\equiv\xi_{l},\] which is a constant when \(n_{l}\) and \(D_{l}\) are fixed. It can be seen that \(\xi_{l}\) represents the probability that a data flow requires no more than \(D_{l}\) hops to transmit in the \(l\)-th tier. In other words, for data flows going across the \(l\)-th (\(l<L\)) tier, the probability that it needs to be transmitted to the \((l+1)\)-th tier in upstream transmissions is \((1-\xi_{l})\). Thus, the probability that the data flow of an S-D pair going across the \(l\)-th tier \(Q_{l}\) can be derived by recurrence. 
To be specific, it can be obtained that \[Q_{l+1}=(1-\xi_{l})\cdot Q_{l},\quad l=1,...,L-1.\] Since \(Q_{1}=1\), the following expression for \(Q_{l}\) can be derived, \[Q_{l}=\begin{cases}1,&l=1,\\ \prod_{i=1}^{l-1}\left(1-\xi_{i}\right),&l=2,...,L.\end{cases}\] Let \(H_{j,l}\) denote the number of hops for data transmission of a data flow \(j\) at the \(l\)-th tier if it goes across the \(l\)-th tier. Similar to the single-tier case, each node in the multi-tier mesh network resorts to the time division method to separate data transfers of multiple data flows it needs to relay. Under the \(D\)-hop maximum routing policy, the number of hops in each of the \(L\) tiers for all the \(n\) data flows will never exceed \(D\). As for the \(l\)-th tier, there are \(N_{l}=nQ_{l}\) data flows going across it. Among them, \(N_{l+1}=nQ_{l+1}\) data flows need to be transmitted to the \((l+1)\)-th tier through upstream transmissions, leaving \(N_{l}^{\prime}=nQ_{l}^{\prime}=n(Q_{l}-Q_{l+1})\) data flows transferred using same-tier multi-hop transmission at the \(l\)-th tier. As for those \(N_{l}^{\prime}\) data flows transferred by the same-tier multi-hop scheme at the \(l\)-th tier, it can be easily observed that the number of hops between the source and destination node is \(\Theta(D_{l})\) under the \(D\)-hop maximum routing policy. Since those \(N_{l+1}\) data flows transmitted to the \((l+1)\)-th tier by upstream transmission will still be sent back to the \(l\)-th tier in downstream transmission from the \((l+1)\)-th tier, the number of hops for these data flows at the \(l\)-th tier can be divided into two parts. One is the number of hops from the source node at the \(l\)-th tier to its nearest \((l+1)\)-th tier node, the other is the number of hops from the destination node at the \(l\)-th tier to its nearest \((l+1)\)-th tier node. Recall that one \((l+1)\)-th tier node covers the area of several \(l\)-th tier cells. As can be seen, the average number of \(l\)-th tier cells contained in one \((l+1)\)-th tier cell is \[\Delta k_{l}=\frac{A_{l+1}}{A_{l}}=\left(1+\frac{1}{l}\right)^{k}.\] Thus, the expected number of hops needed for an \(l\)-th tier node to access an \((l+1)\)-th tier node is \(\Theta(\sqrt{\Delta k_{l}})\). The expected value of \(H_{j,l}\), under the \(D\)-hop maximum routing policy using spatial multiplexing, can be expressed as \[\mathbb{E}\left[H_{j,l}^{(SM)}\right]=\begin{cases}\zeta_{l}^{(u)}\sqrt{ \Delta k_{l}},&\text{for upstream data flows},\\ \zeta_{l}^{(s)}D_{l},&\text{for same-tier data flows},\end{cases} \tag{21}\] where \(\zeta_{l}^{(u)}\) and \(\zeta_{l}^{(s)}\) are two constants with scaling of \(O(1)\). Let \(Z_{l,i}\) denote the number of data flows node \(i\) at the \(l\)-th tier needs to help transfer under the \(D\)-hop maximum routing policy in the multi-tier mesh network. Based on the fact that the total number of data flows the \(n_{l}\) nodes at the \(l\)-th tier transfer equals the total number of hops at the \(l\)-th tier, it can be obtained that \[\sum_{i=1}^{n_{l}}Z_{l,i}=\sum_{j=1}^{N_{l}}H_{j,l}.\] Taking expectations on both sides yields that \[\sum_{i=1}^{n_{l}}\mathbb{E}\left[Z_{l,i}\right]=\sum_{j=1}^{N_{l}}\mathbb{E }\left[H_{j,l}\right].
\tag{22}\] Thus, the number of data flows node \(i\) at the \(l\)-th tier needs to transfer, under the spatial multiplexing scheme, can be obtained as \[\mathbb{E}\left[Z_{l,i}^{(SM)}\right] =\frac{nQ_{l+1}\zeta_{l}^{(u)}\sqrt{\Delta k_{l}}+n(Q_{l}-Q_{l+1 })\zeta_{l}^{(s)}D_{l}}{n/l^{k}}\] \[=l^{k}\left(Q_{l+1}\zeta_{l}^{(u)}\sqrt{\Delta k_{l}}+(Q_{l}-Q_{ l+1})\zeta_{l}^{(s)}D_{l}\right).\] Since \(Q_{l}\), \(Q_{l+1}\), \(\zeta_{l}^{(u)}\), and \(\zeta_{l}^{(s)}\) are all constants, to minimize the number of data flows for each node to transfer, \(D_{l}\) should be chosen as \(D_{l}=\Theta(\sqrt{\Delta k_{l}})\). For instance, we choose \[D_{l}=\sqrt{\Delta k_{l}}=\left(1+\frac{1}{l}\right)^{k/2}. \tag{23}\] Substituting (23) back yields \[\mathbb{E}\left[Z_{l,i}^{(SM)}\right]=\Theta\left(\left(l^{2}+l\right)^{k/2 }\right)=\Theta\left(l^{k}\right).\] As the number of hops for each data flow at the \(l\)-th tier is upper bounded by a constant under the \(D\)-hop maximum routing policy, the same-tier multi-hop transmission is finished in a local area, and so are the upstream transmission and downstream transmission. In this way, an excessive traffic load is rarely observed at any node w.h.p. Thus, the upper bound on the number of data flows any node at the \(l\)-th tier needs to transfer, \(Z_{l}^{(U)}\), under the spatial multiplexing scheme, can be taken as of the same order as \(\mathbb{E}\left[Z_{l,i}^{(SM)}\right]\), i.e., \[Z_{l}^{(U,SM)}=\Theta\left(l^{k}\right). \tag{24}\] Since all three types of data transmission of each tier take place in the same cell or between two neighboring cells, the maximum number of interfering neighboring cells for a specific link is upper bounded by a constant, i.e., \[\Delta c_{l}^{(SM)}=O\left(1\right). \tag{25}\] When orthogonal frequency bands are allocated to different tiers for data transmission, inter-tier interference under the spatial multiplexing scheme is completely canceled. To be specific, the bandwidth for data transmissions of the \(l\)-th tier is \(W_{l}\). Considering the three types of data transmission in the \(l\)-th tier, the same-tier transmission can be seen as a point-to-point \(M_{l}\times M_{l}\) MIMO system, while the upstream transmission to the \((l+1)\)-th tier and the downstream transmission from the \((l+1)\)-th tier can be regarded as the uplink and downlink of an MU-MIMO system with \(M_{l+1}/M_{l}=(1+1/l)^{\nu}\) users and one base station (BS) [18], where each user has \(M_{l}\) antennas and the BS has \(M_{l+1}\) antennas. As can be seen, under the \(D\)-hop maximum routing policy, data transmission at each tier is similar to that of a single-tier mesh network. Denote by \(\gamma_{l}^{(L,SM)}\) the lower bound on the SINR of the received signal at the \(l\)-th tier using spatial multiplexing. When time division is used to reduce intra-tier interference at the \(l\)-th tier, i.e., each node is only active in one of every \((1+\Delta c_{l})\) time slots, data transmission of each tier is interference-limited and the SINR of the received signal scales as \(\Omega(1)\) [14], i.e., \[\gamma_{l}^{(L,SM)}=\Omega(1).\] This result still holds when frequency division is used, i.e., evenly dividing the bandwidth of the \(l\)-th tier \(W_{l}\) into \((1+\Delta c_{l})\) segments and alternately allocating the segments across the network, which shows a similar effect to the time-division strategy.
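The flow-accounting recursion above is easy to prototype. The sketch below computes the tier-crossing probabilities \(Q_{l}\), the hop thresholds \(D_{l}\) of (23), and the resulting order of the per-node relay load under the spatial multiplexing scheme; the constant value used for \(\xi_{l}\) (and the values of \(k\) and \(L\)) are illustrative assumptions rather than quantities computed from the hop-distance distribution in the paper.

```python
# Minimal sketch of the D-hop maximum routing flow accounting (Section IV-C1).
# xi is an assumed constant; in the paper xi_l follows from the hop-distance distribution.

def flow_accounting(k: float, L: int, xi: float = 0.5):
    """Return (l, Q_l, D_l, relay-load order l^k) for tiers l = 1..L."""
    Q = [1.0]                                    # Q_1 = 1: every flow starts in the data tier
    for _ in range(1, L):
        Q.append(Q[-1] * (1.0 - xi))             # Q_{l+1} = (1 - xi_l) * Q_l
    out = []
    for l in range(1, L + 1):
        D_l = (1.0 + 1.0 / l) ** (k / 2.0)       # D_l = sqrt(Delta k_l), eq. (23)
        relay_load_order = l ** k                # E[Z_{l,i}^{(SM)}] = Theta(l^k)
        out.append((l, Q[l - 1], D_l, relay_load_order))
    return out

if __name__ == "__main__":
    for l, Q_l, D_l, Z_order in flow_accounting(k=4, L=10):
        print(f"tier {l:2d}: Q_l = {Q_l:.4f}, D_l = {D_l:.2f}, relay-load order l^k = {Z_order:.0f}")
```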
Since a DoF gain of \(N_{\text{min}}=\min\{N_{t},N_{r}\}\) can be obtained in an \(N_{t}\times N_{r}\) MIMO system when the spatial multiplexing scheme is utilized [19], an achievable link rate for the data transmission in the \(l\)-th tier, denoted by \(R_{l}^{(L,SM)}\), can be obtained as \[R_{l}^{(L,SM)}=\Theta\left(M_{l}W_{l}\right)=\Theta\left(M_{1}W_{1}l^{\psi+\nu}\right). \tag{26}\] Plugging (24), (25), and (26) into (18), an achievable end-to-end rate for the \(l\)-th tier under the frequency division scheme can be obtained as \[R_{l}^{(SM)}(n)=\Theta\left(\frac{M_{1}W_{1}l^{\psi+\nu}}{l^{k}} \right)=\Theta\left(l^{\psi+\nu-k}\right). \tag{27}\] Thus, the per-node throughput for the multi-tier mesh network using frequency division among different tiers can be derived by substituting (27) into (19), i.e., \[R^{(SM)}(n)=\min_{1\leq l\leq L}\left\{\Theta\left(l^{\psi+\nu-k}\right) \right\}, \tag{28}\] which completes the proof of Theorem 2. #### Iii-B2 Beamforming In this scenario, it is assumed that all nodes in the multi-tier mesh network utilize beamforming to transmit a single stream of data, where transmission energy is concentrated in a certain direction targeting the receiving node, causing little interference to other concurrently transmitting nodes. Utilizing the power gain provided by beamforming, the transmission range of wireless nodes can be increased, as demonstrated in [20]. To be specific, when each node is equipped with \(M\) antennas, a power gain of \(M^{2}\) can be provided. Then the transmission range can be obtained by solving the equation \[CPM^{2}\tilde{r}_{0}^{-\alpha}=P^{0},\] which yields \[\tilde{r}_{0}=\left(\frac{CP}{P^{0}}\right)^{1/\alpha}M^{2/\alpha}=M^{2/\alpha }r_{0}.\] As can be seen, the transmission range is increased \(M^{2/\alpha}\) times. In this way, the expected number of hops needed at the \(l\)-th tier can be reduced to a fraction of that under the spatial multiplexing scheme, i.e., \[\mathbb{E}\left[H_{j,l}^{(BF)}\right]=\frac{\mathbb{E}\left[H_{j,l}^{(SM)} \right]}{M_{l}^{2/\alpha}}=M_{l}^{-2/\alpha}\mathbb{E}\left[H_{j,l}^{(SM)} \right], \tag{29}\] where \(\mathbb{E}\left[H_{j,l}^{(SM)}\right]\) has been derived in (21). Substituting (29) into (22) yields that \[\mathbb{E}\left[Z_{l,i}^{(BF)}\right]=\Theta\left(l^{k}M_{l}^{-2/\alpha_{l}}\right)=\Theta\left(l^{k-2\nu/\alpha_{l}}\right).\] Similarly, an upper bound on the number of data flows any node at the \(l\)-th tier needs to transfer, \(Z_{l}^{(U)}\), under the beamforming scheme, can be taken as \[Z_{l}^{(U,BF)}=\Theta\left(l^{k-2\nu/\alpha_{l}}\right). \tag{30}\] Compared to (24), it can be seen that \[Z_{l}^{(U,BF)}\leq Z_{l}^{(U,SM)},\] since \(\alpha_{l}\) and \(\nu\) are both positive constants. Hence, each node needs to help transfer fewer data flows using beamforming, i.e., the link-sharing issue is further alleviated. Similarly, since data transmissions of each tier are carried out between two neighboring cells or in the same cell, the maximum number of interfering neighboring cells to reduce excessive interference and satisfy the half-duplex constraint, for a specific link under the beamforming scheme, is also upper-bounded by a constant, i.e., \[\Delta c_{l}^{(BF)}=O\left(1\right). \tag{31}\]
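The range-extension argument just derived is simple to tabulate. The following minimal Python sketch evaluates the factor \(M^{2/\alpha}\) by which beamforming stretches the transmission range and the corresponding order \(l^{k-2\nu/\alpha}\) of the per-node relay load; the antenna counts and the values of \(\alpha\), \(k\), and \(\nu\) are illustrative assumptions.

```python
# Minimal sketch of the beamforming range extension and relay-load reduction.
# M, alpha, k, and nu below are illustrative assumptions.

def range_extension_factor(M: int, alpha: float) -> float:
    """r_tilde_0 / r_0 = M^(2/alpha) for an M-antenna node with beamforming power gain M^2."""
    return M ** (2.0 / alpha)

def relay_load_order(l: int, k: float, nu: float, alpha: float) -> float:
    """Order of the per-node relay load under beamforming: l^(k - 2*nu/alpha)."""
    return l ** (k - 2.0 * nu / alpha)

if __name__ == "__main__":
    alpha, k, nu = 3.0, 4.0, 2.0
    for l, M in [(1, 4), (2, 16), (4, 64)]:      # assumed antenna counts M_l = M_1 * l^nu with M_1 = 4
        print(f"tier {l}: range x{range_extension_factor(M, alpha):.2f}, "
              f"relay-load order l^(k-2nu/alpha) = {relay_load_order(l, k, nu, alpha):.2f}")
```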
When beamforming is utilized, the interference suffered by each receiving node can be regarded as much smaller than the transmit power, i.e., interference power scales as \(O(1)\). Let \(\gamma_{l}^{(L,BF)}\) denote the lower bound on the SINR of the received signal at the \(l\)-th tier using the beamforming scheme. It can be derived that \[\gamma_{l}^{(L,BF)}=\Theta(C_{l}M_{l}^{2}P_{l}d_{l}^{-\alpha_{l}}),\] where \(d_{l}\) represents the expected distance between the transmitter and the receiver at the \(l\)-th tier, and \(\alpha_{l}\) and \(C_{l}\) are the path-loss exponent and the characteristic constant (determined by frequency, antenna profile, etc.) of the \(l\)-th tier, respectively. Since an \(N_{t}\times N_{r}\) MIMO system can provide a power gain of \(N_{t}N_{r}\) when used for beamforming, a lower bound on link transmission rate, denoted by \(R_{l}^{(L,BF)}\), can be derived under the high-SNR assumption as \[R_{l}^{(L,BF)} =\Theta\left(W_{l}\log\left(C_{l}M_{l}^{2}P_{l}d_{l}^{-\alpha_{l}}\right)\right) \tag{32}\] \[\overset{(a)}{=}\Theta\left(W_{1}l^{\psi}\log\left(M_{1}^{2}l^{2\nu}P_{l}^{0}\right)\right)\] \[=\Theta\left(\nu l^{\psi}\log l\right),\] where \((a)\) comes directly from (15). Plugging (30), (31), and (32) into (18), an achievable end-to-end rate for the \(l\)-th tier using beamforming, denoted by \(R_{l}^{(BF)}(n)\), can be obtained as \[R_{l}^{(BF)}(n) =\Theta\left(\frac{\nu l^{\psi}\log l}{l^{k-2\nu/\alpha}}\right) \tag{33}\] \[=\Theta\left(\nu l^{\psi+2\nu/\alpha-k}\log l\right).\] Substituting (33) into (19), the per-node throughput for the multi-tier mesh network under the beamforming scheme can be obtained as \[R^{(BF)}(n)=\min_{1\leq l\leq L}\left\{\Theta\left(\nu l^{\psi+2\nu/\alpha-k}\log l\right)\right\}, \tag{34}\] which completes the proof of Theorem 3. ## V Scalability Conditions for a Multi-Tier Mesh Network In the previous section, we obtained the per-node throughput of a multi-tier mesh network under both spatial multiplexing and beamforming schemes. The results are subject to the scaling orders of bandwidth allocation, transmit power, and antenna number. In this section, we aim to investigate the requirements on such parameters to achieve scalability under different transmission schemes. As shown in (19), the per-node throughput of a multi-tier mesh network \(R(n)\) is defined as the minimum of the end-to-end rates \(R_{l}(n)\) for \(1\leq l\leq L\). To obtain a scalable per-node throughput of \(\Theta(1)\), \(R_{l}(n)=\Omega(1)\) should be met for \(1\leq l\leq L\), i.e., the conditions for a multi-tier mesh network to achieve throughput-scalability are \[R_{l}(n)=\Omega(1),\quad l=1,2,...,L. \tag{35}\] In the following, the specific scalability conditions for a multi-tier mesh network under spatial multiplexing and beamforming schemes are derived, respectively. ### _Scalability Conditions for Spatial Multiplexing_ The end-to-end rate of the \(l\)-th tier using the spatial multiplexing scheme, \(R_{l}^{(SM)}(n)\), has been derived in (27). As can be seen, \(R_{l}^{(SM)}(n)\) scales in the order of \(l\) to the power of \(\psi+\nu-k\), which is determined by the scaling relationships of bandwidth, antenna number, and node number. Apparently, for the first tier with \(l=1\), the end-to-end rate is \(R_{1}^{(SM)}(n)=\Theta(M_{1}W_{1})=\Theta(1)\), which remains scalable if the allocated bandwidth \(W_{1}\) and antenna number \(M_{1}\) are fixed.
As for upper tiers, substituting \[R_{l}^{(SM)}(n)=\Theta(l^{\psi+\nu-k})\] into (35), to achieve a scalable throughput using the spatial multiplexing scheme, it should be satisfied that \[\psi+\nu\geq k, \tag{36}\] in order to make sure that \(R_{l}^{(SM)}(n)=\Omega(1)\), since in this way, \[l^{\psi+\nu-k}\geq 1,\] and hence the per-node throughput of the multi-tier mesh network, obtained by taking the minimum value of \(R_{l}^{(SM)}(n)\) from \(l=1\) to \(L\), is scalable as \(n\) increases. Notably, (36) suggests that to achieve a scalable throughput, higher relay tiers should not become the bottleneck of the network. They are required to be capable of handling data transfers coming from the lower tiers. To achieve this, the summation of bandwidth increasing order \(\psi\) and antenna number increasing order \(\nu\), i.e., \(\psi+\nu\), should be no less than the decreasing order of the node number \(k\). Put differently, the decreasing order \(k\) of the number of relay nodes deployed at each tier must not exceed \(\psi+\nu\). ### _Scalability Conditions for Beamforming_ The end-to-end rate of the \(l\)-th tier using the beamforming scheme, \(R_{l}^{(BF)}(n)\), has been obtained in (33). As can be seen, \(R_{l}^{(BF)}(n)\) scales in the order of \(l\) to the power of \(\psi+2\nu/\alpha-k\), times \(\nu\log l\). Similarly, for the first tier with \(l=1\), the end-to-end rate is \(R_{1}^{(BF)}(n)=\Theta(W_{1}\log M_{1}^{2})=\Theta(1)\), which remains scalable as long as the allocated bandwidth \(W_{1}\) and antenna number \(M_{1}\) are fixed. Since frequency bands can be shared among multiple tiers under the beamforming scheme, a common path loss exponent, say \(\alpha\), can be taken for convenience. In order to make sure that \(R_{l}^{(BF)}(n)=\Omega(1)\) for upper tiers, substituting \[R_{l}^{(BF)}(n)=\Theta\left(\nu l^{\psi+2\nu/\alpha-k}\log l\right)\] into (35), a scalable throughput using the beamforming scheme can be obtained, as \(n\) increases, if \[\psi+2\nu/\alpha\geq k. \tag{37}\] In this way, \(R_{l}^{(BF)}(n)\geq R_{l-1}^{(BF)}(n)\) for \(l=2,...,L\) and therefore \(R^{(BF)}(n)=R_{1}^{(BF)}(n)=\Theta(1)\), which means that a scalable per-node throughput can be obtained. From (37), to achieve a scalable throughput, the summation of bandwidth increasing order \(\psi\) and antenna number increasing order \(\nu\) times \(2/\alpha\), i.e., \(\psi+2\nu/\alpha\), should be no less than the decreasing scaling order of the node number \(k\). Put differently, the decreasing order \(k\) of the number of relay nodes deployed at each tier must not exceed \(\psi+2\nu/\alpha\). ### _Clarification on Scalability of Mesh Networks_ Assume \(k\geq 2\) is satisfied to ensure the convergence of the total number of nodes. From the scalability conditions of (36) and (37), the required bandwidth and antenna numbers should go to infinity as \(n\rightarrow\infty\). Moreover, (17) indicates that the transmit power of the highest tier should also increase exponentially with \(n\). In summary, the \(\Theta(1)\) throughput for a continuously expanding network can only be obtained with unlimited resources. However, such an endless expansion of the network size is infeasible and impractical in a realistic scenario. The deployment region for a wireless network is always limited in practice.
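To make these conditions and the implied resource growth concrete, the following short sketch (an added illustration; the parameter values anticipate the case study in Section V-D and are otherwise assumptions) checks (36) and (37) and shows how node numbers, bandwidth, and antenna counts would grow across three tiers:

```r
# Illustration only (not from the paper): checking conditions (36) and (37) and
# the implied per-tier resource growth for k = 8, psi = 4, nu = 4, alpha = 3.
k <- 8; psi <- 4; nu <- 4; alpha <- 3
c(sm_scalable = psi + nu >= k,              # condition (36): TRUE
  bf_scalable = psi + 2 * nu / alpha >= k)  # condition (37): FALSE here
l <- 1:3
rbind(n_l = 1e4 / l^k,        # nodes per tier:   10000, ~39, ~2
      W_l = 10 * l^psi,       # bandwidth in MHz: 10, 160, 810
      M_l = l^nu)             # antennas per node: 1, 16, 81
```

For this parameter choice only the spatial multiplexing condition (36) is met, consistent with the case study below being carried out under that scheme.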
When the region size of a wireless network reaches a certain value, it is expected to connect to the wired network (i.e., the Internet), and thus establish a hybrid networking architecture for communication. The capacity scaling laws on hybrid networks have already been investigated in [14, 22, 23, 24, 25, 26, 27]. Their results indicate that, by utilizing the high throughput provided by wired links, a scalable end-to-end rate between two wireless nodes is achievable. However, the throughput-scalability of the wireless part in the hybrid network is not fully addressed. To resolve this issue, we make the following clarification on the scalability of mesh networks in this paper. Specifically, we are aiming at deploying a throughput-scalable wireless mesh network under the available resources, while making the network size as large as possible. In other words, a per-node throughput of \(\Theta(1)\) is guaranteed for the wireless part of the network deployed in such a finite region without considering the wired nodes. ### _Case Study_ Here we present a case study to investigate the practicality of the proposed multi-tier mesh architecture in a realistic deployment. Consider a circular region populated with \(10,000\) data nodes. We found that choosing \(k=8\), \(\psi=4\), and \(\nu=4\) yields feasible network parameters under the spatial multiplexing scheme, while maintaining the throughput-scalability. As shown in Table II, a three-tier mesh network is established with \(n_{2}=10^{4}/2^{8}\approx 39\) and \(n_{3}=10^{4}/3^{8}\approx 2\). We assume that the path-loss exponents for all three tiers are 3, and that the antenna gains are \(C_{1}=3\) dB, \(C_{2}=6\) dB, and \(C_{3}=9\) dB, respectively. If each data node has a transmit power of 1 mW, the minimum receiving signal power to achieve a transmission range of 50 m is \(P_{1}^{0}=-78\) dBm, derived from (15), which is a reasonable value in practice. The network radius is roughly \(50\times\sqrt{10000/\pi}=2.8\) km. The required transmission ranges of the second and third tiers can be derived from (16), i.e., \(d_{2}=50\times 2^{4}=800\) m and \(d_{3}=800\times 1.5^{4}\approx 4.05\) km. The required transmit powers of nodes in the second and third tiers are 2 W and 13 W, respectively, assuming that \(P_{2}^{0}=P_{3}^{0}=P_{1}^{0}=-78\) dBm. We allocate a bandwidth of 10 MHz to the first tier; then the required bandwidths for the second and third tiers are 160 MHz and 810 MHz, respectively, which are reasonable parameters in B5G or 6G networks where high-frequency bands such as mmWave and terahertz can be leveraged. In addition, supposing each data node is equipped with a single antenna, the required antenna numbers for relay nodes in the second and third tiers are 16 and 81, respectively, which are also achievable in realistic deployment. All the above results are summarized in Table II. ## VI Conclusion In this paper, interference and link-sharing were identified as two key factors that limit the scalability of mesh networks. A multi-tier hierarchical mesh network architecture was developed to resolve the link-sharing issue. Combined with certain schemes of interference reduction, a scalable per-node throughput of \(\Theta(1)\) was proven to be achievable in the multi-tier mesh network, when certain conditions on bandwidth, antenna numbers, and transmit power are satisfied. The case study carried out also demonstrated the feasibility of multi-tier mesh networking in a realistic deployment.
However, the results also indicate that, to attain a scalable capacity in a multi-tier mesh network, the requirements on either bandwidth or the number of relay nodes are highly demanding, which calls for further exploration of more efficient architectures and the corresponding networking methods.
2305.19893
Web scraping: a promising tool for geographic data acquisition
With much of our lives taking place online, researchers are increasingly turning to information from the World Wide Web to gain insights into geographic patterns and processes. Web scraping as an online data acquisition technique allows us to gather intelligence especially on social and economic actions for which the Web serves as a platform. Specific opportunities relate to near-real-time access to object-level geolocated data, which can be captured in a cost-effective way. The studied geographic phenomena include, but are not limited to, the rental market and associated processes such as gentrification, entrepreneurial ecosystems, or spatial planning processes. Since the information retrieved from the Web is not made available for that purpose, Web scraping faces several unique challenges, several of which relate to location. Ethical and legal issues mainly relate to intellectual property rights, informed consent and (geo-) privacy, and website integrity and contract. These issues also affect the practice of open science. In addition, there are technical and statistical challenges that relate to dependability and incompleteness, data inconsistencies and bias, as well as the limited historical coverage. Geospatial analyses furthermore usually require the automated extraction and subsequent resolution of toponyms or addresses (geoparsing, geocoding). A study on apartment rent in Leipzig, Germany is used to illustrate the use of Web scraping and its challenges. We conclude that geographic researchers should embrace Web scraping as a powerful and affordable digital fieldwork tool while paying special attention to its legal, ethical, and methodological challenges.
Alexander Brenning, Sebastian Henn
2023-05-31T14:27:24Z
http://arxiv.org/abs/2305.19893v1
# Web scraping: a promising tool for geographic data acquisition ###### Abstract With much of our lives taking place online, researchers are increasingly turning to information from the World Wide Web to gain insights into geographic patterns and processes. Web scraping as an online data acquisition technique allows us to gather intelligence especially on social and economic actions for which the Web serves as a platform. Specific opportunities relate to near-real-time access to object-level geolocated data, which can be captured in a cost-effective way. The studied geographic phenomena include, but are not limited to, the rental market and associated processes such as gentrification, entrepreneurial ecosystems, or spatial planning processes. Since the information retrieved from the Web is not made available for that purpose, Web scraping faces several unique challenges, several of which relate to location. Ethical and legal issues mainly relate to intellectual property rights, informed consent and (geo-) privacy, and website integrity and contract. These issues also affect the practice of open science. In addition, there are technical and statistical challenges that relate to dependability and incompleteness, data inconsistencies and bias, as well as the limited historical coverage. Geospatial analyses furthermore usually require the automated extraction and subsequent resolution of toponyms or addresses (geoparsing, geocoding). A study on apartment rent in Leipzig, Germany is used to illustrate the use of Web scraping and its challenges. We conclude that geographic researchers should embrace Web scraping as a powerful and affordable digital fieldwork tool while paying special attention to its legal, ethical, and methodological challenges. Friedrich Schiller University Jena, Department of Geography, and Michael Stifel Center Jena for Data-Driven and Simulation Science (MSCJ), Jena, Germany. Keywords: Web scraping, Web mining, volunteered geographic information, internet data sources, geographic information retrieval. ## 1 Introduction Since going live in 1991, the World Wide Web ('Web') as part of the broader internet has revolutionized the way in which humans access information. Today, the average internet user spends almost seven hours per day using the internet, or more than 40 percent of their waking time (DataReportal, 2022). Web browsers have become one-stop software for accessing many services, with online platforms increasingly replacing brick-and-mortar businesses and government service desks. Geospatial data in the Web --georeferenced data as well as place names in texts-- have continued to gain importance with the advent of the Web 2.0 and the GeoWeb (Haklay et al., 2008). Dynamic Web pages, interactive map displays, location-based services and volunteered geographic information (VGI) are the result of an increased availability of positioning technologies and reflect the importance of location in our lives. Despite these fundamental and rapid changes in our private and professional lives, research methodologies do not yet fully reflect this radical transformation. In geography and related fields, internet-enabled research strategies mostly relate to conducting Web surveys, leveraging data provided through spatial data infrastructures, or advancing open science by sharing data and code on platforms such as Pangaea (Pangaea, 2022) for geoscientific data.
VGI and crowdsourcing have also emerged as overlapping concepts that describe the increasing availability of user-generated online contents, including geospatial data generated by collaborative mapping initiatives such as OpenStreetMap (Goodchild, 2007). Social media messages from platforms such as Twitter are today also increasingly analyzed with regard to their potential for geographic analyses (de Albuquerque et al., 2015). Unlike the general Web, the platformization of open data sharing and the mining of social media information either strive to establish, or rely on already established standards expressed by contributor guidelines or implemented in the form of application programming interfaces (APIs) facilitating access to Web services. The broader Web, in contrast, does not know --or reveal-- such standards, resulting in specific challenges in and techniques for accessing and capturing its contents for subsequent data analysis (Glez-Pena et al., 2014). Each website can have a unique structure and follow its own conventions for contents, location references, and structure. This does not make the Web less relevant for academic research: Just as historical vessel logbooks with their semi-structured information can be an invaluable addition to instrumental climate records (Schweiger et al., 2019), we can learn to extract relevant geographic information from the broader Web in order to complement other means of data acquisition such as surveys or authoritative sources. Web scraping has therefore emerged as an approach to information retrieval from the Web for a variety of geographic research questions. Beyond the research context, we would like to point out the potential of Web scraping in teaching geographic information science, especially for experiential learning where students should be exposed to new and unexpected challenges that come with the use of real data. The Web, and therefore Web-enabled data acquisition, offer an ever-replenishing supply of data that can be used to create more variable teaching situations than those offered by textbook datasets. Moreover, outside the academic context, Web scraping has the potential to provide valuable information for commercial avenues such as geomarketing (Boegershausen et al., 2022) and government applications such as official statistics (Hoekstra et al., 2012); these share technological approaches and challenges with the academic context; nevertheless, their ethical and legal frameworks are partly distinct. The purpose of this paper is to promote responsible Web-scraping practices as a data acquisition tool for academic research and teaching in geography and related fields. It outlines technological strategies as well as ethical, legal and methodological challenges, and presents a survey of geographic studies involving Web scraping, which may serve as templates for future applications of Web scraping. In contrast to previous overviews in other disciplinary contexts such as psychological, food-price or hospitality research (Landers et al., 2016; Hillen, 2019; Han and Anderson, 2021), we also highlight aspects that are specific to geographic and geospatial applications, involving implicit and explicit location references and geospatial relationships. This contribution is structured as follows: First, an overview of current opportunities and applications of Web scraping in geography and related disciplines is given.
Based on this, a typical geographic Web-scraping workflow and an overview of related techniques are presented in order to clarify underlying concepts and technical requirements (Section 3). The subsequent sections elaborate on legal and ethical issues (Section 4) and methodological challenges affecting data quality, again with a particular focus on spatiality (Section 5). These overarching opportunities and issues are depicted in Figure 1. To illustrate the workflow and its challenges in a case study, we will analyze apartment rents from a spatial perspective using Leipzig, Germany as an example (Section 6), and we finally discuss the potentials and limitations of Web scraping in a geographic and geospatial research context. ## 2 Opportunities and Applications in Geographic Research Web scraping has only started to gain importance for research in the last five years across all disciplines, including geography and related fields (e.g., planning, tourism, and conservation), according to a search in the Web of Science Core Collection (WoS; search terms 'web scraping', 'web-scraping' and 'webscraping' in 'All Fields'; Figure 2). Although it is still a relatively minor phenomenon in these geospatial research areas (65 publications in the WoS) and not all relevant studies are detected based on these search terms, the existing studies allow us to identify research directions and opportunities. ### Overview of Application Domains The most prominent geography-related application domains include the real-estate market (especially short-term rentals; Han and Anderson, 2021) and tourism (Table 1). Issues addressed with the help of Web scraping include the analysis of urban transformation in response to a demand for short-term leases (Wachsmuth and Weisler, 2018; Adamiak et al., 2019; Hubscher et al., 2020), the classification and mapping of company websites (Kinne and Axenbeck, 2020), and the spatial analysis of real-estate prices using hedonic models (Bonetti et al., 2016; Tomal, 2020; Boeing and Waddell, 2017), among others. In these fields, Web scraping benefits from the existence of online platforms in markets with a strong platform concentration such as hospitality and real estate. In physical geography, the focus has mostly been on data acquisition from government websites that do not offer more strongly standardized interfaces such as Web services (Bonifacio et al., 2015; Canli et al., 2018; Samourkasidis et al., 2019). In terms of the type of information scraped, the vast majority of studies focuses on retrieving information on spatial entities (e.g., apartment offers, mosques) and spatial time series (weather data), their characteristics and location references. A relational approach has also been taken to map relationships between companies within an entrepreneurial ecosystem (Kinne and Axenbeck, 2020), highlighting the potentials of Web scraping --and of hyperlinks in particular-- to connect various entities. Beyond academic geography, consumer price research as part of official government statistics has been assessing the potentials of Web scraping for more than a decade in order to automate data retrieval and diversify the data sources (Hoekstra et al., 2012; Blaudow and Seeger, 2020; Virgillito and Polidoro, n.d.). Food-price research is a related area in which the utility of Web scraping was reviewed recently (Hillen, 2019).
Figure 1: Mind map showing the various aspects of Web scraping discussed in this paper.
Regarding the parts of the Web that are covered, the surveyed studies focus on information retrieval from the Clear Web, which is open to everyone, and those parts of the Deep Web that are publicly accessible by, for example, dynamically querying information from databases. None of the geographic studies explored the Darknet, which has so far only been scraped for studying online drug markets and cyber crimes (e.g., Crowder and Lansiquot, 2021). ### Opportunities for Geographic Research Our review of the literature revealed that studies that use scraping techniques benefit from various aspects of the harvested information, creating opportunities for new lines of research in their respective application domains: * **Object-level geospatial data.** Since data analyses of individuals or objects at an aggregated level, as offered by most official statistics, may result in ecological fallacies, object-level data is desirable if not necessary for many geographic analyses. This level of detail and the corresponding precision of location information is also essential to obtain relevant micro-geographic attribute information describing, for example, accessibility (Tomal, 2020). Researchers have therefore recognized the potential of Web scraping to provide access to object-level geospatial information by harvesting online platforms in the hospitality or real-estate sectors (Wachsmuth and Weisler, 2018; Tomal, 2020), or crawling the Web for company websites and their relations (Kinne and Axenbeck, 2020). * **Near-real-time data.** The scraped online sources often provide data in (near-) real-time, such as the most recent apartment offers or environmental data. Especially socio-economic data such as tourism statistics would otherwise only become available with delay or upon request. Although real-time information is usually not necessary for research (exception: e.g., Canli et al. (2018)), this may help to improve the timeliness of research and avoid bottlenecks in data collection. * **Access to user-generated content.** User-generated Web contents such as VGI are a (possibly biased) reflection of the interests, motivations and actions of citizens and businesses, but they do not normally become part of public statistics and archives, not even at an aggregated level. Web scraping is the key to these digital media, creating opportunities to study not only user-contributed factual information (e.g., on mosquitoes; Gravelle et al., 2021), but also the content generators' attitudes (Lin and Kant, 2021) and self-portrayal (Schmidt et al., 2022). * **Independence of Web services.** While governments are striving to implement Web services to share their data, for example geospatial data under the European Commission's INSPIRE directive, Web scraping allows researchers to capture public data that is not (yet) provided in such a standardized form. In other words, the use of Web scraping may be indicative of a lack of such services in areas that are of relevance for geographic research, such as specific types of weather data (Canli et al., 2018) or public meeting reports (Hui, 2017). The same could be said about commercial online platforms, which in some cases provide a fee-based API whose use can be avoided through Web scraping.
Overall, while not all of these advantages apply to all use cases of geographic Web scraping equally, and Web scraping may sometimes simply be used due to its cost-effectiveness, they demonstrate that this technique has its place alongside traditional offline as well as online data acquisition methods.
Figure 2: Adoption of Web scraping in the academic literature since the year 2000. Data source: Clarivate Web of Science Core Collection, (c) Clarivate, 10 May 2023.
## 3 The Web-scraping Workflow The typical Web-scraping workflow in geographic research, shown in Figure 3, requires a thorough assessment of legal and ethical aspects as well as an evaluation of the technical feasibility of scraping a suitable website, including attention to location references. Considering the challenge of extracting information from websites and webpages whose structure is undocumented to the public and therefore has to be guessed by the researchers, testing, debugging and validation play a critical role. They are often more time-consuming and challenging than for processing data streams with a well-documented structural and semantic specification. It is, in particular, not uncommon to encounter deviations from the inferred and expected format (or semantic details) after running a scraper for some time. For geospatial analyses it is of particular importance to extract location references such as place names, complete addresses, or coordinates. Although coordinates are provided in some cases (e.g., Tomal, 2020), these are usually not displayed verbatim but rather embedded in hyperlinks (URLs) to map displays or in JavaScript data structures that are not actually displayed. Coordinate reference systems are usually not specified, but latitude/longitude information in the World Geodetic System (WGS84) reference is the _de facto_ standard (e.g., Google Maps API). Place names and addresses are useful for obtaining coordinates by toponym resolution or geocoding (Melo and Martins, 2017). When place names are embedded in unstructured text, it may be necessary to use sophisticated algorithms to recognize them beforehand based on gazetteers or machine learning (geoparsing; Hu et al., 2022). Apart from extracting location references, a relational approach may help to uncover networks relating places and spatial entities to each other. This can be achieved by extracting embedded hyperlinks or entity names. In a recent example, regional company networks representing entrepreneurial ecosystems were reconstructed through crawling and scraping (Kinne and Axenbeck, 2020). Here, a challenge lies in determining which hyperlinks, or which imprecisely matched entity names, are relevant for the type of relationship of interest (e.g., business partnerships). Machine-learning classifiers can be used for this task (Liu et al., 2020). The extraction of other, non-geographic attributes describing the scraped objects is another important task. Established Web-scraping tools are able to extract pieces of information based on structural elements of HTML documents or embedded JavaScript code such as specific tags. Challenges in retrieving numeric data include non-standard and sometimes inconsistent representations of numeric values and their units. \begin{table} \begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} **Authors** & **Purpose** & **Major themes** & **Scraped information** & **Scraped websites** & **Software/Services used** \\ \hline Newing et al.
(2022) & To assess online groceries coverage & Urban–rural inequalities, food deserts & Delivery coverage by postcode & Several online retailers & Python with requests library \\ Wachsmuth and Weisler (2018) & To assess the extent of Airbnb-induced gentrification & Gentrification trends and sharing economy & Price, occupancy, location & Airbnb via AirDNA & None \\ Tomal (2020) & To identify geographic and structural determinants of apartment rent & Non-stationary hedonic modeling & Apartment rent, characteristics, location & otodom.pl & Unknown \\ Kinne and Axenbeck (2020) & To map innovation ecosystems & Innovation ecosystems; Web-scraping methodology & Company website text and relations & Crawled company websites & ARGUS based on Python with Scrapy \\ Hui (2017) & To examine use of permitting process in coastal management & Bureaucratic transparency; text mining methodology & Meeting reports incl. task, outcome, address & coastal.ca.gov & Unknown \\ Lin and Kant (2021) & To assess the role of social media in citizen participation & Effectiveness of participation in planning processes & Posted messages, their comments, likes, shares & facebook.com & ScrapeStorm \\ Tachibana et al. (2021) & To reveal the distribution of nature TV programs & Cultural ecosystem services & Broadcasting details, textual summary, extracted toponyms & Archived TV programs at nhk.or.jp & R, Webdriver, rvest, Selenium; goo Lab API \\ Schmidt et al. (2022) & To determine the extent of greenwashing in industry & Air pollution, company self-portrayal & Sustainability-related text fragments & Company websites & Unknown \\ Canli et al. (2018) & To interpolate rainfall data in real time & Landslide early warning system & Rainfall measurements at gauging stations & Multiple weather websites & JavaScript with MeteorJS, Cheerio, htmlparser2 \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of selected applications of Web scraping in geographic and related research. Unstructured text contents, in contrast, need to be represented as a set of features using text mining (Munzert et al., 2014) or topic modeling (Thiehlmann et al., 2021). While text mining focuses on extracting features, such as keywords or item frequencies from text, topic modeling can be useful for identifying semantic clusters that may relate to perceptions or discourses referring to places or spatial entities. Sentiment analysis is also of particular interest as it provides a quantitative means of assessing negative or positive emotions that may relate to social characteristics, health outcomes, or marginalization (e.g., Mitchell et al., 2013; de las Heras-Pedrosa et al., 2020). Moreover, classification techniques can be used to add attributes to scraped pages or classify them thematically (e.g., Kinne and Axenbeck, 2020). Although Web-scraping software with a graphical user interface (GUI) is increasingly becoming available, programming offers the most flexible approach to retrieving and mining Web contents. Popular environments include the data analysis language R (R Core Team, 2021) with its Web-scraping extension rvest (Wickham, 2021). In the Python language, Beautiful Soup (Richardson, 2022) (combined with an HTML parser such as html.parser) and the more comprehensive Scrapy library offer similar functionality.
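As a minimal illustration of such tag-based extraction (the file name, page structure, CSS selectors, and coordinate pattern below are hypothetical assumptions and will differ between platforms), an R/rvest sketch might look as follows:

```r
# Minimal rvest sketch; selectors and the coordinate pattern are hypothetical.
library(rvest)
library(stringr)

page  <- read_html("listing.html")                    # a locally saved example page
title <- html_text2(html_element(page, "h1.title"))
rent  <- html_text2(html_element(page, "span.price")) # e.g. "1.234,56 EUR"
rent  <- as.numeric(str_replace(str_replace_all(rent, "[^0-9,]", ""), ",", "."))

# Coordinates are often not displayed but embedded in a map link or script:
map_url <- html_attr(html_element(page, "a.map-link"), "href")
coords  <- str_match(map_url, "lat=([0-9.]+).*lon=([0-9.]+)")
lat <- as.numeric(coords[, 2])
lon <- as.numeric(coords[, 3])
```

Equivalent functionality is available in Python via Beautiful Soup or Scrapy.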
Regardless of the chosen programming language, it is important to acquire some basic understanding of HTML, CSS (Cascading Style Sheets), and JavaScript in order to disentangle the structure and meaning of webpages. Proficiency in text processing (e.g., regular expressions and string operations; Wickham, 2019) is also essential, while more advanced natural language processing and text mining skills (Feinerer et al., 2008; Munzert et al., 2014; Han and Anderson, 2021) are needed for processing unstructured text. Since many websites require user interaction to load and display relevant contents, software for automated testing of Web applications also has its place in Web scraping, depending on a website's characteristics. Such user interactions may include scrolling a webpage, filling and submitting a Web form, accepting cookies, or logging in as a user. Selenium with its R (Harrison, 2022) and Python libraries is widely used in this context. Apart from programming, GUI tools available to researchers include mostly commercial visual Web-scraping programs that do not require (traditional) programming skills, such as ScrapeStorm and ParseHub. Typical features include support for interactive websites, cloud storage, database connectivity, IP (internet protocol) address rotation, and VPN (virtual private network) connections. VPN and IP rotation are strategies designed to avoid getting blocked by a server based on scraping patterns or scraper location (see Section 4 for possible legal and ethical issues). GUI-based solutions have also been developed in an academic context as open-source software (Kinne, 2018). Secondary providers of Web-scraped data and services have emerged in recent years due to the commercial and academic importance of Web scraping. These include, for example, specialized service providers such as AirDNA (AirDNA, 2022) for monitoring the short-term rental market (Airbnb, Vrbo; e.g., Wachsmuth and Weisler, 2018). Coordinated scraping efforts furthermore intend to support communities and increase transparency in the short-term rental market (DataHippo, 2022; Inside Airbnb, 2022). ## 4 Ethical and Legal Issues The retrieval and use of third-party data requires the consideration of possible legal as well as ethical issues. These issues primarily relate to how the data is provided and accessed on a website, under what terms of use it is provided, what the economic and privacy implications are, and what consequences the use of the data has for its owner, provider, user, and the objects (or subjects) being described by the data. ### Legal Aspects Naturally, different rules apply in different jurisdictions, and the physical location of the computer executing the Web-scraping software may be decisive in determining the applicable rules and policies (Klawonn, 2019). Recent reviews of legal aspects in the US have been presented by Hirschey (2014), Macapinlac (2019), and Stringam et al. (2021). Klawonn (2019) has summarized the legal situation in Germany with special emphasis on copyright, and Golla and von Schonfeld (2019) with a focus on social media. Other authors adopt a broader perspective by providing an overview of legal issues of Web scraping in the 'common law world' (Liu, 2020) or reviewing the regulations in different judicial systems (Jeylelevskaja and Buckley, 2023).
Given the complexity of the legal situation and its dependence on the corresponding jurisdiction, we cannot engage in a comprehensive discussion here, but only touch on important issues in three areas: * **Copyright.** The legal issues that probably have received the most attention so far relate to copyright law. As a general rule, copyright law requires the consent of an owner before their works may be reproduced by a third party. In principle, the owner of a website does not necessarily own the data it contains, especially when the latter has been generated by users; nevertheless, scraping and republishing the data raise copyright issues time and again (Dreyer and Stockton, 2013). In this context, it should be noted that it is already the composition of the scraped database itself that may give rise to copyright protection; it may therefore be advisable to use the material in the database according to the principle of 'fair use,' i.e., to a limited extent, or to reuse it in a new or original way (e.g., by summarizing it). Further, it may be relevant whether authorization mechanisms, or the lack thereof, constitute a license to copy scraped data. A particularly relevant role accrues to the robots.txt file that provides scrapers with information on possible Web scraping restrictions such as the permitted crawl rate and the areas of the website that are allowed for access (Sellars, 2018; Hillen, 2019). Copyright issues of their own kind arise when collecting data from Web archives such as the Wayback Machine (Arora et al., 2016; Nielsen, 2016) as their status with respect to copyright is subject to legal challenges (e.g., Dyno Nobel Inc v Orica Explosives Technology Pty Ltd (No 2) (2019) FCA 1552 on 17 September 2019). Webpages from times past may, for instance, not reflect the current legal situation, especially in evolving fields such as privacy (e.g., EU General Data Protection Regulation, in effect since 2018). The same is true for the spatial footprint of legality: scraping American Web archives (which may have been scraped using servers located in the US) from outside the US may circumvent an archived website's geoblocking, creating ethical and legal challenges. Researchers should therefore critically consider the feasibility of data sharing (e.g., DataHippo, 2022), and of publishing scraped data as open data (through, e.g., general-purpose platforms such as Pangaea, 2022). * **Contractual compliance.** Another important issue relates to the circumvention of the terms of service (ToS) of a website. This may, e.g., relate to the act of scraping itself, the use of fake user accounts, or of IP rotation services that circumvent technical obstacles such as geoblocking. Whether or not this constitutes a breach of contract is subject to differing interpretations by the courts. It should also be noted that the ToS may not have been perceived or explicitly agreed to by the party performing the Web scraping (Dreyer and Stockton, 2013; Zhao, 2017). Some U.S. courts therefore do not consider a mere circumvention of the ToS to be a criminal offense under the Computer Fraud and Abuse Act (CFAA) designed to fight computer crime and hacking (Macapinlac, 2019) if the user in question has not taken an affirmative action on the website to become a party under the applicable ToS (Krotov et al., 2020). * **Website integrity.** If repeated access to websites results in the service in question being interrupted, liability issues may arise (Hirschey, 2014; Zhao, 2017).
Specific legal and also ethical (Han and Anderson, 2021) issues arise in this context when other users are prevented from accessing the website as a result of such activities. In general, as server capacities and bandwidths have developed, this problem has recently become less important (Thelwall and Stuart, 2006).
Figure 3: General outline of the Web-scraping workflow in geographic research. Partly based on Hillen (2019).
Figure 4: Questions for the assessment of the legal and ethical viability of Web-scraping projects. Modified and expanded after Krotov et al. (2020), (c) Association for Information Systems. 1. Are there viable alternatives to scraping the data from a website? 2. Do the website's terms of service explicitly prohibit Web scraping? 3. Does the website identify the copyright holder and define a license under which the contents are provided? 4. Can scraping potentially cause material damage to the website or the Web server that hosts it? 5. Has the website blocked or restricted the user's access to its contents or asked the user to cease and desist? 6. Does the website's robots.txt protocol limit or prevent Web-scraping activities? 7. Is the scraped data only a small fraction of the website's database contents? 8. Can the data obtained from the website compromise individual privacy, research subjects' rights, or non-discrimination principles? 9. Can the scraped data reveal confidential information about organizations affiliated with the website? 10. Can the project that requires the Web data potentially diminish the value of the service that the website provides? 11. Does the quality of the data obtained from the Web have the potential to lead to ill-informed decision making?
On the whole, as Web scraping is a comparatively new research technique, a regulatory framework is still evolving (Hillen, 2019; Han and Anderson, 2021), and the case law to date is often inconsistent (Krotov et al., 2020; Brewer et al., 2021). In this context, recommendations, for example those developed by European statistics offices (Condon et al., 2019), may prove to be a useful resource for researchers. In any case, researchers should protect themselves from potentially disadvantageous legal consequences and take appropriate precautions before carrying out Web scraping activities (Landers et al., 2016; Hillen, 2019), also in the context of addressing geographic research questions. ### Ethical Aspects Even Web scraping activities that are legally unproblematic may raise a number of ethical questions. These have been reflected in the academic discourse (e.g., Landers et al., 2016; Hand, 2018; Brewer et al., 2021; Stringam et al., 2021) but have also been the subject of various guidelines that can be found online (e.g., Suciu, 2021; Thakur, 2022). Important ethical issues related to Web scraping refer, amongst other things, to the following aspects: * **Informed Consent.** From an ethical point of view, classical offline research requires implicit or explicit consent from those being researched (Miggelbrink et al., 2022). For most Web scraping activities, however, this consent does not exist. This does not necessarily have to be a problem since for ethical reasons, consent can be dispensed with if the expected benefits of the research exceed the risks associated with it (Brewer et al., 2021). However, it may be very difficult to determine this in individual cases, for example because it is not possible to identify precisely which information that appears online can reasonably be classified as "private" and which as "public."
In other words, even if certain text passages or other media can be found online, the content could be private or contain information that, on its own or in combination with other data, such as location, could even allow drawing conclusions about individuals (Mahmud et al., 2014). One important question that arises in the context of informed consent is therefore whether information accessible online has the character of public information. It is also worth questioning to what extent it can be considered ethical to be part of a forum for the sole purpose of collecting certain data without participating in the conversations that take place there, as this establishes power asymmetries while at the same time blurring the line between private and public information (Sugiura et al., 2017). * **Privacy.** An essential question is whether the data collected allow conclusions to be drawn about individual persons. Following the basic ethical principle of avoiding harm (Brewer et al., 2021; Miggelbrink et al., 2022) can thus require the removal or the modification of all identifiers from the data that could be associated with individuals. These can be, for example, names or hyperlinks, but also IP addresses and mobile device identifiers (Sugiura et al., 2017; Hand, 2018). Often, however, this is not sufficient, as verbatim text passages can be easily found through internet searches and associated with the persons who wrote them. Therefore, additional and in some cases time-consuming processing steps may be required. This may involve summarizing the original data, or extracting relevant features through text mining. This whole issue is further complicated by the fact that there are people who do not want to be anonymized (e.g., bloggers). In such cases any removal of identifiers could thus constitute a copyright infringement (Sugiura et al., 2017). * **Handling access restrictions.** From an ethical point of view it may be acceptable to disregard an explicit prohibition to collect data stated in the ToS if the benefits associated with web scraping are found to outweigh potential harms. Specifically, it may be argued in some cases that there is no other, or no similarly cost-effective way to scientifically analyze a given subject matter. Similar considerations apply to ignoring robots exclusion protocols in the robots.txt file (Brewer et al., 2021). However, it is important to keep in mind that activities that violate ToS may constitute a criminal offense even if they have been determined to be ethical (Brewer et al., 2021). To ensure that their research is ethically sound, researchers should not carry out Web scraping at random, but only after careful consideration of the expected benefits and potential risks associated with their activities. This requires a thorough examination of the issue of data collection and processing, as well as a preference for presenting data at an aggregate rather than individual level. In this context, it should also be noted that researchers themselves may be ethically affected by their own research activities. Especially the Darknet with its high degree of anonymity is a place in which geographic and social-science research can observe a range of otherwise hidden, marginalized or possibly criminal activities (Benjamin et al., 2019).
Being exposed to evidence of unethical or illegal activities may lead to traumatization (Brewer et al., 2021), and Web scraping (and Web crawling) as an automated activity offers little protection against entering problematic zones within the Darknet. A good exercise and starting point is to assess the legal and ethical viability of a Web-scraping activity based on a set of questions. Krotov et al. (2020) presented such a set of questions, which we have slightly modified and enhanced (Figure 4). Negative answers to any of these questions should be a warning sign that should encourage researchers to critically assess the viability of a project, perhaps consulting legal or ethical experts. Nevertheless, this does not automatically mean that the Web-scraping project in question is per se illegal or ethically problematic. ## 5 Methodological Challenges Web scraping of geospatial data faces a combination of challenges, some of which are due to the scraping process itself while others are inherent in the data sources being harvested. In our overview we especially highlight issues that are related to the Web scraping process and/or to location and spatial patterns, although these aspects are partly intertwined with biases related to the data sources themselves. The literature on data quality of VGI from social media also offers relevant insights that are more broadly related to capturing geospatial online data (e.g., Tjaden, 2021). Important issues include the following: * **Limited dependability.** The structure and contents of websites may change at any time without prior notice, and service providers may impose technological obstacles at any time. Researchers must therefore be able to rapidly adjust scraping software, which may imply significant software development efforts. Also, legal and ethical aspects may have to be re-evaluated regularly. As an example, the German real-estate platform ImmoScout24 blocked scraping activities on its website in 2020 while establishing a fee-based API. Authors also reported software maintenance in response to changes in website structure in Airbnb (2018/19) or targeted climate data portals (Bonifacio et al., 2015; Slee, 2020). * **Incompleteness.** Scraped information may present significant gaps either due to technical challenges in scraping the available information, or due to incomplete data being provided on a website. In particular, it is important to recognize that attributes that are important to the scraper may be less relevant to the data provider. As an example, real-estate agents may have an interest in omitting apartment characteristics that may lower a property's value. Information may also be entered without proper standardization or validation, e.g. with rare and unexpected qualifiers ('approx.', 'at least'), or non-standard address information involving informal or abbreviated toponyms. As a result, scrapers must be robust to unexpected formats, detect anomalies and outliers, and perform sanity checks. In the example of Bonetti et al. (2016), 29% of the scraped real-estate offers could not be geocoded, and in the end only 28% of the records were left due to missing attribute values or outliers. Gap filling (imputation; Gelman and Hill, 2006) is often a necessary step that must be used with great care in order to balance robustness against the need to detect systematic problems. * **Obfuscation of location.** The accuracy of geographic location is sometimes intentionally degraded to protect location privacy or business interests. 
This can be achieved by adding a random positional error (e.g., Airbnb: 150-200 m; Deboosere et al., 2019), reporting the location of higher-level spatial units, or providing incomplete address information (e.g., house number unavailable in 20% of the apartments scraped in our case study in Section 6). Obfuscation affects, in particular, the calculation of micro-geographic descriptors for spatial modeling, such as walkability. Location effects may thus be underestimated due to regression dilution (Frost and Thompson, 2000). It is, however, sometimes possible to infer the true location if obfuscation is poorly implemented (Ardagna et al., 2011), or to reduce its effects on an analysis with application-specific correction and aggregation strategies (Wachsmuth and Weisler, 2018). * **Search personalization and geotargeting.** Web sites use various sources of information to personalize their contents and search results (Micarelli et al., 2007; Teevan et al., 2010). This may involve spatial (location of device: geotargeting, geoblocking), temporal (time of search), and thematic context information that informs models of user needs. It may influence the order of search results, their accuracy and completeness, and even the attributes themselves (e.g., personalized and geographic pricing). Web scrapers can to some extent mitigate these effects by emulating a variety of browsers, varying the time of search, rotating IP addresses, using virtual private network (VPN) services to bypass geoblocking, and creating fake user profiles. However, some of these strategies raise ethical and legal questions. * **Representativeness.** Web-scraped data often suffers from various biases related to the representativeness of the data. This may relate to the scraping process itself in unexpected ways as, for example, the Airbnb scraper of Slee (2020) appeared to have missed some of the listings in high-density areas. As far as the data sources themselves are concerned, data obtained from online marketplaces does not necessarily represent the broader market due to undercoverage (Beresewicz, 2017), the severity of which may vary spatially depending on a platform's market penetration. Offers that remain available for a longer period of time will also be over-represented in one-off data capture. Achieving consistency with other independent data sources is therefore challenging; this is especially relevant for official government statistics or when combining various data sources (Agarwal et al., 2019). * **Logical inconsistencies.** Web scraping over extended periods of time or in platforms covering large regions runs the risk of producing inconsistent data. A platform's internal data collection and validation procedures may change at any time and are usually undocumented. Thus, the publicly accessible attributes may change in quality, semantics, or even availability at any time. As an example, Airbnb reservation status could be scraped until late 2015; in contrast, occupancy in AirDNA's more recent Airbnb data is estimated using machine learning (Deboosere et al., 2019). These as well as other changes (Alsudais, 2021) may be more subtle and harder to detect than changes in a website's fundamental structure. * **Limited temporal coverage.** Instead of scraping a platform continuously over an extended period of time in a longitudinal design, researchers would often like to access historical data retrospectively. 
Web archives such as the Internet Archive's Wayback Machine offer access to numerous snapshots of webpages and online media. Going as far back as the year 2001, the Wayback Machine has been identified as a useful resource for the social sciences (Arora et al., 2016) that has also been leveraged in geographic research contexts (e.g., Tachibana et al., 2021). Nevertheless, Web archives are a substantially incomplete copy of the scrapable portions of the Web as they usually do not cover the Deep Web, such as dynamic content and scripted content that is generated in response to user interaction or queries (Arora et al., 2016). Dynamic webpages are, however, particularly important as interfaces to large databases such as real-estate listings. Also, unique legal and ethical issues may arise as a consequence of archiving activities (see Section 4). * **Barriers to open science.** Depending on the legal space in which Web scraping takes place, researchers may not be able to share the retrieved data with the research community under an open-data license, creating barriers to open science. Relevant aspects include the amount of data being scraped (Klawonn, 2019), or the type of processing or aggregation being applied. To our knowledge, none of the articles reviewed for this work shared their data publicly. Overall, research involving Web-scraped geographic data needs to address multiple, partly unique challenges at all stages from data collection and processing to data analysis and the interpretation of results. Nevertheless, using multiple digital data sources in concert may help to mitigate some of the biases inherent in each source (Tjaden, 2021). ## 6 Sample Application: Leipzig Apartments To illustrate the potentials and pitfalls of Web scraping in a case study, apartment listings from Leipzig, Germany, will be examined. While possible applications in geographic research are manifold (Bonetti et al., 2016; Boeing and Waddell, 2017), our original motivation for Web scraping real-estate data for Leipzig and other cities was to obtain a variety of datasets that could be used in teaching various geographic data science methods to students of geography, allowing students to relate general geographic knowledge to real-world case studies. Sample applications include hedonic price modeling using linear and nonlinear regression models, or black-box predictive modeling using random forest or boosting methods. For the intended application it was necessary to retrieve samples consisting of at least several hundred apartment listings with complete information in key attributes. In addition to apartment rent and size, the year of construction and apartment condition (especially renovation) seemed necessary. Complete address information is also desirable in order to account for micro-geographic factors that would be calculated using GIS. ### Feasibility Assessment Two leading real-estate platforms were initially examined, Immowelt (immowelt.de) and ImmoScout24 (immobilienscout24.de), but the latter was excluded as its ToS disallow automated data capture, and technical restrictions have been in place since autumn 2020. Immowelt's terms contained no specific provisions related to Web scraping or the intended uses of the data, and neither did the robots.txt file, accessed multiple times during the scraping period, impose relevant restrictions. Privacy issues were not of concern as contact information, which very rarely referred to private landlords, was not to be extracted. 
Overall, considering these factors, the amount of data to be retrieved relative to the overall size of the platform's database, and the copyright regulations governing academic uses in Germany (Klawonn, 2019), it was deemed legally and ethically acceptable to harvest data from Immowelt. Immowelt is considered the second-largest platform at the national level. At the time of writing, ImmoScout24, Immowelt and Ebay Kleinanzeigen (ebay-kleinanzeigen.de) held overlapping sets of 958, 651 and 513 Leipzig offers, respectively, with regional variation in market shares (e.g., Berlin: 2034, 366, and 1461 offers, respectively; Jena: 71, 76, 73). The scraped sample may therefore be biased as specific types of apartments may be procured through other platforms or directly through intermediaries such as real-estate agents or social media (undercoverage). This bias may change over time as the market share of platforms may vary due to mergers (e.g., Immowelt and Immonet in 2015; Handelsblatt, 2015) or advertising campaigns (Horizont, 2022), posing challenges for trend assessments in inconsistent time series. Apartment offers also remain open for different amounts of time; while this clearly under-weights highly sought-after apartments when using a cross-sectional design at a single time point, it should not affect samples gathered over an extended time period. Beresewicz (2017) discusses representativeness issues in Web-scraped real-estate data analysis in depth from a statistical perspective. Regarding the technical feasibility, the platform provides an easy-to-use (but not publicly documented) URL syntax that allows scrapers to access listings easily based on arguments such as object type, place name, or search radius. User interactions such as scrolling or clicking are not required to access the HTML-based listings. A login was also not required. An inspection of HTML sources quickly showed that scraping would be easy to implement using basic programming tools. Although historical apartment data was not required for this study, it is worth noting that the Wayback Machine Web archive has not archived apartment offers from Immowelt. A prototype was first implemented and tested and then deployed to routinely scrape real-estate listings. R's rvest package (Wickham, 2021) was used along with robotstxt (Meissner and Ren, 2020) for scraping and information extraction. Retrieval was scheduled at night and with generous delays (>10 seconds) between individual calls. Searches were done based on the city's name and using multiple sorting strategies to obtain better coverage. All scripts were optimized to catch exceptions that may be related to server errors, internet connection problems, or unexpected webpage structure or data formats. The software is therefore able to ignore rare exceptions, but care must be taken to detect possible increases in the frequency of error messages, data gaps or inconsistencies. We limit our analysis to the calendar year 2021. ### Quality Considerations Overall, during 2021 there were 89 days on which no records were retrieved. These gaps mostly relate to an undetected technical failure during the summer holidays and hardware damage in autumn; nevertheless, a peak in scraped apartments after the interruptions indicates that only a small fraction of the normal number of scraped apartments had been missed. Overall, data on 9904 apartments in Leipzig was retrieved.
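The retrieval workflow described above (checking robots.txt, identifying the scraper, scheduling with generous delays, and catching exceptions) was implemented in R with rvest and robotstxt; purely as an illustration of the same pattern, a minimal Python sketch is given below. The portal URL, search path, CSS selector and contact address are placeholders, not the actual values used in the study.

```python
import time
import requests
from bs4 import BeautifulSoup
from urllib import robotparser

BASE = "https://www.example-portal.de"          # placeholder, not the real portal URL
SEARCH_URL = BASE + "/liste/leipzig/wohnungen"  # hypothetical search path
USER_AGENT = "academic-scraper (contact: researcher@example.org)"

# Respect robots.txt before fetching anything.
rp = robotparser.RobotFileParser()
rp.set_url(BASE + "/robots.txt")
rp.read()

def fetch(url, delay=12.0):
    """Fetch a page politely: check robots.txt, identify ourselves, wait between calls."""
    if not rp.can_fetch(USER_AGENT, url):
        raise PermissionError(f"robots.txt disallows {url}")
    time.sleep(delay)  # generous delay (>10 s) between individual requests
    resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
    resp.raise_for_status()
    return resp.text

def parse_listings(html):
    """Extract listing links; the CSS selector is a placeholder for the real page structure."""
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.select("a.listing-link") if a.has_attr("href")]

if __name__ == "__main__":
    try:
        links = parse_listings(fetch(SEARCH_URL))
        print(f"found {len(links)} listings")
    except (requests.RequestException, PermissionError) as err:
        # Log and continue on the next scheduled run rather than crashing,
        # but monitor for increases in the frequency of such errors.
        print(f"scrape failed: {err}")
```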
Depending on the exclusion criteria used, between 6248 and 6505 apartment offers were available for subsequent hedonic modeling, as detailed below. Table 2 shows statistics on missing data. As for data quality, it is believed that information on 'hard', price-determining facts is of generally high quality considering the landlord's possible liability for incorrect information. Nevertheless, a small fraction had implausible information such as an impossible construction year (one apartment) or price per square meter (two cases). The year of construction may in some cases represent the year of renovation as 67 renovated apartments were reportedly built after 2010. Address information was usually complete with street name and house number. Nevertheless, some effort had to be put into processing address information in order to account for a reversed order of street name and postal code / town or to remove redundant and possibly confusing information from the address information (e.g., name of the neighborhood or unique characteristics such as 'building B'). Address geocoding using the Nominatim Web service (Clemens, 2015) was successful (in the sense of returning a coordinate) for 94% of the apartments. This compares favorably with the 71% success rate achieved by Bonetti et al. (2016). A systematic assessment of failed attempts has not been made, but it appears that 173 coordinates were clearly outside the city of Leipzig. Euclidean distance to city center (town hall) was calculated as a simple proxy for access to services and retail. This variable was imputed based on the postal code (6% of all apartments). (Additional imputation rules or regression models were implemented for some other variables that are not further used here, such as energy efficiency class: 57% missing.) \begin{table} \begin{tabular}{l|c|c} \hline & Missing & Implausible \\ \hline Monthly (net) rent & 0.00 & \\ \hline Apartment size & 0.00 & \\ \hline Price per square meter & 0.00 & 0.02 \\ \hline Running costs & 0.00 & 0.01 \\ \hline Number of rooms & 0.12 & \\ \hline Year of construction & 33.07 & 0.01 \\ \hline Energy efficiency class & 56.76 & \\ \hline Postal code & 0.00 & 0.05 \\ \hline Geocoded coordinates & 5.57 & 1.75 \\ \hline \end{tabular} \end{table} Table 2: Percentages of missing and implausible data for selected attributes in the 9904 Immowelt apartment offers from the city of Leipzig. Figure 5: Spatial distribution of a random subsample of 1000 apartments in the city of Leipzig used for training the models. Data source: Immowelt; map background: OpenStreetMap contributors. Overall, substantially complete or imputed data on 6505 apartment offers from Leipzig were retrieved during 2021, with a more limited set of 6248 apartments having more precise building-level geolocation (Figure 5). ### Geospatial Analysis In an academic teaching context, we use subsets of several hundred apartments for various activities, for example in practical assignments on geospatial data science, and to generate online quiz questions or exam questions using a programmatic literate programming framework (Zeileis et al., 2014). Random variability obtained by subsampling, together with varying sets of predictor variables (e.g., presence of a balcony, or fraction of green space in surroundings), offers a large amount of flexibility to obtain, e.g., significant or non-significant predictor effects or linear as well as nonlinear relationships.
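The text above does not name the client used to query the Nominatim service; as one plausible way to reproduce the geocoding and distance-to-centre step, the sketch below uses the geopy package. The town-hall coordinates are approximate, the address string is made up, and the geodesic distance is used as a close stand-in for the Euclidean distance at city scale.

```python
from geopy.geocoders import Nominatim
from geopy.distance import geodesic

# Approximate coordinates of Leipzig's town hall, used as the city-centre reference.
CITY_CENTRE = (51.336, 12.373)

# Nominatim's usage policy requires an identifying user agent.
geocoder = Nominatim(user_agent="leipzig-apartment-study")

def geocode_offer(address):
    """Return (lat, lon, distance_to_centre_in_m), or None if geocoding fails."""
    loc = geocoder.geocode(address, timeout=10)
    if loc is None:
        return None
    dist_m = geodesic((loc.latitude, loc.longitude), CITY_CENTRE).meters
    return loc.latitude, loc.longitude, dist_m

# Hypothetical cleaned address string, as it might look after preprocessing:
print(geocode_offer("Karl-Liebknecht-Strasse 1, 04107 Leipzig"))
```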
Additional variability can be achieved by varying the city, including cities with unique predictors such as distance to waterfront. The example of a nonlinear regression analysis of apartment rents (per square meter) in Leipzig is shown in Figure 6, using a training sample of 1000 offers. This generalized additive model (Wood, 2017) achieved a very good model fit, considering its limited set of predictors (\(R_{\text{adj}}^{2}\): 0.678). General patterns relate to a sharp increase in rent per square meter for apartments built in recent years and for micro-apartments. Locations in close proximity (<3 km) to the city center are also more costly. In this simplified analysis, the number of features is a lumped quantity that simply counts the presence of features such as balcony, parking space, or senior-friendly condition. These features were positively, but much less strongly, related to apartment rent, while accounting for year and distance to center. Machine-learning models such as random forests (Breiman, 2001) offer more flexibility and therefore possibly improved predictive performances, although at the cost of reduced model interpretability. The random forest model is also capable of handling strongly correlated predictors. This allows us to incorporate, for example, a larger number of variables representing specific apartment features (e.g., renovation status, built-in kitchen) or micro-geographic variables (e.g., distance to public transportation, postal code area). For comparison, the GAM was also applied in a data-driven manner by enabling variable selection by shrinkage to zero effective degrees of freedom (Wood, 2017). Model performance was assessed using all 5245 apartments that were not included in the training sample. Prediction maps were prepared for exemplary apartment characteristics held constant throughout the study area (55-m² two-room apartment built in 2000 with balcony, parking space, and basement). The random forest model with the enhanced feature set achieved the best performance on the test set (RMSE: 1.08 €/m²) followed by the GAM with shrinkage (RMSE: 1.22 €/m²) and the initial simple GAM (RMSE: 1.27 €/m²). Despite the similar performances, prediction maps in Figure 7 show important discrepancies that may relate to different model capabilities (smooth versus step-function relationships) and biases (variable selection bias of random forest: Strobl et al. (2008)). Figure 6: Nonlinear transformation functions of a generalized additive model of apartment rent (price per square meter) in the city of Leipzig. Distance to city center is in meters, apartment size in square meters, year is the year built, and nfeatures represents the number of reported features such as balcony or parking space. ## 7 Discussion and Conclusions The review and case study presented in this paper demonstrate that Web scraping as a tool for internet-based data collection and digital fieldwork offers numerous opportunities for geographic research, but also comes with several pitfalls. Considering the data openly available on the Web and the previously published research, Web scraping is particularly useful for the study of emergent patterns in urban transformation (e.g., gentrification, social inequality; Wachsmuth and Weisler, 2018; Mermet, 2021) and platformed professionalization (Bosma, 2022), based on online platforms related to hospitality, real estate and mobility.
Figure 7: GAM (left) and random forest (right) spatial prediction maps of expected apartment rent for a newly built two-room apartment with three features (balcony, basement, parking). Examples of studies exist in all fields of geography, making creative use of Web scraping to retrieve data on social and economic actors (Kinne and Axenbeck, 2020; Gravelle et al., 2021) or the physical environment (Canli et al., 2018; Skoulikaris and Krestenitis, 2020) in order to understand spatial patterns and processes. Geocomputing tools such as geoparsing and geocoding as well as the enrichment of the captured data with ancillary location-based information show the relevance of geographic information science within the geographic Web scraping workflow. Data that is purposefully provided for further analysis is often limited to privileged groups due to restrictions imposed by data owners or other gatekeepers. Access to and use of such data may result in costs (including the costs of paperwork; Vallone et al., 2020) or, if provided for free upon request, conflicts of interest as the permission may be revoked at any time. Web scraping therefore has the potential to democratize the access to data, at least within the legal, ethical and methodological limits laid out above, some of which have a geospatial dimension (e.g., geo-privacy, geoblocking, spatial undercoverage). As an automated tool, Web scraping offers real-time insights especially into socio-economic processes that are otherwise monitored at annual to multi-annual frequencies through surveys and censuses, whose results can be delayed -- in some cases by years (The New York Times, 2022; Statistisches Bundesamt, 2023). This advantage has received increasing recognition by the statistics offices of many countries (Hoekstra et al., 2012; Virgillito and Polidoro, n.d.), but its potential for creating prospective longitudinal samples for academic research is yet to be tapped. In some applications, Web archives may make it possible to extend time series into the past or to conduct retrospective studies (e.g., Tachibana et al., 2021). Nevertheless, the lack of support for dynamic Web content remains a strong limitation of the existing Web archives. Although Web scraping is still partly uncharted terrain, researchers should not be afraid and should rather use the freedoms granted to them in most jurisdictions in order to generate datasets that would otherwise be difficult to acquire. Nevertheless, they should be aware that public access to contents does not automatically mean that the information is "open" or "free" in the sense of Open Science. We should therefore be cognizant of the legal and ethical limitations related to using and possibly sharing Web-scraped information. This is particularly true for potentially sensitive geospatial data.
2309.08493
Evidence for a luminosity-decay correlation in GRB GeV light curves
Correlations between intrinsic properties of gamma-ray burst (GRB) light curves provide clues to the nature of the central engine, the jet, and a possible means to standardise GRBs for cosmological use. Here we report on the discovery of a correlation between the intrinsic early time luminosity, $L_{G,\rm 10s}$, measured at rest frame 10s, and the average decay rate measured from rest frame 10s onward, $\alpha_{G,\rm avg>10s}$, in a sample of 13 Fermi Large Area Telescope (LAT) long GRB light curves. We note that our selection criteria, in particular the requirement for a redshift to construct luminosity light curves, naturally limit our sample to energetic GRBs. A Spearman's rank correlation gives a coefficient of -0.74, corresponding to a confidence level of 99.6%, indicating that brighter afterglows decay faster than less luminous ones. Assuming a linear relation with $\log(L_{G,\rm 10s})$, we find $\alpha_{G,\rm avg>10s} = -0.31_{-0.09}^{+0.12}\log(L_{G,\rm 10s}) + 14.43_{-5.97}^{+4.55}$. The slope of -0.31 is consistent at $1\sigma$ with previously identified correlations in the optical/UV and X-ray light curves. We speculate that differences in the rate at which energy is released by the central engine or differences in observer viewing angle may be responsible for the correlation.
K. R. Hinds, S. R. Oates, M. Nicholl, J. Patel, N. Omodei, B. Gompertz, J. L. Racusin, G. Ryan
2023-09-15T15:55:00Z
http://arxiv.org/abs/2309.08493v1
# Evidence for a luminosity-decay correlation in GRB GeV light curves ###### Abstract Correlations between intrinsic properties of gamma-ray burst (GRB) light curves provide clues to the nature of the central engine, the jet, and a possible means to standardise GRBs for cosmological use. Here we report on the discovery of a correlation between the intrinsic early time luminosity, \(L_{\rm G,10s}\), measured at rest frame 10s, and the average decay rate measured from rest frame 10s onward, \(\alpha_{\rm G,avg\!\!>\!10s}\), in a sample of 13 _Fermi_ Large Area Telescope (LAT) long GRB light curves. We note that our selection criteria, in particular the requirement for a redshift to construct luminosity light curves, naturally limit our sample to energetic GRBs. A Spearman's rank correlation gives a coefficient of -0.74, corresponding to a confidence level of 99.6%, indicating that brighter afterglows decay faster than less luminous ones. Assuming a linear relation with \(\log(L_{\rm G,10s})\), we find \(\alpha_{\rm G,avg\!\!>\!10s}\)\(=-0.31^{+0.12}_{-0.09}\log(L_{\rm G,10s})+14.43^{+4.55}_{-5.97}\). The slope of \(-0.31\) is consistent at \(1\sigma\) with previously identified correlations in the optical/UV and X-ray light curves. We speculate that differences in the rate at which energy is released by the central engine or differences in observer viewing angle may be responsible for the correlation. keywords: (transients:) gamma-ray bursts ## 1 Introduction Gamma-ray bursts (GRBs) are collimated relativistic jets, launched either by the core collapse of rapidly rotating massive stars (long GRBs; LGRBs) or by the mergers of compact object binaries (short GRBs; SGRBs). Their observed emission comprises two phases: initial short-lived gamma-ray emission in the range keV-MeV, known as the prompt emission, quickly followed by longer-lived emission, known as the afterglow, observed across the electromagnetic spectrum from TeV to radio (Sari, Piran & Narayan, 1998; MAGIC Collaboration et al., 2019; H. E. S. S. Collaboration et al., 2021). In the standard GRB fireball model, the prompt emission originates from internal shocks that take place inside the relativistic jet between shells of material moving at different speeds, whilst the afterglow emission is created via external shocks when the jet collides with the surrounding circumstellar medium (e.g. Meszaros & Rees, 1997; Zhang & Meszaros, 2004; Zhang et al., 2006). Sample studies of GRBs have led to the discovery of correlations linking the properties of prompt and afterglow emission, which provide invaluable insight into the mechanisms common to all GRBs; see Dainotti, Del Vecchio & Tarnopolski (2018) for a review on various correlations. A correlation of particular interest is that found between the luminosity and average decay rate discovered in the optical/UV and X-ray afterglow light curves (Oates et al., 2012; Racusin et al., 2016); see also earlier work (Kouveliotou et al., 2004; Boer & Gendre, 2000). The correlation, known as the luminosity-decay correlation, indicates that the more luminous light curves decay faster than their less luminous counterparts. In the case of the optical/UV afterglows, the correlation was found in a sample of 48 LGRBs and for the X-ray afterglows, it was found in 237 LGRBs1.
A Spearman's rank correlation was run for both studies; in the case of the optical/UV light curves, the rank coefficient, \(R_{sp}\), was determined to be \(-0.58\) and the probability of the null hypothesis to be \(p<1\times 10^{-5}\) (Oates et al., 2012). For the X-ray light curves, \(R_{sp}=-0.59\) and \(p\ll 1\times 10^{-6}\) was found (Racusin et al., 2016). The correlation in the optical/UV and X-ray indicates that the afterglow light curves of GRBs can be described by one unifying model regardless of the detailed and varied temporal behaviour of individual LGRBs (Oates et al., 2015). Footnote 1: no evidence for a correlation was found in the sample of 9 X-ray SGRB light curves Observations by the _Fermi_ Large Area Telescope (LAT) have revealed GeV light curves to have a power-law decay that extends beyond the end of the prompt emission (e.g., Nava, 2018). These GeV light curves are likely a combination of the prompt emission and afterglow emission, with the early light curve dominated by internal shock processes (prompt emission) and the late time light curves dominated by external shock processes (afterglow) (e.g., Nava, 2018). Panaitescu (2017) examined the \(>100\) MeV flux light curves from the first _Fermi_-LAT GRB catalogue (Ackermann et al., 2013) and an additional 14 well monitored GRBs. They divided the sample into fast decaying events (\(\alpha<-1.2\)) and slow decaying events (\(\alpha>-1.2\)), finding that the light curves converged at late times and that the faster decaying events were brighter, suggesting a correlation between brightness and decay rate at high energies within the observer frame light curves. In this paper, we expand this analysis and test if the luminosity-decay correlation found in the optical/UV and X-ray is also found at GeV energies. We construct our sample using the GeV light curves observed by the _Fermi_-LAT contained in the 2nd LAT GRB catalogue (Ajello et al., 2019). In §2 we discuss the sample of GRBs, the fitting procedures used to measure the luminosity and decay rate, and the linear regression method performed to define the relationship. The results of this analysis are presented in §3 with the discussion and conclusions in §4 and §5 respectively. All uncertainties throughout this paper are quoted at \(1\sigma\). Throughout, we assume the Hubble parameter H\({}_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\) and density parameters \(\Omega_{\Lambda}=0.7\) and \(\Omega_{m}=0.3\). ## 2 Data Analysis ### The sample We obtained the _Fermi_-LAT 100 MeV-100 GeV flux light curves from the 2nd LAT GRB catalogue (Ajello et al., 2019). The catalogue contains 219 light curves, comprising 21 SGRBs and 198 LGRBs; SGRBs release 90% of the prompt energy within 2s (\(T_{90}\)\(<\) 2s), and LGRBs release 90% of the prompt energy on timescales \(>\) 2s (\(T_{90}\)\(>\) 2s). Of these, we selected those that had measured spectroscopic redshifts taken from the 2nd LAT GRB catalogue (Ajello et al., 2019). This criterion results in a sample of 40 GRBs, one of which we further exclude as it is the only SGRB with redshift - in addition the X-ray and optical studies found the correlation exclusively in LGRBs - leaving us with a sample of 39 GRBs. In §4.3 we discuss how requiring spectroscopic redshifts may introduce selection effects. However, this study relates to the intrinsic luminosity in the rest frame, thus we require accurate redshifts to move from the observer frame to the rest frame.
In the following we measure the luminosity and decline rate of the light curves using a simple power-law. So that the number of data points exceeds the number of free parameters when fitting, we impose an additional criterion that each light curve must have at least 3 data points included in the fit. This criterion reduces the final sample to 14 LGRBs. ### Luminosity Light curves We define the start time of our light curves, \(T_{0}\), as the end time of the GBM \(T_{90}\) parameter, consistent with the procedure of Oates et al. (2012) and Racusin et al. (2016)2. Footnote 2: Note Oates et al. (2012) and Racusin et al. (2016) used _Swift_ Burst Alert Telescope (BAT) detected GRBs and therefore use the end time of the \(T_{90}\) parameter measured by _Swift_/BAT We then converted each of the GeV flux light curves into the rest frame. All times were divided by a factor 1+\(z\) and the luminosity defined by \[L(t)=F_{\nu}(t)\times 4\pi D_{l}^{2}(1+z)^{\beta-1}, \tag{1}\] where \(D_{l}\) is the luminosity distance, \(z\) is the redshift and \(\beta\) is the spectral index of the GRB. The temporal and spectral indices, \(\alpha\) and \(\beta\), are given by the expression \(F(t,\nu)\propto t^{\alpha}\nu^{\beta}\). A photon index, \(\Gamma\), was provided for each flux point in the LAT GRB catalogue, where \(\Gamma=\beta+1\). For this analysis we take \(\Gamma\) to be the average of the values computed for each GRB - these are listed in Table 1. ### Intrinsic Early Time Luminosity & Power-law Fits We first define a time at which we measure the luminosity and from this, we fit a power-law to the rest of the light curve to measure the average decay index. The luminosity-decay correlations from Oates et al. (2012) and Racusin et al. (2016) were exclusively found in the afterglow regime and not the prompt. For the GeV sample, we therefore need to select a time that passes through as many GRB light curves as possible, is early (to maximise the dynamic range in luminosity), but not too early so as to avoid the very earliest behaviour which exhibits prompt emission features (e.g. Ackermann et al., 2011; Nava, 2018); we chose this time to be 10s. To measure the 100 MeV-100 GeV luminosity at 10s, \(L_{\rm G,10s}\), we fit a power law to the data within the time range \(\log(T/\rm s)=1\pm 30\%\); corresponding to fitting data points within \(\sim 5-20\)s. A second power-law is fit to the data from 10s onward, to measure the decay rate \(\alpha_{\rm G,avg>10s}\). These fits are performed using the Python module lmfit. By fitting a simple power-law (SPL) to the light curves from restframe 10s onwards, we are probing the average rate at which the light curves decay, rather than the detailed underlying behaviour. It is well established that the X-ray afterglow light curves display a canonical behaviour, a power law decay with one or more light curve segments (e.g. Zhang et al. 2006). In Racusin et al. (2016), a correlation was found between the X-ray luminosity at restframe 200s, \(L_{\rm X,200s}\), and the average decay index. They also tried correlating individual light curve segments with \(L_{\rm X,200s}\) to test whether one segment was more significant than the others. However, they found that the correlation was not significant for any of the individual segments of the canonical light curve with \(L_{\rm X,200s}\), indicating the importance of the average decay measure.
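To make Eq. (1) and the power-law fitting described above concrete, the sketch below converts an observer-frame flux light curve into a rest-frame luminosity light curve and then fits a power law from rest-frame 10s onward with lmfit, the module named in the text. The use of astropy for the luminosity distance, and all numerical values in the toy light curve, are illustrative assumptions rather than the authors' actual pipeline.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from lmfit import Model

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # cosmology adopted in the paper

def to_rest_frame(t_obs, flux, z, beta):
    """Eq. (1): L(t) = F_nu(t) * 4*pi*D_L^2 * (1+z)^(beta-1); times are divided by (1+z)."""
    d_l_cm = cosmo.luminosity_distance(z).to("cm").value
    lum = flux * 4.0 * np.pi * d_l_cm**2 * (1.0 + z) ** (beta - 1.0)
    return t_obs / (1.0 + z), lum

def power_law(t, l10, alpha):
    """Power law normalised at rest-frame 10 s, so l10 plays the role of L_G,10s.
    (The paper measures L_G,10s with a separate fit over ~5-20 s; it is folded into
    the normalisation here purely for brevity.)"""
    return l10 * (t / 10.0) ** alpha

# Toy observer-frame light curve (times in s, fluxes in erg cm^-2 s^-1); beta = Gamma - 1.
t_obs = np.array([8.0, 15.0, 40.0, 120.0, 400.0, 900.0])
flux = np.array([3e-6, 1.2e-6, 3e-7, 7e-8, 1.5e-8, 5e-9])
t_rest, lum = to_rest_frame(t_obs, flux, z=2.0, beta=-1.3)

# Fit from rest-frame 10 s onward to obtain the average decay index.
sel = t_rest >= 10.0
result = Model(power_law).fit(lum[sel], t=t_rest[sel], l10=lum[sel][0], alpha=-1.0)
print(result.params["alpha"].value, result.params["l10"].value)
```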
We also investigated the effect of measuring the correlation at times later than rest frame 10s, e.g. measuring the luminosity at 20s, 30s and 40s, and also the average decay index using data from the same time the luminosity is measured and beyond. In each instance, the range in measured luminosities at these times decreases, making it increasingly more difficult to recover a correlation. In addition, the average number of data points per light curve decreases as we consider time ranges that start later. At 10s onwards, the average number of data points per GRB light curve for this sample is \(\sim 18\) while from 40s onwards the average number of data points is \(\sim 12\) but the coverage is not consistent across the sample. Conversely, using times earlier than 10s increases the risk of sampling a larger contribution from the prompt phase or the subsequent transition from prompt to afterglow. ### Determining a Relationship To determine if luminosity is correlated with the decay rate we perform a Spearman's rank test, which is a non-parametric measure of the strength and direction of any correlation. We also performed a 'partial' Spearman's rank test, which takes into account the effect of a third parameter; this was to determine if systematic effects due to redshift could be responsible for the correlation. The results of both the standard and 'partial' Spearman's rank analysis are presented in Table 2. Following on, linear regression was performed using the Python ODR module, which defines the relationship between the two parameters; the Python ODR linear regression results were compared with those from the IDL routine FITEXY, which was used by Oates et al. (2012) and Racusin et al. (2016), and regression parameters from both methods were found to be consistent within \(1\sigma\). The errors on the Spearman's rank and linear regression were calculated using Monte-Carlo methods. Curran (2014) discussed whether a Bootstrapping method or Resampling each point within its uncertainties is optimal for calculating errors; for this analysis, we use both methods individually and also use a combination of the two. In each case, we ran the Monte Carlo simulations for \(10^{5}\) trials. In an attempt to be thorough with our error analysis, we favour the combination method which includes Bootstrapping and then Resampling - these are the errors presented in Table 2. ## 3 Results Examining the distribution of light curves in Fig. 2, we see the light curves cluster. Note the greatest spread in luminosity is at early times and the distribution appears to become narrower with time. In addition to this, when colouring the luminosity light curves by their average decay rate we see a colour gradient, which serves as visual confirmation of the correlation (that the more luminous light curves decay faster). There is one outlier, GRB 170405A, that appears to be offset at a higher luminosity compared to the other GRBs. We first perform a Spearman's rank test on the entire sample of 14 light curves. This results in a correlation coefficient of \(-0.44\pm 0.31\) and a p-value of \(1.14\times 10^{-1}\). With the large error on the Spearman's rank coefficient, we cannot claim a correlation between the two parameters. However, we note the exceptionally flat light curve of GRB 170405A, which stands out in Fig. 2 and suggests that it may have a different emission origin compared to the rest of the sample, or that the spectral index used in Eq. 1 is inaccurate (see Section 4).
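A minimal sketch of the correlation and error analysis described above under 'Determining a Relationship' is given below, using scipy for the Spearman test and a combined bootstrap-and-resampling Monte Carlo loop for the uncertainty on the coefficient. The synthetic data, the assumed per-point uncertainties and the reduced trial count (the paper uses \(10^{5}\) trials) are placeholders, not the authors' exact implementation.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def spearman_with_mc_error(x, y, x_err, y_err, n_trials=10_000):
    """Spearman coefficient plus its spread from bootstrapping and then resampling within errors."""
    rho, p_value = spearmanr(x, y)
    trials = np.empty(n_trials)
    n = len(x)
    for i in range(n_trials):
        idx = rng.integers(0, n, n)              # bootstrap: draw points with replacement
        xs = rng.normal(x[idx], x_err[idx])      # then resample each drawn point within its error
        ys = rng.normal(y[idx], y_err[idx])
        r, _ = spearmanr(xs, ys)
        trials[i] = r
    return rho, p_value, np.nanstd(trials)

# Synthetic 13-point sample drawn to follow the reported relation, for illustration only.
log_l = rng.uniform(50.0, 53.0, 13)
alpha = -0.31 * log_l + 14.43 + rng.normal(0.0, 0.15, 13)
rho, p, rho_err = spearman_with_mc_error(log_l, alpha, np.full(13, 0.1), np.full(13, 0.2))
print(f"rho = {rho:.2f} +/- {rho_err:.2f}, p = {p:.1e}")
```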
We, therefore, performed the Spearman's rank test after removing this GRB from the sample. In this case we find a significant negative correlation, with a coefficient of \(-0.74\pm 0.19\) and p-value of \(4.11\times 10^{-3}\). For the 'partial' Spearman's rank, we found a coefficient of \(-0.44\) and p-value of \(1.37\times 10^{-1}\). Fig. 3 shows the luminosity vs decay rate for the final 13 GeV light curves. We also performed a linear regression which gives a relationship \(\alpha_{\rm G,avg>10s}\)= (\(-0.31^{+0.12}_{-0.09}\)) \(\log(L_{\rm G,10s})+14.43^{+4.55}_{-5.97}\). This line is over-plotted in Fig. 3. Table 2 gives the results of the Spearman's rank and linear regression analyses. Figure 1: Light curve of GRB 080916C. The green dashed line shows the simple power-law (SPL) fit to the data from 10s onward. The red data points in the range \(\log(T/\rm{s})=1\pm 30\%\) were used in a separate fit to calculate \(L_{\rm G,10s}\). Figure 2: Final 14 LGRB light curves, colourmapped according to their absolute values of \(\alpha_{\rm{G,avg>10s}}\). The GeV GRB afterglow light curves appear to cluster more tightly in luminosity at later times. The colour mapping suggests that the more luminous the GRB the faster its decay. The exception is GRB 170405A, which is significantly brighter at late times compared to the rest of the sample. ## 4 Discussion Overall, we have shown that a correlation exists between the intrinsic brightness of GeV light curves and their average decay rate. In the following section, we discuss the origin of the GeV emission and whether it is appropriate to exclude 170405A. We then compare our results with the correlation found in the optical/UV and X-ray samples. ### GeV Emission Mechanisms The GeV emission observed by _Fermi_-LAT is thought to be a combination of emission processes from the internal and external shocks that dominate at different times during the evolution of the GeV emission (see Nava, 2018, for a review). At early times, the GeV light curves often correlate with the flux observed at MeV energies (Tang, Wang and Liu, 2017), while spectrally they can either be fit with an extension of the power-law from the MeV energy range or have an additional power-law component (Maxham, Zhang and Zhang, 2011; Panaitescu, 2017; Ajello et al., 2019; Fraija et al., 2020), for which the origin may be synchrotron self-Compton (SSC) emission (Ackermann et al., 2011; Nava, 2018). The external shock emission thought to produce the afterglow is unable to reproduce the GeV flux at very early times (Maxham, Zhang and Zhang, 2011). Instead, the early GeV emission is expected to be dominated by synchrotron and SSC emission components, originating from the internal shock that drives the prompt emission (Maxham, Zhang and Zhang, 2011; Pe'er et al., 2012; Fraija et al., 2020). Following the prompt emission is a regime labelled the 'GeV extended emission' (e.g. Ackermann et al., 2014). From this point onward, the temporal behaviour in the GeV band is a power-law decay similar to the canonical X-ray afterglow value (Nousek et al., 2006; Zhang et al., 2006). This emission is thought to be dominated by synchrotron radiation (e.g. Ackermann et al., 2011; Kumar and Barniol Duran, 2010; Toma, Wu and Meszaros, 2011; Feng and Dai, 2011; Ajello et al., 2018; Nava, 2018; Maxham, Zhang and Zhang, 2011; Beniamini et al., 2015; Ajello et al., 2019; Tak et al., 2019).
Though for some GRBs, particularly those with photons \(>10\) GeV, SSC emission can explain the observed emission (Fraija et al., 2022). In the case of GRB 221009A, a narrow jet, \(\sim 0.8^{\circ}\), and SSC of electrons in the external shock have been suggested to explain observations of TeV photons from the afterglow (LHAASO Collaboration et al., 2023). By excluding GRB 170405A from our sample, the outlier in our luminosity distribution (Fig. 2), we find a strong correlation between the brightness of the GeV luminosity light curves and their average rate of decay. This prompted us to investigate why 170405A is an outlier. We searched the literature to determine if GRB 170405A is produced by different emission processes compared to the other GRBs. Tak et al. (2019) compared the temporal and spectral behaviour of the GeV extended emission to the synchrotron external shock model and showed that most GRBs could be explained by this model. This analysis included GRBs 080916C, 090323A, 090926A, 091003A, 110731A, 130427A, 131108A, 141028A, 160509A, 170214A, 170405A & 180720B from our sample. However, using multi-wavelength observations, Arimoto et al. (2020) found that the GeV emission from 170405A could not be produced by the same component as the optical/UV emission and that the GeV emission must be produced by either a different external shock component or, more likely, by internal processes. This suggests that it is important to examine multi-wavelength observations in order to confirm the origin of the GeV emission. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline GRB & \(z\) & \(\Gamma\) & \(\alpha_{\rm G,avg>10s}\) & \(L_{\rm G,10s}\)(erg s\({}^{-1}\)) \\ \hline 080916C & 4.35 & \(-2.60\pm 0.54\) & \(-1.55^{+0.20}_{-0.26}\) & \(3.73^{+0.92}_{-0.99}\times 10^{52}\) \\ 090323 & 3.57 & \(-2.29\pm 1.11\) & \(-1.21^{+0.42}_{-0.42}\) & \(4.05^{+2.89}_{-0.89}\times 10^{51}\) \\ 090902B & 1.82 & \(-1.96\pm 0.18\) & \(-1.49^{+0.32}_{-0.13}\) & \(9.16^{+0.15}_{-1.06}\times 10^{51}\) \\ 090926A & 2.11 & \(-2.21\pm 0.46\) & \(-1.09^{+0.07}_{-0.11}\) & \(1.09^{+0.15}_{-0.15}\times 10^{52}\) \\ 091003 & 0.90 & \(-1.88\pm 0.26\) & \(-0.14^{+0.31}_{-0.31}\) & \(1.03^{+1.03}_{-1.03}\times 10^{50}\) \\ 110731A & 2.83 & \(-2.33\pm 0.64\) & \(-1.23^{+0.33}_{-0.31}\) & \(1.54^{+0.71}_{-0.71}\times 10^{51}\) \\ 130427A & 0.34 & \(-2.02\pm 0.24\) & \(-1.02^{+0.03}_{-0.03}\) & \(3.10^{+0.35}_{-0.35}\times 10^{50}\) \\ 131108A & 2.40 & \(-2.64\pm 0.69\) & \(-1.45^{+0.42}_{-0.02}\) & \(4.12^{+0.75}_{-0.30}\times 10^{51}\) \\ 141028A & 2.33 & \(-2.43\pm 0.48\) & \(-1.07^{+0.26}_{-0.26}\) & \(1.84^{+0.31}_{-0.31}\times 10^{51}\) \\ 160509A & 1.17 & \(-2.45\pm 1.78\) & \(-1.21^{+0.42}_{-0.34}\) & \(1.40^{+0.32}_{-0.32}\times 10^{51}\) \\ 170214A & 2.53 & \(-2.47\pm 0.58\) & \(-1.19^{+0.14}_{-0.14}\) & \(1.72^{+0.84}_{-0.85}\times 10^{52}\) \\ 170405A & 3.51 & \(-5.58\pm 2.85\) & \(-0.51^{+0.11}_{-0.11}\) & \(2.58^{+0.58}_{-2.58}\times 10^{53}\) \\ 180720B & 0.65 & \(-2.26\pm 0.33\) & \(-0.77^{+0.14}_{-0.14}\) & \(9.62^{+0.63}_{-0.63}\times 10^{49}\) \\ 190114C & 0.42 & \(-2.10\pm 0.60\) & \(-1.07^{+0.04}_{-0.04}\) & \(3.49^{+0.22}_{-2.21}\times 10^{50}\) \\ \hline \end{tabular} \end{table} Table 1: The sample parameters: GRB, redshift (provided in the 2nd LAT GRB catalogue; Ajello et al., 2019), mean photon index (\(\Gamma\)), and \(\alpha_{\rm G,avg>10s}\) and \(L_{\rm G,10s}\), which are the average decay rate from restframe 10s onward and the intrinsic early time luminosity calculated at restframe 10s. Errors are given at \(1\sigma\) confidence. Figure 3: The GeV average decay rate from restframe 10s onwards against luminosity measured at restframe 10s. The solid red line is the best fitting linear regression relationship and the blue dashed lines represent the \(3\times\) RMS (root-mean-square) variation. The Spearman's rank coefficient is \(-0.74\pm 0.19\) and the probability of the null hypothesis (no correlation) is \(4\times 10^{-3}\). We measure a linear relationship \(\alpha_{\rm G,avg>10s}\)= (\(-0.31^{+0.12}_{-0.09}\)) \(\log(L_{\rm G,10s})+14.43^{+4.55}_{-5.97}\). Further exploring the literature, we find that the external forward shock model is shown to reproduce the late GeV emission of the light curves of all the other GRBs in our sample (Kumar & Barniol Duran, 2010; Swenson et al., 2010; Ackermann et al., 2011; Barniol Duran & Kumar, 2011; Feng & Dai, 2011; Maxham, Zhang & Zhang, 2011; Piron, McEnery & Vasileiou, 2011; Toma, Wu & Meszaros, 2011; Ackermann et al., 2013; Fan et al., 2013; Kouveliotou et al., 2013; Liu, Wang & Wu, 2013; Wang, Liu & Lemoine, 2013; Ackermann et al., 2014; Maselli et al., 2014; Perley et al., 2014; Vestrand et al., 2014; Beniamini et al., 2015; Fraija, 2015; Burgess et al., 2016; Lu et al., 2017; Panaitescu, 2017; Tam et al., 2017; Nava, 2018; Ajello et al., 2019; Fraija et al., 2019; Ronchi et al., 2020; Fraija et al., 2021; Joshi & Razzaque, 2021). However, the picture is not completely clear, as some authors invoke additional components to produce some or all of the late time LAT emission (Liu, Wang & Wu, 2013; Tam et al., 2017; Duan & Wang, 2019; Wang et al., 2018). For instance, SSC emission may better explain the observed LAT emission (Fraija et al., 2022), particularly for those GRBs with photons \(>10\) GeV. Inverse Compton (IC) could also explain the highest energy GeV photons in GRBs such as GRB 130427A, 160509A, 180720B (Fan et al., 2013; Liu, Wang & Wu, 2013; Tam et al., 2013; Wang, Liu & Lemoine, 2013; Ackermann et al., 2014; Tam et al., 2017; Fraija et al., 2019). While late GeV light curves from LAT are typically dominated by low-energy photons (e.g. Ackermann et al., 2011; Nava, 2018), likely produced by the external forward shock model, other emission components such as SSC may contribute, particularly producing the highest energy photons. Examining Table 1, we note that the photon index for 170405A, \(-5.58\pm 2.85\), is especially large when compared to the mean of the sample, \(-2.8\pm 0.57\), which may account for why this GRB is an outlier. In the LAT catalog paper (Ajello et al., 2019), the photon index for 170405A, determined between 18 and 868s, is \(-2.8\pm 0.3\). For our analysis we have used the average photon index of the entire LAT light curve of GRB 170405A, provided in the LAT catalog, and we note that the earliest spectral bins, with times \(<18\)s, have values of the photon index \(\ll-2.8\). Arimoto et al. (2020) also report photon indices for the LAT data. In two time intervals, 310-560s and 589-1000s (observer frame), they report LAT photon indices of \(-1.88\pm 0.33\) and \(-2.36\pm 0.50\), which are consistent within 1.40 and 0.58\(\sigma\), respectively, with the average photon index of our sample.
Therefore, to test if this photon index of \(-5.58\pm 2.85\) is anomalous, we assumed the mean of our sample as the photon index of 170405A, recomputed its luminosity light curve and then reran the analysis. We found that the light curve of 170405A decreased in luminosity by approximately two orders of magnitude. It no longer appears as an outlier and is consistent in luminosity with the other GRBs in the sample. Rerunning the correlation gives a result consistent with that found in the sample of 13 GRBs - the slope of the linear regression being consistent within 1\(\sigma\) of their respective errors. Since it is unclear whether this GRB is an outlier due to physical differences in the origin of this particular GRB or uncertainty in the photon index measurement, we will continue to discuss the GeV luminosity-decay correlation excluding GRB 170405A. ### Comparison with Previous Correlations Because very few of the GeV, optical and X-ray light curves overlap at restframe 10s or 200s, we are unable to directly compare the luminosity-decay correlation found at GeV energies at the same time as that used for the optical and X-ray. However, we can compare the parameters and strength of the correlation derived using data covering the different time ranges. In Table 2, we provide the results of the optical/UV and X-ray correlation analyses presented in Oates et al. (2012) and Racusin et al. (2016) - we also provide a more physical interpretation of the GeV correlation with the luminosities normalised by \(10^{51}\). Comparing the results of our GeV sample with the optical/UV results, we find the linear regression slope and intercept are consistent within \(0.27\sigma\) and \(1.27\sigma\), respectively. For the X-ray study, we find the linear regression slope and intercept are consistent within \(0.36\sigma\) and \(1.42\sigma\), respectively. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Parameters} & Spearman's & Null & Partial Spearman's & Null & \multicolumn{2}{c|}{Linear Regression} & No. \\ x-axis & y-axis & Rank & Hypothesis & Rank & Hypothesis & slope & intercept & GRBs \\ \hline \(L_{\rm G,10s}\)([1]) & \(\alpha_{\rm G,avg>10s}\) & \(-0.44\pm 0.33\) & \(1.14\times 10^{-1}\) & -0.13 & \(6.81\times 10^{-1}\) & \(-0.34^{+0.18}_{-0.12}\) & \(16.42^{+0.70}_{-0.14}\) & \(14\) \\ \(L_{\rm G,10s}\)([2]) & \(\alpha_{\rm G,avg>10s}\) & \(-0.74\pm 0.19\) & \(4.11\times 10^{-3}\) & -0.45 & \(1.37\times 10^{-1}\) & \(-0.31^{+0.12}_{-0.09}\) & \(14.43^{+4.55}_{-5.97}\) & \(13\) \\ \(L_{\rm G,10s}\)([3]) & \(\alpha_{\rm G,avg>10s}\) & \(-0.74\pm 0.19\) & \(4.11\times 10^{-3}\) & -0.46 & \(1.37\times 10^{-1}\) & \(-0.31^{+0.05}_{-0.05}\) & \(10.60^{+0.04}_{-0.94}\) & \(13\) \\ \(L_{\rm O,200s}\)([4]) & \(\alpha_{\rm O,avg>200s}\) & \(-0.58\pm 0.11\) & \(1.90\times 10^{-5}\) & -0.50 & \(2.85\times 10^{-4}\) & \(-0.28^{+0.04}_{-0.04}\) & \(7.72^{+1.31}_{-1.31}\) & \(48\) \\ \(L_{\rm X,200s}\)([5]) & \(\alpha_{\rm X,avg>200s}\) & \(-0.59\pm 0.09\) & \(8.03\times 10^{-8}\) & -0.63 & \(1.58\times 10^{-6}\) & \(-0.27^{+0.04}_{-0.04}\) & \(6.99^{+1.23}_{-1.11}\) & \(237\) \\ \hline \end{tabular} \end{table} Table 2: The table contents include the x and y axis parameters used in the Spearman's rank tests, the Spearman's rank coefficient and probability of null hypothesis, the 'partial' Spearman's rank and the corresponding probability of null hypothesis, the linear regression slope and intercept, and the number of GRBs used in each run. ([1]) denotes the run that included 170405A, ([2]) denotes the run excluding 170405A, ([3]) denotes the results with luminosity values normalised by \(10^{51}\), ([4]) found in Oates et al. (2012) and ([5]) found in Racusin et al. (2016). The consistency of the correlation slopes across \(\sim\)10 orders of magnitude in energy (from optical photons to GeV photons) indicates the processes producing the emission are likely to be the same mechanism and provides additional support for the GeV light originating from an external shock, at least after rest frame 10s. The GeV light curves are shorter in duration and cover an earlier time range compared to the optical/X-ray, with the GeV lasting \(\sim 10^{1}-10^{3}\)s and the optical/X-ray lasting \(\sim 10^{2}-10^{7}\)s. This implies that GeV light curves have the potential to be in the fast cooling phase whilst the optical/X-ray is typically in the slow cooling regime (Zhang et al., 2006; Ghisellini et al., 2010; Ajello et al., 2019). Tak et al. (2019) looked at the closure relations for the GeV extended emission of 13 out of 14 GRBs in our sample. They determined that 7 of the GRBs in our sample are consistent with the fast cooling regime with \(\nu>\nu_{m},\nu_{c}\), where \(\nu_{c}\) is the synchrotron cooling frequency and \(\nu_{m}\) is the synchrotron peak frequency. Four are consistent with being in the slow cooling regime with \(\nu_{m}<\nu<\nu_{c}\) and two are unclassified. We split the sample based on their cooling regime and tested the correlation strength to determine whether the correlation is driven by a certain cooling regime. The Spearman rank test for fast cooling only and slow cooling only gives coefficients of -0.68 and -0.60, and p-values of 0.09 and 0.40, respectively. Although the p-values are larger due to the smaller number of GRBs involved in each correlation, the coefficients are similar to that found for the full sample. This suggests that the luminosity-decay correlation in the GeV energy range is not affected or produced by differences in cooling regime. This is also supported by Fig. 1 of Tak et al. (2019), which shows similar observed temporal indices for LAT light curves consistent with either fast or slow cooling regimes. Oates et al. (2015) simulate the relationships expected between \(\log L_{200s}\), \(\alpha_{>200s}\) and the isotropic gamma-ray energy \(\log E_{iso}\) from a basic afterglow model, for the optical and X-ray afterglows. They conclude that the simulations do not agree with correlations observed between \(\log L_{200s}\) and \(\alpha_{>200s}\), or \(\log E_{iso}\) and \(\alpha_{>200s}\).
This suggests that while a common underlying physical mechanism is consistent with producing GRBs and their optical and X-ray afterglows, regardless of their detailed afterglow light curve behaviour, a basic afterglow model has difficulty explaining all the observed correlations. Instead, the luminosity-decay correlation could be a result of different rates of energy deposition from the central engine to the surroundings; faster decays occur when the energy is deposited rapidly from the central engine, and hence produce initially more luminous afterglows (Oates et al., 2012, 2015; Panaitescu, 2017). An alternative explanation may be that the jet is viewed off-axis and may be structured (Oates et al., 2012, 2015). When a jet is viewed at large angles away from the jet axis, a GRB can appear to be dimmer and decay on a longer timescale compared to GRBs that are observed close to the jet axis (see also Granot et al., 2002; Rossi et al., 2004; Ramirez-Ruiz et al., 2005; Panaitescu & Vestrand, 2008; Ryan et al., 2020). Structured jets have been used to explain the brightest GRB afterglows such as that of GRB 221009A (O'Connor et al., 2023). ### Possible Selection Effects Requiring a spectroscopic redshift notably reduces the number of LAT light curves in our sample. However, it is necessary to construct rest-frame light curves in order to directly compare the intrinsic brightness of different GRBs. We also work in the rest frame to be able to compare the results of this paper with previously found correlations at other wavelengths by Oates et al. (2012) and Racusin et al. (2016). This redshift requirement introduces some selection biases. The gamma-ray emission of GRBs is in general not well localised. Unlike _Swift_, _Fermi_ does not have narrow field instruments onboard and so follow-up of the GRB afterglow at longer wavelengths, which provides better positional accuracy and enables spectroscopic follow-up, occurs later than for _Swift_ detected GRBs. This implies that spectroscopic follow-up of _Fermi_-LAT detected GRBs is only achievable for those that have afterglows bright enough at late times to obtain a spectrum. We attempt to quantify this selection bias by comparing the distributions of isotropic gamma-ray energy, \(E_{iso}\), of this sample with the whole LAT catalogue, using the GBM measured \(E_{iso}\); the isotropic energy is correlated with afterglow brightness (e.g. D'Avanzo et al., 2012; Margutti et al., 2013; Oates et al., 2015). These distributions are shown in Figure 4, together with the \(E_{iso}\) values of the entire GBM sample. A bias towards brighter GRBs is apparent visually in Figure 4. To address whether the distributions are statistically different, we ran two-sample Anderson-Darling tests comparing our sample (red) to the LAT sample (blue) and to the GBM sample (green), which give p=0.07 and p=0.01 respectively. The comparison to the LAT sample is only marginally significant and likely due to the small size of our sample. However, comparison with the GBM sample is more significant and indicates we are biased towards energetic events. ## 5 Conclusions We examined a sample of 13 LAT light curves to determine the relationship between the intrinsic early time luminosity, \(L_{\rm G,10s}\), and average decay index, \(\alpha_{\rm G,avg>10s}\).
From the Spearman's rank test we found a coefficient of \(-0.74\pm 0.19\) and p-value \(4.11\times 10^{-3}\), indicating a correlation is present such that the brightest GeV light curves decay on average faster than fainter GeV light curves. A linear regression between the two parameters gives \(\alpha_{\rm G,avg>10s}=-0.31^{+0.12}_{-0.09}\log(L_{\rm G,10s})+14.43^{+4.55}_{-5.97}\), consistent with optical/UV and X-ray measurements of a similar correlation to within \(0.4\sigma\) and \(1.4\sigma\) in the slope and intercept, respectively. This consistency suggests the mechanism producing the GeV luminosity-decay correlation is the same as that producing the correlation observed in the optical/UV and X-ray light curves. It suggests that they are all produced by the same emission component, further supporting the forward shock being the dominant emission mechanism of GRB GeV light curves from around rest frame 10s onward, at least for the GRBs in this sample. Due to the sample size and requirement of redshifts, we have discussed possible selection biases and how representative our sample is compared to LAT detected and GBM detected GRBs; statistical tests suggest our sample is biased towards energetic GRBs. Figure 4: Isotropic gamma-ray energy, \(E_{iso}\), distribution of the GeV afterglows in this sample (red), the 2nd LAT catalogue (blue) and the GBM measured \(E_{iso}\) (green; Poolakkil et al., 2021). ## 6 Acknowledgments The _Fermi_ LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat a l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucleaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K. A. Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'Etudes Spatiales in France. This work was performed in part under DOE Contract DE-AC02-76SF00515. ## 7 Data availability The data underlying this article were obtained from the 2nd _Fermi_/LAT GRB catalogue (Ajello et al., 2019). The data are available at [https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermilgrb.html](https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermilgrb.html) and [https://www-glast.stanford.edu/pub_data/1874/](https://www-glast.stanford.edu/pub_data/1874/)
2309.09955
The Protein Engineering Tournament: An Open Science Benchmark for Protein Modeling and Design
The grand challenge of protein engineering is the development of computational models that can characterize and generate protein sequences for any arbitrary function. However, progress today is limited by lack of 1) benchmarks with which to compare computational techniques, 2) large datasets of protein function, and 3) democratized access to experimental protein characterization. Here, we introduce the Protein Engineering Tournament, a fully-remote, biennial competition for the development and benchmarking of computational methods in protein engineering. The tournament consists of two rounds: a first in silico round, where participants use computational models to predict biophysical properties for a set of protein sequences, and a second in vitro round, where participants are challenged to design new protein sequences, which are experimentally measured with open-source, automated methods to determine a winner. At the Tournament's conclusion, the experimental protocols and all collected data will be open-sourced for continued benchmarking and advancement of computational models. We hope the Protein Engineering Tournament will provide a transparent platform with which to evaluate progress in this field and mobilize the scientific community to conquer the grand challenge of computational protein engineering.
Chase Armer, Hassan Kane, Dana Cortade, Dave Estell, Adil Yusuf, Radhakrishna Sanka, Henning Redestig, TJ Brunette, Pete Kelly, Erika DeBenedictis
2023-09-18T17:26:25Z
http://arxiv.org/abs/2309.09955v2
# The Protein Engineering Tournament: ###### Abstract The grand challenge of protein engineering is the development of computational models that can characterize and generate protein sequences for any arbitrary function. However, progress today is limited by lack of 1) benchmarks with which to compare computational techniques, 2) large datasets of protein function, and 3) democratized access to experimental protein characterization. Here, we introduce the Protein Engineering Tournament, a fully-remote, biennial competition for the development and benchmarking of computational methods in protein engineering. The tournament consists of two rounds: a first in silico round, where participants use computational models to predict biophysical properties for a set of protein sequences, and a second in vitro round, where participants are challenged to design new protein sequences, which are experimentally measured with open-source, automated methods to determine a winner. At the Tournament's conclusion, the experimental protocols and all collected data will be open-sourced for continued benchmarking and advancement of computational models. We hope the Protein Engineering Tournament will provide a transparent platform with which to evaluate progress in this field and mobilize the scientific community to conquer the grand challenge of computational protein engineering. ## 1 Introduction Protein engineering is the discipline of modifying or designing protein DNA sequences to create new functions or improve upon existing ones[1]. It has historically been used to create new enzymes for industrial processes[2], develop new biologics with improved efficacy[3], and engineer proteins for better tasting plant-based burgers[4]. Many current global challenges are also being addressed through protein engineering, such as: reducing greenhouse gas emissions[5], recycling plastic[6], diagnosing biomarkers related to rare diseases[7] and the development of biologics targeting infectious diseases[8]. In order to develop increasingly sophisticated protein solutions, powerful computational models are needed to guide the design of proteins with new or improved functionality. The field of computational protein engineering aims to guide the design of proteins by developing predictive[9] and generative[10] models; however, several obstacles are currently limiting model development. Although various machine learning methods have been developed over the years to take advantage of existing evolutionary, structural, and assay data, the lack of complex, publicly available datasets, limited experimental reproducibility, and absence of infrastructure for benchmarking models all impede model development and validation. The majority of publicly available datasets[11] capturing sequence to function data are relatively simple, mapping single point mutations[12] to a single biophysical property per experimental condition. When designing new variants, computational scientists often lack the means[13] to experimentally reproduce the conditions in which certain datasets were created and therefore cannot reliably perform comparisons against existing models. Until now, there have been no common benchmarks nor infrastructure for experimentally assessing such models. The Protein Engineering Tournament aims to tackle the aforementioned obstacles by curating a series of tournaments centered on various protein engineering challenges. 
By providing a transparent platform for benchmarking protein design methods, generating publicly available datasets, and developing open-sourced infrastructure for automated experimentation, the Tournament hopes to reduce barriers to model development and validation (Figure 1A). The Tournament will act as a platform for accelerating development in computational protein design (Figure 1B), enabling computational scientists to benchmark their models by both predicting protein properties and generating novel proteins with improved functionality. To achieve this goal, the Tournament includes both an in silico round and an in vitro round (Figure 1C). The Tournament also aims to inspire more machine learning scientists to contribute to the field of protein engineering by making the field more accessible and transparent. The Tournament will engage with the field of machine learning to solve technical problems that uniquely arise in protein engineering and develop the infrastructure it needs to design, build, and test better models and protein engineering strategies. Furthermore, we hope to engage with the community by showcasing challenges with high societal value that are currently unsolved. To achieve this goal, we will intentionally focus on both technically challenging tasks and use cases which may be overlooked by academia and/or industry. Figure 1: **Overview of the Tournament.** **(A)** The tournament impacts the space by providing a transparent benchmark for computational methods, open datasets for the research community, and automated protocols for continued independent benchmarking. **(B)** The Tournament is designed to accelerate creation of computational techniques that can predict the biophysical properties for any given protein and generate a protein with any desired properties. **(C)** The Tournament consists of two sequential rounds: an in silico round for predicting properties of proteins and an in vitro round for generating proteins with specific properties. ## 2 Related Approaches Open datasets have long provided valuable opportunities for developing and benchmarking new methods in machine learning research. Computer vision datasets, such as MNIST[14] and ImageNet[15], not only provided individual research labs with a substrate for experimenting with new approaches but also created a yardstick with which to measure collective progress. Researchers in the protein engineering community have also utilized open datasets and introduced tasks, such as FLIP[16], TAPE[17], and ProteinNet[18], to encourage similar developments. These datasets have challenged researchers to develop models performing a wide range of tasks, such as predicting biophysical properties on both protein and amino acid levels, as well as predictions on variant effects and structural features. Science competitions take these efforts a step further by allowing researchers to test their computational methods on never-before-seen datasets. Perhaps the most notable example is the Critical Assessment of Structure Prediction (CASP)[19], a biennial event for computational protein structure prediction. Since its inception, CASP has become a crucial benchmark for the protein structure prediction community. By creating visibility around a single event, the competition has inspired an ambitious spirit among researchers to develop the best performing method, thereby encouraging a strong pace of method development.
Moreover, a well-known competition can offer an accessible entry point for research groups outside of the field to participate, as demonstrated by Deepmind's participation in the 2018 CASP competition[20]. CASP has inspired the creation of similar competitions, like the Critical Assessment of Computational Hit-finding Experiments (CACHE)[21], which was created in the computational chemistry field to benchmark novel approaches for finding new small-molecule binders. We believe there is a burgeoning opportunity to create a new scientific competition that addresses the unique challenges of predicting and engineering protein function. Computational research groups which lack the ability to experimentally characterize engineered proteins are currently unable to meaningfully evaluate the performance of their protein engineering methods. By introducing never-before-seen datasets on protein function and offering open-source experimental characterization of novel proteins, the Protein Engineering Tournament hopes to overcome this barrier and enable research groups from all backgrounds to participate in cutting-edge protein engineering research. In doing so, we expect the Tournament will become a unifying benchmark for the field. ## 3 The Protein Engineering Tournament ### Tournament structure The tournament will consist of two sequential rounds: the in silico round and the in vitro round (Figure 2).

Figure 2: **Overview of Tournament Rounds. The tournament consists of two rounds. (A) In the in silico round participants will predict properties of given protein sequences, with optional training data provided for specific events. Performance is evaluated by comparing the participant's predictions with the ground-truth values. Top performing participants will be selected to advance to the in vitro round. (B) In the in vitro round participants will be asked to generate protein sequences based on desired properties, which will then be expressed and characterized using cloud labs. Performance of the designed sequences will be evaluated as a weighted combination of the protein's properties; the exact evaluation metric will be event-dependent.**

In the in silico round teams will be tasked with developing predictive models which can infer the biophysical properties of protein sequences (Figure 2A). The in silico round contains the option to either directly predict biophysical properties based on protein sequence (zero-shot learning) or to pre-train a model on an optional training dataset (supervised learning). After choosing their method, participants will be asked to predict the biophysical properties for a held-out test set of protein sequences, such as their stability, expressibility, and activity. Submissions will be evaluated by a comparison between predictions and experimental data, using statistics such as the Spearman correlation. Each biophysical property will be assessed independently, and participants will be allowed to submit predictions for as few or as many of these properties as they desire. Therefore, research labs that are building specialized computational tools for one specific property of interest, such as a stability predictor or an enzymatic activity predictor, will be able to focus on submitting predictions for that property alone. In contrast, research teams focused on applying state-of-the-art machine learning techniques to biological datasets, for example, may be more interested in submitting predictions for all available properties.
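To make the per-property evaluation concrete, the sketch below shows one way such a Spearman-based comparison could be computed. It is purely illustrative: the property names, data layout, and use of scipy are our own assumptions, not the Tournament's actual scoring pipeline.

```python
# Illustrative sketch of per-property in silico scoring; property names and
# data layout are assumptions, not the Tournament's actual format.
from scipy.stats import spearmanr

def score_submission(predictions, measurements,
                     properties=("stability", "expressibility", "activity")):
    """Return one Spearman correlation per property, skipping properties a team did not submit."""
    scores = {}
    for prop in properties:
        if prop not in predictions:
            continue  # teams may submit as few or as many properties as they like
        shared_ids = sorted(set(predictions[prop]) & set(measurements[prop]))
        predicted = [predictions[prop][seq_id] for seq_id in shared_ids]
        measured = [measurements[prop][seq_id] for seq_id in shared_ids]
        rho, _ = spearmanr(predicted, measured)
        scores[prop] = rho
    return scores

# Toy example: a team that only submitted stability predictions
predictions = {"stability": {"seq1": 0.7, "seq2": 0.2, "seq3": 0.9}}
measurements = {"stability": {"seq1": 55.0, "seq2": 41.0, "seq3": 63.0},
                "activity": {"seq1": 1.2, "seq2": 0.1, "seq3": 2.4}}
print(score_submission(predictions, measurements))  # Spearman for 'stability' only
```

Because each property is scored independently, a specialist lab and a generalist team can both be ranked fairly on whatever subset of properties they choose to submit.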
Once submissions are closed and the performance of each team has been evaluated, the final leaderboard of the in silico round will be published and the datasets used will be made publicly available. The highest performing teams will then advance to the in vitro round. In the in vitro round (Figure 2B), the teams will be asked to design protein sequences that maximize or satisfy certain biophysical properties (e.g., an enzyme design challenge may ask for sequences which maximize enzymatic activity while staying above a specified threshold for stability and expressibility). Each team will submit a list of amino acid sequences that will then be synthesized and experimentally characterized using automated laboratory protocols developed by the Tournament and its partners. Once experimentally characterized, a ranking algorithm will evaluate the submissions to produce a score for each participating team. As an example, a submission score may be produced by calculating the normalized discounted cumulative gain (NDCG) for the enzymatic activity of all sequences above the required thresholds of stability and expressibility. The exact evaluation metric will depend on the protein target in question and will be tailored to the academic or industrial use-case for which it is being studied. At the conclusion of the in vitro round, the Tournament will publish the final leaderboard, and the team with the highest performing proteins will be awarded the title of Protein Engineering Champion. The characterized protein sequences in the in vitro round and the datasets of the in silico round will be made publicly available. Furthermore, the automated protocols used to experimentally characterize the designed proteins will be made available to the public for continued use. ### Participation The first round of the Protein Engineering Tournament, the in silico round, will be open to any and all researchers interested in participating. Interested teams composed of one or more individuals will be able to download challenge data and upload final predictions. The in silico challenge will be open to the community for a specified amount of time, after which the submissions will be evaluated and the final leaderboard will be released. To allow the greatest number of teams to participate in the in vitro round, while being conscious of the costs associated with DNA synthesis and experimental characterization, we propose three avenues for admission: 1) top performance in the in silico round, 2) submitting a written application, and 3) paid entry for corporate researchers (Figure 3). By offering these three avenues for participating in the in vitro challenge we will increase the accessibility of the tournament while maximizing opportunities for participation, community impact, and benchmarking of new methods. First, the top-performing teams in the in silico round will be offered the opportunity to advance to the in vitro round (Figure 3A). Since the in silico round is open to any team or individual with access to a computer, with no entry fee or requirement on prior research experience, this path focuses on rewarding successful computational approaches and increasing the accessibility of the tournament. Second, we will provide an application system that allows researchers to apply directly for a spot in the in vitro round (Figure 3B). The value of providing an application-based path to entry is borne from the fundamental differences that exist between the in silico and in vitro challenges.
There are likely research groups who have developed impressive methods for generative protein design, which would be well-suited for the in vitro challenge, but have no prior experience with the property prediction tasks requested in the in silico challenge. Without a separate application process, these groups may perform poorly in the in silico round, or simply may not choose to participate at all, and their generative methods would not have the opportunity to be benchmarked in the Tournament. For the third and final path to entry, participants will be allowed to pay a fee which covers the cost of experimentation (Figure 3C). With this option, funding for the Protein Engineering Tournament no longer becomes the limiting factor on the number of participants that can compete and the number of methods that can be evaluated. This path could provide entry to well-funded corporate research labs capable of managing these costs, which further allows our application system discussed above to focus on providing entry to promising, less well-funded research groups. Our belief is that by offering these three avenues for participating in the in vitro challenge we will increase the accessibility of the tournament and maximize opportunities for participation and benchmarking of new methods.

Figure 3: **Round Advancement.** There will be three avenues for participants to enter into the in vitro round, with each avenue catering to a unique audience. **(A)** Research groups specializing in generative design methods can apply to directly enter the in vitro round via an application submission. **(B)** The top performing teams from the in silico round will be invited to participate in the in vitro round. **(C)** Groups can pay to enter the in vitro round directly.

### Selecting our protein engineering challenges More than benchmarking the field, we believe the Tournament will become a powerful engine for making meaningful progress on important protein engineering problems. Many of the most impactful protein engineering applications, from climate technology and green manufacturing to antivirals and diagnostics, are not being fully addressed by current research efforts. The Protein Engineering Tournament will become a vehicle for connecting cutting-edge researchers with the experimental resources necessary to make headway on important societal challenges that are not traditionally addressed by industry or academia (Figure 4A). While industry possesses significant resources to dedicate towards protein engineering problems, they are often restricted to applications which can recover the cost of research and generate profit. Conversely, while academia is capable of focusing on important problems without the constraints of profitability, research labs often lack the resources necessary to tackle large protein engineering efforts. This leaves a research gap where many applications with significant societal benefits have been under-resourced. With this consideration in mind, our protein engineering challenges will be selected with a strong focus on real-world impact. The Tournament can be used to continually advance the field of protein engineering by selecting problems that push the limits of current techniques. Proteins possess a diverse array of functions, from enzymatic catalysis and molecular binding to chemical transportation, and our protein engineering challenges in the in silico and in vitro rounds will continuously evolve over time to reflect this myriad of functional possibilities.
The first Tournament will likely focus on a single function, with enzyme engineering or protein binder design as strong initial candidates, but in future tournaments, the design challenges will expand to encompass more domains of function (Figure 4B). The order in which we introduce new functions will be driven by practical application, technical feasibility, and amenability to high-throughput experimentation. As our computational methods improve, our challenges will expand into increasingly more difficult and complex domains, such that the frontier of scientific capabilities is always represented in the Tournament's challenges. The final protein engineering challenges will be decided on by our Target Selection team, with input from our scientific advisory board, philanthropic funding partners, and the larger scientific community (Figure S1). The scientific advisory board will be composed of researchers in academia and industry. Their knowledge on the current frontier of challenges and opportunities in the field of protein engineering will further guide our selection of protein challenges. Additionally, we will invite input from funding partners who will use this opportunity to help advance the application of engineered proteins to problems salient to their philanthropic goals.

Figure 4: **Selecting Protein Design Challenges.** **(A)** The Tournament provides a unique opportunity to highlight important protein engineering challenges that would otherwise fall outside the purview of both academic and industry incentives. **(B)** The Tournament will continually expand to new domains of protein design; in each domain, we will continually select challenges that push the limits of current techniques.

## 4 Tournament Creation Engine The Tournament represents a long-term investment in benchmarking the field of protein engineering. The tournament will produce new datasets for method development, new automated assays for protein characterization, and new benchmarked results for the community's current-best algorithms, with the biennial cycle of the tournament also creating an opportunity to routinely take stock of current methods and assess the state of the field (Figure 5). ### Cloud laboratories and method development To maximize our impact, we want the assays we develop for each tournament to be available long after the tournament has concluded. To accomplish this goal and capitalize on the benefits of automated experimentation, we will design and execute our experimental workflows in cloud laboratories (Figure 5). Commercial cloud science laboratories, such as Emerald Cloud Labs, and academic cloud science laboratories, such as those found at Boston University and Carnegie Mellon, enable scientists to forgo the lab bench in favor of running experimental biology protocols in automation-enabled facilities. In this paradigm researchers write their assays in a symbolic laboratory programming language that specifies the instructions for each step of their protocol. Once written, scientists can queue up their experiments, wet-lab robots execute each step, and the resulting data is uploaded for analysis. This approach offers the potential to greatly improve the accessibility and reproducibility of life science research by enabling scientists to share experimental protocols as easily as we share software. Experimental assays written with a symbolic lab language can be uploaded to code-sharing websites like Github, allowing any researcher from around the world to access and reproduce this work.
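As a purely hypothetical illustration of the idea of a machine-readable protocol, the sketch below encodes a toy workflow as plain Python data. It is not Emerald Cloud Lab's symbolic language, nor any real cloud-lab API; the step names and fields are invented solely to show how such a protocol can be versioned and shared like software.

```python
# A purely illustrative, hypothetical representation of a shareable protocol.
# This is NOT a real cloud-lab language; step names and fields are invented.
protocol = {
    "name": "expression_and_activity_screen_v0",
    "steps": [
        {"op": "Transform", "host": "E. coli BL21", "plasmid_source": "plate_1"},
        {"op": "Incubate", "temperature_C": 37, "duration_h": 16},
        {"op": "MeasureAbsorbance", "wavelength_nm": 600},      # crude expression proxy
        {"op": "EnzymeAssay", "substrate": "substrate_X", "read": "kinetic"},
    ],
}

# Because the protocol is plain data, it can be diffed, versioned, and shared
# on GitHub, then submitted to an automation-enabled facility for execution.
import json
print(json.dumps(protocol, indent=2))
```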
Cloud laboratories will also improve the productivity of biological experimentation by transferring time-consuming experimental work from the hands of human researchers onto the decks of precise, high-throughput robots. Cloud laboratories will serve as the experimental backbone of the Protein Engineering Tournament to ensure our in vitro experiments are high-throughput, reproducible, and accessible (Figure 5). At the conclusion of each Tournament, the protocols we developed to execute characterization assays in the in vitro round will be made openly available to the scientific community. Therefore, researchers will be able to continually benchmark their computational methods on the same standardized assays even after the Tournament has concluded. Since each Tournament will introduce new protein engineering challenges, this approach will lead to an ever-expanding corpus of open-source protein engineering workflows to help benchmark new computational methods for years to come. ### Generating datasets The automated experimental protocols discussed above will be used to generate the datasets for the in silico and in vitro rounds and also be used to generate additional data based on participants' submitted protein sequences during the in vitro round. After the Tournament has ended, all data produced during the Tournament will be made publicly available along with the corresponding automated protocols (Figure 5). Furthermore, the Tournament can act as an avenue for academic and corporate research entities to make unpublished datasets of protein function available as predictive challenges in the in silico round. ### Analysis of the State of the Field Finally, we will aggregate the learnings from our tournament's results into a State of the Field report (Figure 5). This report will analyze the performance of our participants' computational approaches, noting the relative standing of different techniques across different challenges, and discussing our collective progress throughout the various domains of protein engineering.

Figure 5: **Tournament Creation Engine.** Creating the Protein Engineering Tournaments will be a cyclical process. We will select new protein design challenges, develop automated assays for these challenges, host a tournament with these new assays and the datasets they've produced, and evaluate the tournament results. The assays, datasets, and evaluation will be open-sourced.

### Governance The Protein Engineering Tournament is operated by Align to Innovate, a non-profit dedicated to improving the reproducibility, scalability, and shareability of life science research through community-driven initiatives. The Tournament will be run by a combination of Align to Innovate employees and volunteers from the protein engineering community (Figure S1). The Tournament Coordinator, whose responsibilities entail coordinating the teams to ensure a successful tournament, will be a full-time employee of Align to Innovate. Members of the Target Selection, Data Science, Organization and Outreach, and Cloud Assay Development teams will largely be composed of volunteers from the research community, in addition to supporting members from within the Align to Innovate team. ## 5 Pilot Tournament A pilot tournament began May 1st 2023 with the theme of Enzyme Design based on six datasets received from both industry and academic groups.
Initial interest in the pilot tournament led to the registration of just over 30 teams, representing a mix of academic (55%), industry (30%), and independent (15%) teams, with research experience running from Nobel Laureates to high school students. For the pilot tournament, the in vitro round experimentation will be performed in-house by a corporate partner. Development of the cloud laboratory assays for future tournaments is currently underway within Align to Innovate, the non-profit parent organization of the Protein Engineering Tournament. ## 6 Conclusion The Protein Engineering Tournament introduces an innovative, community-driven platform to accelerate the advancement of computational protein engineering. This open science initiative combines in silico and in vitro methods, employing cloud laboratories to ensure reproducibility and continued access to experimental workflows. By creating a unified benchmark, the Tournament stimulates collaboration and competition among researchers and presents opportunities for the community to test their computational models on novel protein engineering challenges. It offers a dynamic space for academia, industry, and independent groups to broaden the horizons of protein engineering, particularly in areas of high societal value that might be currently underserved. Further, through its integration with Align to Innovate, the Tournament builds upon the strengths of a diverse scientific community, fostering transparency and sharing of knowledge. As we look ahead to the first official Tournament, we anticipate that this initiative will contribute significantly to the evolving landscape of protein engineering, driving forward both technical and translational breakthroughs. By democratizing access to protein design, we aspire to inspire a new generation of computational protein engineers and guide the future of life science research.
2305.19887
The Markov chain embedding problem in a low jump frequency context
We consider the problem of finding the transition rates of a continuous-time homogeneous Markov chain under the empirical condition that the state changes at most once during a time interval of unit length. It is proven that this conditional embedding approach results in a unique intensity matrix for a transition matrix with non-zero diagonal entries. Hence, the presented conditional embedding approach has the merit to avoid the identification phase as well as regularization for the embedding problem. The resulting intensity matrix is compared to the approximation for the Markov generator found by Jarrow.
Philippe Carette, Marie-Anne Guerry
2023-05-31T14:24:25Z
http://arxiv.org/abs/2305.19887v1
# The Markov chain embedding problem in a low jump frequency context ###### Abstract We consider the problem of finding the transition rates of a continuous-time homogeneous Markov chain under the empirical condition that the state changes at most once during a time interval of unit length. It is proven that this conditional embedding approach results in a unique intensity matrix for a transition matrix with non-zero diagonal entries. Hence, the presented conditional embedding approach has the merit to avoid the identification phase as well as regularization for the embedding problem. The resulting intensity matrix is compared to the approximation for the Markov generator found by Jarrow in [1]. ## 1 Introduction The embedding problem of Markov chains is a long standing problem where a given stochastic matrix is examined as the 1-step transition matrix of some continuous-time homogeneous Markov chain (CTHMC) ([2, 3]). This problem boils down to characterizing the empirical transition matrix \(\widehat{\mathbf{P}}\) as the exponential of some matrix \(\mathbf{Q}\) with all non-negative off-diagonal entries and zero row-sums, called an intensity matrix. This matrix \(\mathbf{Q}\) represents the transition rates of the underlying CTHMC. If such a \(\mathbf{Q}\) exists, \(\widehat{\mathbf{P}}\) is said to be embeddable. It turns out that the embedding problem is a formidable one in a number of respects. First, \(\widehat{\mathbf{P}}\) may not be embeddable. In that case, a regularization algorithm can be used to find an intensity matrix \(\mathbf{Q}\) for which \(||\widehat{\mathbf{P}}-\exp(\mathbf{Q})||\) is minimized ([4, 5, 6]). Next, no embeddability criteria in terms of the matrix elements, which are easily verifiable in practice, seem at hand when the number of states exceeds 3. Lastly, for an embeddable \(\widehat{\mathbf{P}}\), there may not be a unique solution to the equation \(\exp(\mathbf{Q})=\widehat{\mathbf{P}}\) in the set of intensity matrices. The identification aspect of the embedding problem deals with the selection of the suitable intensity matrix reflecting the nature of the system under study ([7]). More recently, model specific embedding problems are studied for specific subcategories of transition matrices ([8, 9, 10, 1]). In modeling a specific context, the transition matrix as well as the generator matrix are expected to reflect the characteristics of the system under study. Hence, the transition matrix is subjected to constraints and, therefore, belongs to a specific subset of stochastic matrices, and similar, the generator matrix is expected to be an element of a specific subset of intensity matrices. Whereas model specific embedding problems are characterized by setting model assumptions and restrictions on the transition matrix, this paper presents an embedding approach that incorporates empirical assumptions. More specifically, we propose the conditional embedding approach where the empirical 1-step transition matrix \(\widehat{\mathbf{P}}\) corresponds with the conditional 1-step transition matrix of the CTHMC given the event that at most one jump has occurred during a time interval of unit length. For a Markov model the unit time interval can be defined in such a way that the empirical 1-step transition matrix meets this condition. Moreover, this condition is inherent in some applications. 
For example, in credit rating migration models the credit ratings are typically based on slowly varying characteristics, such that they do not tend to change more than once within the baseline time interval (e.g. a quarter). We found that, regardless of the number of states, exactly one intensity matrix solves this conditional embedding problem when \(p_{ii}>0\) for all \(i\). Our approach results in an easy embeddability criterion and requires neither identification nor regularization. Moreover, the presented conditional embedding approach and its proven properties result in an embeddability roadmap reflecting that the conditional embedding approach is most useful in case either the transition matrix is not embeddable or no unique Markov generator can be identified based on the context of the system. ## 2 Conditional transition probabilities In order to state the conditional embedding problem, we first introduce the concept of conditional transition probability. Consider a continuous-time homogeneous Markov chain (CTHMC) \((X_{t})_{t\geq 0}\) on a probability triple \((\Omega,\mathcal{F},\mathbb{P})\) with state space \(\mathcal{S}=\{1,2,\ldots,n\}\). **Definition 1**.: _Let \(E\in\mathcal{F}\). We call the matrix \(\mathbf{P}^{E}\) with elements_ \[p_{ij}^{E}=\mathbb{P}(X_{1}=j\,|\,X_{0}=i\,,\,E),\quad i,j\in\mathcal{S}\text{,}\] _the conditional one-step transition probability matrix given the event \(E\) of the chain \((X_{t})_{t\geq 0}\)._ _Remark_.: The usual (unconditional) one-step transition probabilities \(p_{ij}=\mathbb{P}(X_{1}=j\,|\,X_{0}=i)\) can be obtained by setting \(E=\Omega\), that is, \(p_{ij}=p_{ij}^{\Omega}\). In the remainder of this paper, we are interested in the event \(E=\{N_{\!J}\leq 1\}\), where \(N_{\!J}\) is the random variable counting the state changes or jumps of the CTHMC up to time \(t=1\). The relationship between the conditional transition matrix \(\mathbf{P}^{\{N_{\!J}\leq 1\}}\) and the transition rate matrix \(\mathbf{Q}\) of the CTHMC is given by the following proposition. **Proposition 1**.: _For a CTHMC with transition rate matrix \(\mathbf{Q}=(q_{ij})\), it holds that_ \[p_{ij}^{\{N_{\!J}\leq 1\}}=\frac{p_{ij}^{*}}{\sum_{k=1}^{n}p_{ik}^{*}}\quad\text{for all $i$ and $j$,}\] _where_ \[p_{ij}^{*}=\begin{cases}q_{ij}\,\tau(q_{ii},q_{jj})&\text{if $i\neq j$}\\ \tau(q_{ii},q_{ii})&\text{if $i=j$}\end{cases}\] _and where the function \(\tau:\mathbb{R}^{2}\to\mathbb{R}\) is defined as_ \[\tau(x,y)=\int_{0}^{1}\mathrm{e}^{ux+(1-u)y}\,\mathrm{d}u=\begin{cases}\frac{\mathrm{e}^{x}-\mathrm{e}^{y}}{x-y}&\text{if $x\neq y$}\\ \mathrm{e}^{x}&\text{if $x=y$}\end{cases}. \tag{1}\] Proof.: Using the definition of conditional probability, \[\mathbb{P}(A\,|\,B\cap C)=\frac{\mathbb{P}(A\cap B\,|\,C)}{\mathbb{P}(B\,|\,C)},\quad\text{if $\mathbb{P}(B\,|\,C)>0$.}\] Hence, \[p_{ij}^{\{N_{\!J}\leq 1\}}=\mathbb{P}(X_{1}=j\,|\,X_{0}=i,N_{\!J}\leq 1)=\frac{\mathbb{P}(X_{1}=j,N_{\!J}\leq 1\,|\,X_{0}=i)}{\mathbb{P}(N_{\!J}\leq 1\,|\,X_{0}=i)}. \tag{2}\] Let us denote \(p_{ij}^{*}=\mathbb{P}(X_{1}=j,N_{\!J}\leq 1\,|\,X_{0}=i)\). Using the sum rule for disjoint events, we then have \[p_{ij}^{\{N_{\!J}\leq 1\}}=\frac{p_{ij}^{*}}{\sum_{k=1}^{n}p_{ik}^{*}}.\] Let us now calculate \(p_{ij}^{*}\), which is the joint probability of being in state \(j\) at \(t=1\) in at most one jump, starting from state \(i\) at \(t=0\).
For \(k\in\{1,\ldots,n\}\), let \(f_{k}\) be the density function of the holding time \(H_{k}\) in state \(k\) and \(F_{k}\) the associated cumulative distribution function. For \(i\neq j\), denote by \(s_{ij}\) the transition probability from state \(i\) to state \(j\) conditional on transitioning out of state \(i\). Marginalising on \(H_{i}\), we find for \(i\neq j\) \[p_{ij}^{*} =\mathbb{P}(H_{i}<1,X_{H_{i}}=j,H_{j}>1-H_{i}\,|\,X_{0}=i)\] \[=\int_{0}^{1}\mathbb{P}(X_{u}=j,H_{j}>1-u\,|\,H_{i}=u,X_{0}=i)f_{i} (u)\,\mathrm{d}u\] \[=\int_{0}^{1}\mathbb{P}(H_{j}>1-u\,|\,X_{u}=j,H_{i}=u,X_{0}=i) \mathbb{P}(X_{u}=j\,|\,H_{i}=u,X_{0}=i)f_{i}(u)\,\mathrm{d}u\] \[=\int_{0}^{1}(1-F_{j}(1-u))\,s_{ij}\,f_{i}(u)\,\mathrm{d}u\] and \[p_{ii}^{*}=\mathbb{P}(H_{i}>1\,|\,X_{0}=i)=1-F_{i}(1).\] Since \(H_{k}\) has an exponential distribution with rate parameter \(-q_{kk}\) and since \(s_{ij}=\frac{q_{ij}}{-q_{ii}}\) (\(i\neq j\)), we have for \(i\neq j\) \[p_{ij}^{*}=\int_{0}^{1}\mathrm{e}^{q_{jj}(1-u)}\,\frac{q_{ij}}{-q_{ii}}\,(-q_ {ii})\mathrm{e}^{q_{ii}u}\,\mathrm{d}u=q_{ij}\int_{0}^{1}\mathrm{e}^{q_{ii}u+q _{jj}(1-u)}\,\mathrm{d}u=q_{ij}\,\tau(q_{ii},q_{jj})\] where the function \(\tau\) is defined as in (1). Finally, \(p_{ii}^{*}=1-F_{i}(1)=\mathrm{e}^{q_{ii}}=\tau(q_{ii},q_{ii})\). One can remark that Minin et al. [11] arrive at the same result for \(p_{ij}^{*}\) using a recursive relation for the joint probabilities \(\mathbb{P}(X_{1}=j\,|\,X_{0}=i,N_{\!J}=n)\), \(i,j\in\mathcal{S}\). **Corollary 1**.: _For all \(i\), we have that \(p_{ii}^{\{N_{\!J}\leq 1\}}>0\)._ Proof.: This follows from the fact that \(p_{ii}^{*}=\tau(q_{ii},q_{ii})=\mathrm{e}^{q_{ii}}>0\). _Remark_.: An alternative argument for corollary 1 goes as follows. Given the event \(\{N_{\!J}\leq 1\}\), the only way of going from state \(i\) at time \(t=0\) to state \(i\) at time \(t=1\), is to remain in that state throughout the entire time interval from \(t=0\) to \(t=1\). The probability of this event is non-zero, since the holding time in a state has an exponential distribution. According to proposition 1, the conditional transition matrix \(\mathbf{P}^{\{N_{\!J}\leq 1\}}\) depends on the transition rate matrix \(\mathbf{Q}\) of the CTHMC involved. In what follows, and when needed, we explicitly indicate this dependency using the notation \(\mathbf{P}^{\{N_{\!J}\leq 1\}}(\mathbf{Q})\). ## 3 Conditional embedding problem When building a discrete time Markov model, the choice of the time unit and time interval is important to end up with a valid model ([12]). In this respect, an appropriate choice can be made by comparing for diverse values of the time unit the internal validity of the corresponding models. The internal validity of a model is determined by the discrepancy between the observed stock vectors and the stock vectors that are estimated by the model. Based on goodness of fit tests a time unit can be selected that results in a model for which the discrepancy between observed and estimated stock vectors is limited. ([13]). For an appropriate time unit it is acceptable to assume that there is at most \(1\) jump in between \(t=0\) and \(t=1\). Indeed, more than \(1\) jump during a one-unit time interval would result in a situation where the transitions to and from the intermediate state are not captured by the discrete time Markov model. 
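Before turning to the inverse question, it may help to see Proposition 1 in computational form. The sketch below (our own illustration, not code from the paper) evaluates the forward map: given an intensity matrix \(\mathbf{Q}\), it returns the conditional one-step transition matrix \(\mathbf{P}^{\{N_{\!J}\leq 1\}}(\mathbf{Q})\).

```python
# Minimal sketch of the forward map in Proposition 1: given an intensity matrix
# Q, compute the conditional one-step transition matrix P^{N_J <= 1}(Q).
# Illustrative only; variable names are ours, not taken from the paper.
import numpy as np

def tau(x, y):
    """tau(x, y) = (e^x - e^y)/(x - y), with the limiting value e^x when x == y (eq. (1))."""
    if np.isclose(x, y):
        return np.exp(x)
    return (np.exp(x) - np.exp(y)) / (x - y)

def conditional_transition_matrix(Q):
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    P_star = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                P_star[i, j] = np.exp(Q[i, i])                   # stay in i the whole interval
            else:
                P_star[i, j] = Q[i, j] * tau(Q[i, i], Q[j, j])   # exactly one jump i -> j
    return P_star / P_star.sum(axis=1, keepdims=True)            # row-normalise

# Example: a simple 2-state chain
Q = np.array([[-0.4,  0.4],
              [ 0.3, -0.3]])
print(conditional_transition_matrix(Q))
```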
A question that then naturally arises is whether, for a given stochastic matrix \(\mathbf{P}\), there exists an intensity matrix \(\mathbf{Q}\) such that \(\mathbf{P}^{\{N_{J}\leq 1\}}(\mathbf{Q})=\mathbf{P}\), and if so, whether such an intensity matrix \(\mathbf{Q}\) is unique. It will be helpful to introduce some terminology before proceeding. **Definition 2**.: _A stochastic matrix \(\mathbf{P}\) is called \(J_{1}\)-embeddable iff there exists a CTHMC with transition rate matrix \(\mathbf{Q}\) satisfying \(\mathbf{P}=\mathbf{P}^{E}(\mathbf{Q})\), where \(E\) is the event that the CTHMC changes state at most once between \(t=0\) and \(t=1\). Such a transition rate matrix is called a \(J_{1}\)-generator of \(\mathbf{P}\)._ For a transition matrix \(\mathbf{P}\) that is not embeddable a \(J_{1}\)_-generator_ can be seen as a solution to the generalization problem where the intensity matrix \(\mathbf{Q}\) satisfies \(\mathbf{P}^{\{N_{J}\leq 1\}}(\mathbf{Q})=\mathbf{P}\). For a transition matrix \(\mathbf{P}\) that is embeddable, its Markov generator does not necessarily equal the \(J_{1}\)_-generator_. In fact, the solution to the conditional embedding problem is generally different from the solution to the (general) embedding problem if the latter exists. However, if the time-scale is chosen such that no more than one transition occurs in the system during the unit time interval, we might expect the Markov generator to be close to the \(J_{1}\)_-generator_ in some sense. A matrix \(\mathbf{P}\) that is embeddable satisfies the necessary condition for embeddability formulated by Goodman in [14]: \(\prod_{i=1}^{n}p_{ii}\geq\det\mathbf{P}>0\). Hence, all diagonal entries of such an embeddable matrix \(\mathbf{P}\) are non-zero. Consequently, a matrix \(\mathbf{P}\) with \(p_{ii}=0\), for some \(i\), is neither embeddable nor \(J_{1}\)-embeddable, according to corollary 1. For this reason, and without loss of generality, we examine in the remainder of the paper stochastic matrices \(\mathbf{P}=(p_{ij})\) satisfying \(p_{ii}>0\) for all \(i\). It turns out that the off-diagonal elements of a \(J_{1}\)-generator of \(\mathbf{P}\) are uniquely determined by its diagonal elements and the elements of \(\mathbf{P}\). To formulate this relationship, we introduce the function \(\rho:\mathbb{R}_{+}^{2}\rightarrow\mathbb{R}_{+}\), defined as follows: \[\rho(x,y)=\frac{\mathrm{e}}{\tau(1-\ln x,1-\ln y)}=\begin{cases}xy\frac{\ln x-\ln y}{x-y}&\text{if }x\neq y\\ x&\text{if }x=y.\end{cases} \tag{3}\] **Proposition 2**.: _Suppose \(\mathbf{P}=(p_{ij})\) is a \(n\times n\) stochastic matrix satisfying \(p_{ii}>0\) for all \(i\). If \(\mathbf{Q}=(q_{ij})\) is a \(J_{1}\)-generator of \(\mathbf{P}\), then_ \[q_{ij}=\frac{\rho(\theta_{i},\theta_{j})p_{ij}}{\theta_{i}p_{ii}},\quad\text{for all }i\neq j\text{,}\] _where \(\theta_{i}=\mathrm{e}^{1-q_{ii}}\) for all \(i\) and the function \(\rho:\mathbb{R}_{+}^{2}\rightarrow\mathbb{R}_{+}\) is given by (3)._ Proof.: Suppose \(\mathbf{Q}\) is a \(J_{1}\)-generator of \(\mathbf{P}=(p_{ij})\) and let \(i\neq j\).
Then, according to proposition 1, we have \[\frac{p_{ij}}{p_{ii}}=\frac{p_{ij}^{\{N_{\!J}\leq 1\}}}{p_{ii}^{\{N_{\!J}\leq 1\}}}=\frac{p_{ij}^{*}}{p_{ii}^{*}}=\frac{q_{ij}\tau(q_{ii},q_{jj})}{\tau(q_{ii},q_{ii})}.\] Consequently, since \(\tau(q_{ii},q_{ii})=\mathrm{e}^{q_{ii}}=\mathrm{e}/\theta_{i}\) and \(\tau(q_{ii},q_{jj})=\tau(1-\ln\theta_{i},1-\ln\theta_{j})=\mathrm{e}/\rho(\theta_{i},\theta_{j})\), we get \[q_{ij}=\frac{\tau(q_{ii},q_{ii})p_{ij}}{\tau(q_{ii},q_{jj})p_{ii}}=\frac{(\mathrm{e}/\theta_{i})p_{ij}}{(\mathrm{e}/\rho(\theta_{i},\theta_{j}))p_{ii}}=\frac{\rho(\theta_{i},\theta_{j})p_{ij}}{\theta_{i}p_{ii}},\quad\text{for all $i\neq j$.}\qed\] The result of proposition 2 yields a condition on the diagonal elements of any \(J_{1}\)-generator of \(\mathbf{P}\). **Proposition 3**.: _Suppose \(\mathbf{P}=(p_{ij})\) is a \(n\times n\) stochastic matrix satisfying \(p_{ii}>0\) for all \(i\). Then, if \(\mathbf{Q}=(q_{ij})\) is a \(J_{1}\)-generator of \(\mathbf{P}\), the \(n\)-tuple \((\mathrm{e}^{1-q_{11}},\ldots,\mathrm{e}^{1-q_{nn}})\) is a fixed point of the vector function \(\mathbf{T}=(T_{1},\ldots,T_{n}):\mathbb{R}_{+}^{n}\to\mathbb{R}_{+}^{n}\) defined as follows_ \[T_{i}(x_{1},\ldots,x_{n})=\exp W_{0}\Big{(}\frac{1}{p_{ii}}\sum_{j=1}^{n}p_{ij}\rho(x_{i},x_{j})\Big{)}\quad\text{for all $i$,} \tag{4}\] _where \(W_{0}\) denotes the principal branch of the Lambert W function and where the function \(\rho:\mathbb{R}_{+}^{2}\to\mathbb{R}_{+}\) is defined as in (3)._ Proof.: Denote \(\theta_{i}=\mathrm{e}^{1-q_{ii}}\) for all \(i\). Then \(\theta_{i}>0\) and \(q_{ii}=1-\ln\theta_{i}\) for all \(i\). Using proposition 2 and the fact that \(\mathbf{Q}\) is an intensity matrix, we then have \[-1+\ln\theta_{i}=-q_{ii}=\sum_{j:j\neq i}q_{ij}=\sum_{j:j\neq i}\frac{\rho(\theta_{i},\theta_{j})p_{ij}}{\theta_{i}p_{ii}},\quad\text{for all $i$,}\] which can be rewritten, using the fact that \(\rho(\theta_{i},\theta_{i})=\theta_{i}\), as \[\theta_{i}\ln\theta_{i}=\frac{1}{p_{ii}}\sum_{j=1}^{n}p_{ij}\rho(\theta_{i},\theta_{j})\quad\text{for all $i$.} \tag{5}\] Using the principal branch \(W_{0}\) of the Lambert W function (which is the multi-valued inverse of the function \(w\mapsto w\mathrm{e}^{w}\) (\(w\in\mathbb{C}\)), see [15]), we find that \[\ln\theta_{i}=W_{0}\Big{(}\frac{1}{p_{ii}}\sum_{j=1}^{n}p_{ij}\rho(\theta_{i},\theta_{j})\Big{)}\quad\text{for all $i$,}\] which proves the result. Proposition 2 and proposition 3 entail that a \(J_{1}\)-generator of \(\mathbf{P}\) defines a fixed point of the vector function \(\mathbf{T}\). The converse is also true, as is stated in the following proposition. **Proposition 4**.: _Let the stochastic \(n\times n\) matrix \(\mathbf{P}=(p_{ij})\) be such that \(p_{ii}>0\) for all \(i\). Suppose \(\theta=(\theta_{1},\ldots,\theta_{n})\) is a fixed point of the vector function \(\mathbf{T}:\mathbb{R}_{+}^{n}\to\mathbb{R}_{+}^{n}\), defined in (4). Then, the matrix \(\mathbf{Q}=(q_{ij})\) with elements_ \[q_{ii}=1-\ln\theta_{i},\qquad q_{ij}=\frac{\rho(\theta_{i},\theta_{j})p_{ij}}{\theta_{i}p_{ii}}\quad(i\neq j)\] _where \(\rho\) is defined by (3), is a \(J_{1}\)-generator of \(\mathbf{P}\)._ Proof.: Let \(\theta=(\theta_{1},\theta_{2},\ldots,\theta_{n})\in\mathbb{R}_{+}^{n}\) be a fixed point of \(\mathbf{T}\) and let the matrix \(\mathbf{Q}\) be constructed as stated above. We first show that \(\mathbf{Q}\) is an intensity matrix. By definition, all off-diagonal elements of \(\mathbf{Q}\) are non-negative.
Since \(T_{i}(\theta)=\theta_{i}\), we have \(\ln\theta_{i}=W_{0}(\frac{1}{p_{ii}}\sum_{j}p_{ij}\rho(\theta_{i},\theta_{j}))\) yielding \(\theta_{i}\ln\theta_{i}=\frac{1}{p_{ii}}\sum_{j}p_{ij}\rho(\theta_{i},\theta_{j})\), by definition of the Lambert \(W_{0}\)-function. Using \(q_{ij}=\frac{\rho(\theta_{i},\theta_{j})p_{ij}}{\theta_{i}p_{ii}}\) \((i\neq j)\) and \(\rho(\theta_{i},\theta_{i})=\theta_{i}\), we can rewrite this equation as \(\theta_{i}(1-q_{ii})=\theta_{i}+\frac{1}{p_{ii}}\sum_{j:j\neq i}q_{ij}\theta_{i}p_{ii}\). After simplification, we get \(q_{ii}=-\sum_{j:j\neq i}q_{ij}\). Thus \(\mathbf{Q}\) has zero row-sums. Consequently, \(\mathbf{Q}\) is an intensity matrix. It remains to be shown that \(p_{ij}^{\{N_{J}\leq 1\}}(\mathbf{Q})=p_{ij}\) for all \(i\) and \(j\). By proposition 1, we have \(p_{ij}^{\{N_{J}\leq 1\}}(\mathbf{Q})=\frac{p_{ij}^{*}}{\sum_{k}p_{ik}^{*}}\), where \(p_{ik}^{*}=q_{ik}\tau(q_{ii},q_{kk})\) if \(i\neq k\) and \(p_{ii}^{*}=\tau(q_{ii},q_{ii})\) and where the function \(\tau:\mathbb{R}^{2}\to\mathbb{R}\) is defined by (1). Using the definition of \(\mathbf{Q}\) and (3), we then have that \[p_{ik}^{*}=\frac{\rho(\theta_{i},\theta_{k})p_{ik}}{\theta_{i}p_{ii}}\tau(1-\ln\theta_{i},1-\ln\theta_{k})=\frac{\mathrm{e}\,p_{ik}}{\theta_{i}p_{ii}},\qquad i\neq k\] and \[p_{ii}^{*}=\tau(q_{ii},q_{ii})=\mathrm{e}^{q_{ii}}=\mathrm{e}^{1-\ln\theta_{i}}=\frac{\mathrm{e}}{\theta_{i}}.\] Thus, \(p_{ik}^{*}=\frac{\mathrm{e}\,p_{ik}}{\theta_{i}p_{ii}}\) for all \(i\) and \(k\), which yields \[\sum_{k}p_{ik}^{*}=\frac{\mathrm{e}}{\theta_{i}p_{ii}}\sum_{k}p_{ik}=\frac{\mathrm{e}}{\theta_{i}p_{ii}},\] since \(\mathbf{P}\) has all row sums equal to \(1\). Consequently, for all \(i\) and \(j\), we get \[p_{ij}^{\{N_{J}\leq 1\}}(\mathbf{Q})=\frac{p_{ij}^{*}}{\sum_{k}p_{ik}^{*}}=\,\frac{\mathrm{e}\,p_{ij}}{\theta_{i}p_{ii}}\bigg{/}\frac{\mathrm{e}}{\theta_{i}p_{ii}}=p_{ij},\] which concludes the proof. The following lemma states some properties of the vector function \(\mathbf{T}\), which will play a crucial role in its number of fixed points. **Lemma 1**.: _Let \(\mathbf{P}=(p_{ij})\) be a \(n\times n\) stochastic matrix. Let \(\Delta=\max\{p_{11},\ldots,p_{nn}\}\) and \(\delta=\min\{p_{11},\ldots,p_{nn}\}\). Suppose \(\delta>0\). Consider the vector function \(\mathbf{T}:\mathbb{R}_{+}^{n}\to\mathbb{R}_{+}^{n}\), defined as in (4), and the set_ \[\mathcal{X}=\{\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\,|\,\forall i:\mathrm{e}^{1/\Delta}\leq x_{i}\leq\mathrm{e}^{1/\delta}\}. \tag{6}\] _Then,_ 1. _every fixed point of_ \(\mathbf{T}\) _belongs to_ \(\mathcal{X}\)_._ 2. \(\mathbf{T}\) _maps_ \(\mathcal{X}\) _into_ \(\mathcal{X}\)_._
Proof.: 1. Let \(\theta=(\theta_{1},\ldots,\theta_{n})\in\mathbb{R}_{+}^{n}\) be a fixed point of \(\mathbf{T}\). Let \(m=\min\{\theta_{1},\ldots,\theta_{n}\}\) and \(M=\max\{\theta_{1},\ldots,\theta_{n}\}\). We shall prove that \(m\geq\mathrm{e}^{1/\Delta}\) and that \(M\leq\mathrm{e}^{1/\delta}\). Let \(r\) be an index such that \(\theta_{r}=m\). Then, by lemma 5(4), we have \(\rho(\theta_{r},\theta_{j})\geq m\) for all \(j\). Since \(T_{r}(\theta)=\theta_{r}\), we have \(\ln\theta_{r}=W_{0}(\frac{1}{p_{rr}}\sum_{j}p_{rj}\rho(\theta_{r},\theta_{j}))\) yielding \(\theta_{r}\ln\theta_{r}=\frac{1}{p_{rr}}\sum_{j}p_{rj}\rho(\theta_{r},\theta_{j})\) by definition of the Lambert \(W_{0}\)-function. Using the fact that \(\sum_{j=1}^{n}p_{rj}=1\), we then obtain \[m\ln m=\theta_{r}\ln\theta_{r}=\frac{1}{p_{rr}}\sum_{j=1}^{n}p_{rj}\rho(\theta_{r},\theta_{j})\geq\frac{m}{p_{rr}}\geq\frac{m}{\Delta},\] which implies \(\ln m\geq\frac{1}{\Delta}\) and thus \(m\geq\mathrm{e}^{1/\Delta}\). To prove the second inequality, let \(s\) be an index such that \(\theta_{s}=M\). Then, by lemma 5(4), we have \(\rho(\theta_{s},\theta_{j})\leq M\) for all \(j\). Hence, by \(T_{s}(\theta)=\theta_{s}\) and the unit row-sums property of \(\mathbf{P}\), \[M\ln M=\theta_{s}\ln\theta_{s}=\frac{1}{p_{ss}}\sum_{j=1}^{n}p_{sj}\rho(\theta_{s},\theta_{j})\leq\frac{M}{p_{ss}}\leq\frac{M}{\delta},\] which yields \(\ln M\leq\frac{1}{\delta}\) and thus \(M\leq\mathrm{e}^{1/\delta}\). 2. Let \((x_{1},\ldots,x_{n})\in\mathcal{X}\).
By lemma 5(4), \[\mathrm{e}^{1/\Delta}\leq\min\{x_{i},x_{j}\}\leq\rho(x_{i},x_{j})\leq\max\{x_{i},x_{j}\}\leq\mathrm{e}^{1/\delta}\quad\text{for all $i$ and $j$.}\] Then, since \(\mathbf{P}\) has unit row sums, we have for all \(i\) \[\frac{\mathrm{e}^{1/\Delta}}{\Delta}\leq\frac{\mathrm{e}^{1/\Delta}}{p_{ii}}\leq\frac{1}{p_{ii}}\sum_{j=1}^{n}p_{ij}\rho(x_{i},x_{j})\leq\frac{\mathrm{e}^{1/\delta}}{p_{ii}}\leq\frac{\mathrm{e}^{1/\delta}}{\delta}.\] Now, \(W_{0}\) and \(\exp\) are increasing functions, therefore \[\exp W_{0}(\tfrac{1}{\Delta}\mathrm{e}^{1/\Delta})\leq T_{i}(x_{1},\ldots,x_{n})\leq\exp W_{0}(\tfrac{1}{\delta}\mathrm{e}^{1/\delta})\quad\text{for all $i$.}\] Finally, using the property \(W_{0}(x\mathrm{e}^{x})=x\) for \(x>0\), we conclude the proof. Lemma 1 entails that the diagonal elements of \(\mathbf{P}\) bound the diagonal elements of the \(J_{1}\)-generators of \(\mathbf{P}\). **Corollary 2**.: _Let \(\mathbf{P}=(p_{ij})\) be a \(n\times n\) stochastic matrix. Let \(\Delta=\max\{p_{11},\ldots,p_{nn}\}\) and \(\delta=\min\{p_{11},\ldots,p_{nn}\}\). Suppose \(\delta>0\). Then, if \(\mathbf{Q}=(q_{ij})\) is a \(J_{1}\)-generator of \(\mathbf{P}\), we have_ \[1-\frac{1}{\delta}\leq q_{ii}\leq 1-\frac{1}{\Delta}\quad\text{for all $i$.}\] Proof.: If \(\mathbf{Q}=(q_{ij})\) is a \(J_{1}\)-generator of \(\mathbf{P}\), we have by proposition 3 that the vector \((\theta_{1},\dots,\theta_{n})\), where \(\theta_{i}=\mathrm{e}^{1-q_{ii}}\) for all \(i\), is a fixed point of \(\mathbf{T}\). Applying lemma 1(1), we have \(\mathrm{e}^{1/\Delta}\leq\theta_{i}\leq\mathrm{e}^{1/\delta}\) for all \(i\), from which the result follows readily. By combining propositions 2, 3 and 4, it turns out that there is a one-to-one correspondence between the possible \(J_{1}\)-generators of \(\mathbf{P}\) and the fixed points of the vector function \(\mathbf{T}\). Regarding these fixed points, we now prove the following important result. **Theorem 1**.: _Let the stochastic \(n\times n\) matrix \(\mathbf{P}=(p_{ij})\) be such that \(p_{ii}>0\) for all \(i\). Then, the vector function \(\mathbf{T}:\mathbb{R}_{+}^{n}\to\mathbb{R}_{+}^{n}\), defined as in (4), has a unique fixed point._ Proof.: From lemma 1(2), we know that \(\mathbf{T}\) maps the compact convex set \(\mathcal{X}\subset\mathbb{R}_{+}^{n}\), defined by (6), into itself. Also, \(\mathbf{T}\) is continuous as the function \(\rho\), defined by (3), is continuous (lemma 5(2)) and continuity is preserved by linear combination and composition of continuous functions. Hence, by the Brouwer fixed-point theorem, \(\mathbf{T}\) has a fixed point. By definition of \(\mathbf{T}\), this fixed point must have all positive components. We now show that the function \(\mathbf{g}:\mathbb{R}_{+}^{n}\to\mathbb{R}^{n}\) defined as \(\mathbf{g}=\mathbf{T}-\mathbf{Id}\), where \(\mathbf{Id}:\mathbb{R}_{+}^{n}\to\mathbb{R}_{+}^{n}\) is the identity mapping, satisfies all conditions of Theorem 3.1 in [16]. This theorem states sufficient conditions in order for the function \(\mathbf{g}\) to have at most one vector \(\mathbf{x}\in\mathbb{R}_{+}^{n}\) with \(\mathbf{g}(\mathbf{x})=\mathbf{o}\). These conditions are (a) \(\mathbf{g}\) is quasi-increasing and (b) \(\mathbf{g}\) is strictly \(R\)-concave. Both (a) and (b) are proven in this paper; see lemma 7. So, we have established that \(\mathbf{T}\) has exactly one fixed point. We are now in a position to formulate and prove our main theorem.
**Theorem 2**.: _Let the stochastic \(n\times n\) matrix \(\mathbf{P}=(p_{ij})\) be such that \(p_{ii}>0\) for all \(i\). Then, \(\mathbf{P}\) has exactly one \(J_{1}\)-generator. Moreover, this \(J_{1}\)-generator \(\mathbf{Q}=(q_{ij})\) has elements given by_ \[q_{ii}=1-\ln\theta_{i},\qquad q_{ij}=\frac{\rho(\theta_{i},\theta_{j})p_{ij}} {\theta_{i}p_{ii}}\quad(i\neq j) \tag{7}\] _where the scalar function \(\rho:\mathbb{R}_{+}^{2}\to\mathbb{R}_{+}\) is defined by (3) and \((\theta_{1},\dots,\theta_{n})\) is the unique fixed point of the vector function \(\mathbf{T}:\mathbb{R}_{+}^{n}\to\mathbb{R}_{+}^{n}\) defined by (4)._ Proof.: We first prove that \(\mathbf{P}\) has a \(J_{1}\)-generator. By theorem 1, the vector function \(\mathbf{T}\) has a unique fixed point \(\theta=(\theta_{1},\theta_{2},\dots,\theta_{n})\in\mathbb{R}_{+}^{n}\). Starting from \(\theta\), construct the matrix \(\mathbf{Q}=(q_{ij})\) according to (7). Then, \(\mathbf{Q}\) is a \(J_{1}\)-generator of \(\mathbf{P}\), by proposition 4. To prove the uniqueness of the \(J_{1}\)-generator, suppose that \(\mathbf{P}\) has \(J_{1}\)-generators \(\mathbf{R}=(r_{ij})\) and \(\mathbf{S}=(s_{ij})\). Then, by proposition 3, the vectors \(\theta_{\mathbf{R}}=(\mathrm{e}^{1-r_{11}},\dots,\mathrm{e}^{1-r_{nn}})\) and \(\theta_{\mathbf{S}}=(\mathrm{e}^{1-s_{11}},\dots,\mathrm{e}^{1-s_{nn}})\) are fixed points of the vector function \(\mathbf{T}\). By theorem 1, \(\theta_{\mathbf{R}}=\theta_{\mathbf{S}}\). Hence, by proposition 2, we must have \(\mathbf{R}=\mathbf{S}\). Finally, the fact that a \(J_{1}\)-generator assumes the form 7 is a consequence of proposition 2 and proposition 3. The proof is now complete. According to theorem 2, the unique \(J_{1}\)-generator is completely determined by the fixed point of the function \(\mathbf{T}\). Besides, under the conditions of lemma 9, the function \(\mathbf{T}\) is a contraction. Hence, under these conditions, an algorithm based on the fixed point iteration approach guarantees an appropriate estimation for the fixed point \((\theta_{1},\ldots,\theta_{n})\) as outcome. For a transition matrix \(\mathbf{P}\) with identical positive diagonal elements, a closed-form formula for its unique \(J_{1}\)-generator can be given by virtue of corollary 2. This type of transition matrices appear in e.g. models of DNA sequence evolution [17]. **Corollary 3**.: _Suppose \(\mathbf{P}=(p_{ij})\) is a \(n\times n\) stochastic matrix satisfying \(p_{ii}=p>0\) for all \(i\). Then, its unique \(J_{1}\)-generator \(\mathbf{Q}\) is given by \(\mathbf{Q}=\frac{1}{p}(\mathbf{P}-\mathbf{I})\), where \(\mathbf{I}\) is the \(n\times n\) identity matrix._ Proof.: Let \(\mathbf{Q}=(q_{ij})\) be a \(J_{1}\)-generator of \(\mathbf{P}\). It follows from corollary 2 that \(q_{ii}=1-1/p\) for all \(i\). Moreover, if \(i\neq j\), theorem 2 and equation (3) imply that \[q_{ij}=\frac{\rho(\mathrm{e}^{1-q_{ii}},\mathrm{e}^{1-q_{ii}})p_{ij}}{\mathrm{e }^{1-q_{ii}}p_{ii}}=\frac{\rho(\mathrm{e}^{1/p},\mathrm{e}^{1/p})p_{ij}}{ \mathrm{e}^{1/p}p}=\frac{p_{ij}}{p}.\] In summary, we have \[q_{ii}=1-\frac{1}{p}=\frac{1}{p}(p_{ii}-1),\qquad q_{ij}=\frac{1}{p}p_{ij} \quad(i\neq j),\] concluding the proof. ## 4 Illustrations The aim of this section is twofold, namely (1) to illustrate the conditional embedding approach for some concrete transition matrices and (2) to compare the new approach with alternative low jump frequency approaches for embedding problems. 
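As a first illustration of point (1), the sketch below computes the \(J_{1}\)-generator of Theorem 2 by fixed-point iteration on the map \(\mathbf{T}\) of (4), using Corollary 3 as a sanity check. It is our own illustrative implementation (relying on scipy's Lambert \(W\)), not code from the paper, and it assumes the iteration converges for the matrix at hand, as under the contraction conditions mentioned above.

```python
# Sketch (ours, not the authors' code): compute the J_1-generator of Theorem 2
# by fixed-point iteration on the map T of eq. (4). Assumes convergence for the
# given P (cf. the contraction conditions referred to above).
import numpy as np
from scipy.special import lambertw

def rho(x, y):
    """rho(x, y) = x*y*(ln x - ln y)/(x - y), with limiting value x when x == y (eq. (3))."""
    if np.isclose(x, y):
        return x
    return x * y * (np.log(x) - np.log(y)) / (x - y)

def j1_generator(P, tol=1e-12, max_iter=10_000):
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    theta = np.full(n, np.e)  # any positive starting point; e is a convenient choice
    for _ in range(max_iter):
        # T_i(theta) = exp( W_0( (1/p_ii) * sum_j p_ij * rho(theta_i, theta_j) ) )
        new_theta = np.array([
            np.exp(lambertw(sum(P[i, j] * rho(theta[i], theta[j]) for j in range(n)) / P[i, i]).real)
            for i in range(n)
        ])
        converged = np.max(np.abs(new_theta - theta)) < tol
        theta = new_theta
        if converged:
            break
    # Build Q from the fixed point via eq. (7)
    Q = np.array([[1 - np.log(theta[i]) if i == j
                   else rho(theta[i], theta[j]) * P[i, j] / (theta[i] * P[i, i])
                   for j in range(n)] for i in range(n)])
    return Q

# Sanity check against Corollary 3: equal diagonal entries p imply Q = (P - I)/p
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.7, 0.2],
              [0.2, 0.1, 0.7]])
print(np.allclose(j1_generator(P), (P - np.eye(3)) / 0.7))  # expected: True
```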
In [1] Jarrow, Lando and Turnbull found an approximation for the Markov generator in closed form under the model assumption that the probability of more than one jump per year is negligible. Their Markov generator \(\mathbf{Q}_{\mathrm{JLT}}=(q_{ij}^{\mathrm{JLT}})\) follows from the model assumption \[\mathbb{P}(H_{i}\geq 1\,|\,X_{0}=i)=p_{ii}\quad,\quad\mathbb{P}(H_{i}<1,X_{H_{i}}=j\,|\,X_{0}=i)=p_{ij}\quad(i\neq j), \tag{8}\] where \(H_{i}\) is the holding time in state \(i\). This system of equations in the unknowns \(q_{ij}^{\mathrm{JLT}}\) can be solved explicitly to obtain \[q_{ii}^{\mathrm{JLT}}=\ln p_{ii}\quad,\quad q_{ij}^{\mathrm{JLT}}=\frac{p_{ij}\ln p_{ii}}{p_{ii}-1}\quad(i\neq j). \tag{9}\] In this paper we study the embedding problem under the assumption \[\mathbb{P}[X_{1}=j\,|\,X_{0}=i,N_{J}\leq 1]=p_{ij}\quad\text{for all $i$ and $j$}. \tag{10}\] In contrast to assumption (8), assumption (10) is about the data and not about the process. In fact, this paper does not preclude the model from having multiple transitions between \(t=0\) and \(t=1\). It does, however, suppose that the data at hand are realisations of the underlying process with no more than one transition between \(t=0\) and \(t=1\). Hence, the estimated one-step transition matrix is based solely on observations that did not jump more than once between \(t=0\) and \(t=1\). Both approaches have the merit of making the identification phase as well as regularization redundant in the embedding problem. To avoid confusion, let us denote for the transition matrix \(\mathbf{P}\), Jarrow's generator as \(\mathbf{Q}_{\mathrm{JLT}}\) and the \(J_{1}\)-generator as \(\mathbf{Q}_{J_{1}}\). An interesting question then emerges: which of the matrices \(\exp(\mathbf{Q}_{\mathrm{JLT}})\) and \(\exp(\mathbf{Q}_{J_{1}})\) is the best approximation to \(\mathbf{P}\)? The following section compares both approaches on some interesting illustrations. As in [4], throughout this paper the maximum absolute row sum is used as matrix norm, i.e. for an \(n\times n\) matrix \(\mathbf{M}=(m_{ij})\), \(||\mathbf{M}||_{\infty}=\max_{1\leq i\leq n}\sum_{j=1}^{n}|m_{ij}|\). ### Credit rating transition matrix Consider the empirical transition matrix \[\mathbf{P}=\begin{bmatrix}0.8910&0.0963&0.0078&0.0019&0.0030&0.0000&0.0000&0.0000\\ 0.0086&0.9010&0.0747&0.0099&0.0029&0.0029&0.0000&0.0000\\ 0.0009&0.0291&0.8896&0.0649&0.0101&0.0045&0.0000&0.0009\\ 0.0006&0.0043&0.0656&0.8428&0.0644&0.0160&0.0018&0.0045\\ 0.0004&0.0022&0.0079&0.0719&0.7765&0.1043&0.0127&0.0241\\ 0.0000&0.0019&0.0031&0.0066&0.0517&0.8247&0.0435&0.0685\\ 0.0000&0.0000&0.0116&0.0116&0.0203&0.0754&0.6492&0.2319\\ 0.0000&0.0000&0.0000&0.0000&0.0000&0.0000&0.0000&1.0000\\ \end{bmatrix}\] based1 on Table 3 in Jarrow et al. [1, p. 506]. Footnote 1: We have adjusted five entries on the main diagonal to ensure all rows sum up to one. For this matrix, it can be proven that the vector function \(\mathbf{T}:\mathbb{R}_{+}^{8}\rightarrow\mathbb{R}_{+}^{8}\), defined by (4), is a contraction mapping (according to lemma 9). Using fixed-point iteration and (7), we find that the unique \(J_{1}\)-generator, truncated to 4 decimal places, is:
\[\mathbf{Q}_{J_{1}}=\begin{bmatrix}-0.1221&0.1075&0.0088&0.0022&0.0036&0.0000&0.0000&0.0000\\ 0.0096&-0.1114&0.0836&0.0114&0.0035&0.0034&0.0000&0.0000\\ 0.0010&0.0325&-0.1271&0.0752&0.0122&0.0053&0.0000&0.0009\\ 0.0007&0.0049&0.0755&-0.1874&0.0798&0.0192&0.0024&0.0049\\ 0.0005&0.0026&0.0094&0.0886&-0.2759&0.1301&0.0178&0.0270\\ 0.0000&0.0022&0.0036&0.0079&0.0647&-0.2121&0.0592&0.0746\\ 0.0000&0.0000&0.0152&0.0157&0.0287&0.1031&-0.4460&0.2834\\ 0.0000&0.0000&0.0000&0.0000&0.0000&0.0000&0.0000&0.0000\\ \end{bmatrix}\] In contrast, the rate matrix \(\mathbf{Q}_{\mathrm{JLT}}\), published in Jarrow et al. [1] and defined by equation (9), is \[\mathbf{Q}_{\mathrm{JLT}}=\begin{bmatrix}-0.1154&0.1020&0.0083&0.0020&0.0032&0.0000&0.0000&0.0000\\ 0.0091&-0.1043&0.0787&0.0104&0.0031&0.0031&0.0000&0.0000\\ 0.0010&0.0308&-0.1170&0.0688&0.0107&0.0048&0.0000&0.0010\\ 0.0007&0.0047&0.0714&-0.1710&0.0701&0.0174&0.0020&0.0049\\ 0.0005&0.0025&0.0089&0.0814&-0.2530&0.1180&0.0144&0.0273\\ 0.0000&0.0021&0.0034&0.0073&0.0568&-0.1927&0.0478&0.0753\\ 0.0000&0.0000&0.0143&0.0143&0.0250&0.0929&-0.4320&0.2856\\ 0.0000&0.0000&0.0000&0.0000&0.0000&0.0000&0.0000&0.0000\end{bmatrix}\] Notice that the elements on the main diagonal of \(\mathbf{Q}_{\mathrm{JLT}}\) are in absolute value smaller than their counterparts in the \(J_{1}\)-generator. This property remains true for all matrices \(\mathbf{P}\) having non-zero diagonal elements and follows from lemma 8. It can be shown that \(||\mathbf{P}-\exp\mathbf{Q}_{J_{1}}||_{\infty}<||\mathbf{P}-\exp\mathbf{Q}_{\mathrm{JLT}}||_{\infty}\). ### Transition matrices with same diagonal entries In case the stochastic matrix \(\mathbf{P}\) has coinciding diagonal elements equal to \(p>0\), according to corollary 3 a closed-form solution to the equation \(\mathbf{P}^{\{N_{J}\leq 1\}}=\mathbf{P}\) exists: \(\mathbf{Q}_{J_{1}}=\frac{1}{p}(\mathbf{P}-\mathbf{I})\), where \(\mathbf{I}\) is the \(n\times n\) identity matrix. Besides, Jarrow's solution (see [1, eqs. 30a & 30b]) is \(\mathbf{Q}_{\mathrm{JLT}}=\frac{\ln p}{p-1}(\mathbf{P}-\mathbf{I})\). It is interesting to investigate to what extent \(\exp\mathbf{Q}_{J_{1}}\) and \(\exp\mathbf{Q}_{\mathrm{JLT}}\) differ from \(\mathbf{P}\). Note that both matrices \(\mathbf{Q}_{J_{1}}\) and \(\mathbf{Q}_{\mathrm{JLT}}\) are of the form \(\mathbf{Q}(k):=k(\mathbf{P}-\mathbf{I})\) with \(k\) constant, and equal to \(\frac{1}{p}\) and \(\frac{\ln p}{p-1}\) respectively. For the \(2\times 2\) case \(\mathbf{P}=\begin{bmatrix}p&1-p\\ 1-p&p\end{bmatrix}\) it is known that \(\mathbf{P}\) is embeddable, with unique generator \(\mathbf{Q}=\frac{\ln\left(2p-1\right)}{2(p-1)}(\mathbf{P}-\mathbf{I})\), if and only if \(p>\frac{1}{2}\) ([7]). For transition matrices \(\mathbf{P}\) with \(p\leq\frac{1}{2}\) the conditional embedding approach results in a unique \(J_{1}\)-generator \(\mathbf{Q}_{J_{1}}\) for which \(\exp\mathbf{Q}_{J_{1}}\) is a better approximation to \(\mathbf{P}\) than \(\exp\mathbf{Q}_{\mathrm{JLT}}\): **Lemma 2**.: _Let \(\mathbf{P}=\begin{bmatrix}p&1-p\\ 1-p&p\end{bmatrix}\), where \(0<p<1\) and define \(\mathbf{Q}(k)=k(\mathbf{P}-\mathbf{I})\), where \(\mathbf{I}\) is the \(2\times 2\) identity matrix.
Then,_ \[||\mathbf{P}-\exp\mathbf{Q}(\tfrac{1}{p})||_{\infty}<||\mathbf{P}-\exp\mathbf{Q}(\tfrac{\ln p}{p-1})||_{\infty}.\] Proof.: It can be shown that \[\mathbf{P}-\exp\mathbf{Q}(k)=\left(\frac{1}{2}+\frac{\mathrm{e}^{2k(p-1)}}{2}-p\right)\begin{bmatrix}-1&1\\ 1&-1\end{bmatrix},\] so that we have \[||\mathbf{P}-\exp\mathbf{Q}(k)||_{\infty}=|1+\mathrm{e}^{2k(p-1)}-2p|.\] Let \(f(k)=1+\mathrm{e}^{2k(p-1)}-2p\). Since \(p<1\), the function \(f\) is strictly decreasing. Also, since \(\ln x<x-1\) for all \(x>0\) and \(x\neq 1\), we have that \(\ln\frac{1}{p}<\frac{1}{p}-1=\frac{1-p}{p}\), yielding \(\frac{\ln p}{p-1}<\frac{1}{p}\). Hence, \(f(\frac{\ln p}{p-1})>f(\frac{1}{p})\). Furthermore, \(f(\frac{1}{p})>0\) (lemma 10, 1.). Consequently, \[||\mathbf{P}-\exp\mathbf{Q}(\tfrac{1}{p})||_{\infty}=|f(\tfrac{1}{p})|=f(\tfrac{1}{p})<f(\tfrac{\ln p}{p-1})=|f(\tfrac{\ln p}{p-1})|=||\mathbf{P}-\exp\mathbf{Q}(\tfrac{\ln p}{p-1})||_{\infty}.\qed\] Hence, for all \((2\times 2)\) transition matrices with the same diagonal entries, it is proven that \(||\mathbf{P}-\exp\mathbf{Q}_{J_{1}}||_{\infty}<||\mathbf{P}-\exp\mathbf{Q}_{\mathrm{JLT}}||_{\infty}\). For the \((3\times 3)\) case, we investigate the transition matrices introduced in lemma 3. Those transition matrices are not embeddable, since \(p_{13}=0\) but \(p_{13}^{(2)}=(1-p)^{2}/2>0\) [18, Theorem 5, p. 126]. Since no generator exists, it is worth investigating the \(J_{1}\)-generator and Jarrow's generator. Lemma 3 proves that \(\exp\mathbf{Q}_{J_{1}}\) better approximates \(\mathbf{P}\) than \(\exp\mathbf{Q}_{\mathrm{JLT}}\), i.e. \(||\mathbf{P}-\exp\mathbf{Q}_{J_{1}}||_{\infty}<||\mathbf{P}-\exp\mathbf{Q}_{\mathrm{JLT}}||_{\infty}\). **Lemma 3**.: _Let \(\mathbf{P}=\begin{bmatrix}p&1-p&0\\ \frac{1}{2}(1-p)&p&\frac{1}{2}(1-p)\\ 0&1-p&p\end{bmatrix}\), where \(0<p<1\) and define \(\mathbf{Q}(k)=k(\mathbf{P}-\mathbf{I})\), where \(\mathbf{I}\) is the \(3\times 3\) identity matrix. Then,_ \[||\mathbf{P}-\exp\mathbf{Q}(\tfrac{1}{p})||_{\infty}<||\mathbf{P}-\exp\mathbf{Q}(\tfrac{\ln p}{p-1})||_{\infty}.\] Proof.: It can be shown (e.g. using Sylvester's theorem for computing functions of a matrix) that \[\mathbf{P}-\exp\mathbf{Q}(k)=\begin{bmatrix}-\alpha(k)&\beta(k)&\alpha(k)-\beta(k)\\ \frac{1}{2}\beta(k)&-\beta(k)&\frac{1}{2}\beta(k)\\ \alpha(k)-\beta(k)&\beta(k)&-\alpha(k)\end{bmatrix}\] where \[\alpha(k)=\tfrac{1}{4}\,\mathrm{e}^{2k(p-1)}+\tfrac{1}{2}\,\mathrm{e}^{k(p-1)}+\tfrac{1}{4}-p\quad\text{and}\quad\beta(k)=\tfrac{1}{2}\,\mathrm{e}^{2k(p-1)}+\tfrac{1}{2}-p.\] It holds that \[\alpha(k)-\beta(k)=-\tfrac{1}{4}(1-\mathrm{e}^{k(p-1)})^{2}\leq 0, \tag{11}\] whence, \[||\mathbf{P}-\exp\mathbf{Q}(k)||_{\infty} =|\alpha(k)|+|\beta(k)|+|\alpha(k)-\beta(k)|\] \[=|\alpha(k)|+|\beta(k)|+\beta(k)-\alpha(k). \tag{12}\] (The maximum row sum is indeed attained in the first and third rows, since \(|\alpha(k)|+|\alpha(k)-\beta(k)|\geq|\beta(k)|\) by the triangle inequality.) Note that \(\alpha\) and \(\beta\) are both strictly decreasing in \(k\) since \(p<1\). Also, we have \[\alpha(\tfrac{\ln p}{p-1}) =\tfrac{1}{4}(1-p)^{2}>0, \tag{13}\] \[\beta(\tfrac{\ln p}{p-1}) =\tfrac{1}{2}(1-p)^{2}>0, \tag{14}\] \[\beta(\tfrac{1}{p}) =\tfrac{1}{2}(\mathrm{e}^{2-2/p}+1-2p)>0\quad\text{by lemma 10 (1.)}. \tag{15}\] Hence, by (12), (13) and (14), \[||{\bf P}-\exp{\bf Q}(\tfrac{\ln p}{p-1})||_{\infty}=\alpha(\tfrac{\ln p}{p-1})+\beta(\tfrac{\ln p}{p-1})+\beta(\tfrac{\ln p}{p-1})-\alpha(\tfrac{\ln p}{p-1})=2\beta(\tfrac{\ln p}{p-1}).
\tag{16}\] In case \(\alpha(\tfrac{1}{p})\geq 0\), then by (12) and (15), \[||{\bf P}-\exp{\bf Q}(\tfrac{1}{p})||_{\infty}=\alpha(\tfrac{1}{p})+\beta( \tfrac{1}{p})+\beta(\tfrac{1}{p})-\alpha(\tfrac{1}{p})=2\beta(\tfrac{1}{p}),\] which yields \(||{\bf P}-\exp{\bf Q}(\tfrac{1}{p})||_{\infty}<||{\bf P}-\exp{\bf Q}(\tfrac{ \ln p}{p-1})||_{\infty}\), using (16) and because \(\tfrac{\ln p}{p-1}<\tfrac{1}{p}\) and \(\beta\) is strictly decreasing. In case \(\alpha(\tfrac{1}{p})<0\), we have by (12) and (11), \[||{\bf P}-\exp{\bf Q}(\tfrac{1}{p})||_{\infty}=-\alpha(\tfrac{1}{p})+\beta( \tfrac{1}{p})+\beta(\tfrac{1}{p})-\alpha(\tfrac{1}{p})=\tfrac{1}{2}(1-{\rm e }^{1-1/p})^{2}.\] By lemma 10 (2.) and the assumption \(0<p<1\), it holds that \(0<1-{\rm e}^{1-1/p}<\tfrac{4}{3}(1-p)\). Hence, using (14) and (16), \[||{\bf P}-\exp{\bf Q}(\tfrac{1}{p})||_{\infty}<\tfrac{8}{9}(1-p)^{2}<(1-p)^{2 }=2\beta(\tfrac{\ln p}{p-1})=||{\bf P}-\exp{\bf Q}(\tfrac{\ln p}{p-1})||_{ \infty}.\] In either case, we have proven the result. ## Appendix: Lemma's and proofs **Lemma 4**.: _The function \(f\) with \(f(t)={\rm e}^{W_{0}(t)}\), \(t\geq 0\), is strictly concave._ Proof.: By taking second order derivatives and since \(W_{0}^{\prime}(t)=\frac{W_{0}(t)}{t(1+W_{0}(t))}\) and \(W_{0}^{\prime\prime}(t)=\frac{-2W_{0}(t)^{2}-W_{0}(t)^{3}}{t^{2}(1+W_{0}(t))^ {3}}\) (see e.g. [19]), we find \[f^{\prime\prime}(t)=f(t)\big{(}W_{0}^{\prime}(t)^{2}+W_{0}^{\prime\prime}(t) \big{)}=f(t)\frac{-W_{0}(t)^{2}}{t^{2}(1+W_{0}(t))^{3}},\] which is negative for all \(t>0\) since \(W_{0}(t)>0\) if \(t>0\). Let \({\bf o}=(0,\ldots,0)\in\mathbb{R}^{n}\). In what follows, we consider the partial ordering of \(\mathbb{R}^{n}\) induced by componentwise ordering. For example, if \({\bf x}=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\), and \({\bf y}=(y_{1},\ldots,y_{n})\in\mathbb{R}^{n}\), we write \({\bf x}\preceq{\bf y}\) if and only if \(x_{i}\leq y_{i}\) for all \(i\). Likewise, we write \({\bf x}\succ{\bf o}\) if and only if \(x_{i}>0\) for all \(i\). **Lemma 5**.: _The function \(\rho:\mathbb{R}^{2}_{+}\to\mathbb{R}_{+}\), defined as in (3), satisfies the following properties:_ 1. _It is linearly homogeneous, i.e._ \(\rho(\lambda{\bf x})=\lambda\rho({\bf x})\) _for all_ \({\bf x}\in\mathbb{R}^{2}_{+}\) _and_ \(\lambda>0\)_._ 2. _It is continuous on_ \(\mathbb{R}^{2}_{+}\)_._ 3. _It is increasing, i.e._ \(\rho({\bf x})\leq\rho({\bf y})\) _for all_ \({\bf x},{\bf y}\in\mathbb{R}^{2}_{+}\) _with_ \({\bf x}\preceq{\bf y}\) _._ 4. \(\min\{x,y\}\leq\rho(x,y)\leq\max\{x,y\}\) _for all_ \((x,y)\in\mathbb{R}_{+}^{2}\)_._ Proof.: Let \(\mathbf{u}=(u_{1},u_{2})\in\mathbb{R}_{+}^{2}\). It is easy to see that \(\rho(\mathbf{u})=u_{2}f(u_{1}/u_{2})=u_{1}f(u_{2}/u_{1})\), where \(f\) is the continuous function defined by \[f(t)=\begin{cases}\frac{t\ln t}{t-1}&\text{if $t>0$ and $t\neq 1$}\\ 1&\text{if $t=1$.}\end{cases}\] 1. Follows directly from the above. 2. A direct consequence of the above. 3. By standard calculus, we have \[f^{\prime}(t)=\begin{cases}\frac{t-1-\ln t}{(t-1)^{2}}&\text{if $t>0$ and $t\neq 1$}\\ 1/2&\text{if $t=1$,}\end{cases}\] hence \(f\) is (strictly) increasing on the positive real half-line since \(\ln t<t-1\) for all \(t>0\) with \(t\neq 1\). Now, take \(\mathbf{x}=(x_{1},x_{2})\) and \(\mathbf{y}=(y_{1},y_{2})\), so that \(\mathbf{o}\prec\mathbf{x}\preceq\mathbf{y}\). Then, as \(f\) is increasing, \(\rho(\mathbf{x})=x_{2}f(x_{1}/x_{2})\leq x_{2}f(y_{1}/x_{2})=y_{1}f(x_{2}/y_{ 1})\leq y_{1}f(y_{2}/y_{1})=\rho(\mathbf{y})\). 4. 
Consider the case \(0<x\leq y\). Since \(f\) is increasing on \(\mathbb{R}_{+}\), we have \[x=xf(1)\leq xf(y/x)=\rho(x,y)=yf(x/y)\leq yf(1)=y.\] The result for the case \(0<y\leq x\) is proven analogously. **Lemma 6**.: _The vector function \(\mathbf{T}:\mathbb{R}_{+}^{n}\to\mathbb{R}_{+}^{n}\) from proposition 3 is increasing, i.e. \(\mathbf{T}(\mathbf{x})\preceq\mathbf{T}(\mathbf{y})\) for all \(\mathbf{x},\mathbf{y}\in\mathbb{R}_{+}^{n}\) with \(\mathbf{x}\preceq\mathbf{y}\)._ Proof.: Let \(i\in\{1,\dots,n\}\) and take \(\mathbf{x},\mathbf{y}\in\mathbb{R}_{+}^{n}\) so that \(\mathbf{x}\preceq\mathbf{y}\). Denote, for \(\mathbf{u}=(u_{1},\dots,u_{n})\), \(F_{i}(\mathbf{u})=\frac{1}{p_{ii}}\left(\sum_{j=1}^{n}p_{ij}\rho(u_{i},u_{j})\right)\). By lemma 5(3), we have \(F_{i}(\mathbf{x})\leq F_{i}(\mathbf{y})\), which yields \(T_{i}(\mathbf{x})=\mathrm{e}^{W_{0}(F_{i}(\mathbf{x}))}\leq\mathrm{e}^{W_{0} (F_{i}(\mathbf{y}))}=T_{i}(\mathbf{y})\) since the principal branch \(W_{0}\) of the Lambert W function is increasing (see e.g. [15]). **Lemma 7**.: _Let the vector function \(\mathbf{g}:\mathbb{R}_{+}^{n}\to\mathbb{R}^{n}\) given by \(\mathbf{g}(\mathbf{x})=\mathbf{T}(\mathbf{x})-\mathbf{x}\) for all \(\mathbf{x}\succ\mathbf{o}\), where the vector function \(\mathbf{T}:\mathbb{R}_{+}^{n}\to\mathbb{R}_{+}^{n}\) is defined as in (4). Then, \(\mathbf{g}=(g_{1},\dots,g_{n})\) is_ 1. quasi-increasing_, i.e. for all_ \(i\) _holds that_ \(\mathbf{o}\prec\mathbf{x}\preceq\mathbf{y}\) _and_ \(x_{i}=y_{i}\) _imply_ \(g_{i}(\mathbf{x})\leq g_{i}(\mathbf{y})\)_,_ 2. strictly \(R\)-concave_, i.e. if_ \(\mathbf{x}\succ\mathbf{o}\) _and_ \(\mathbf{g}(\mathbf{x})=\mathbf{o}\) _and_ \(0<\lambda<1\)_, then_ \(\mathbf{g}(\lambda\mathbf{x})\succ\mathbf{o}\)_._ Proof.: 1. Take \(i\in\{1,\dots,n\}\) and suppose \(\mathbf{o}\prec\mathbf{x}\preceq\mathbf{y}\) with \(x_{i}=y_{i}\). Then, \(g_{i}(\mathbf{x})\leq g_{i}(\mathbf{y})\), since \(x_{i}=y_{i}\) and \(T_{i}(\mathbf{x})\leq T_{i}(\mathbf{y})\) (lemma 6). 2. Let \(\mathbf{x}=(x_{1},\ldots,x_{n})\succ\mathbf{o}\) so that \(\mathbf{g}(\mathbf{x})=\mathbf{o}\). Let \(0<\lambda<1\) and take \(i\in\{1,\ldots,n\}\). Denote \(F_{i}(\mathbf{x})=\frac{1}{p_{ii}}\left(\sum_{j=1}^{n}p_{ij}\rho(x_{i},x_{j})\right)\). By lemma 5(1), \(F_{i}(\lambda\mathbf{x})=\lambda F_{i}(\mathbf{x})\). Hence, \[T_{i}(\lambda\mathbf{x})=\mathrm{e}^{W_{0}(F_{i}(\lambda\mathbf{x}))}= \mathrm{e}^{W_{0}(\lambda F_{i}(\mathbf{x}))}>\lambda\mathrm{e}^{W_{0}(F_{i}( \mathbf{x}))}=\lambda T_{i}(\mathbf{x}),\] where the inequality follows from the fact that the function \(t\mapsto\mathrm{e}^{W_{0}(t)}\) is strictly concave (lemma 4) and \(W_{0}(0)=0\). Consequently, for all \(i\), \[g_{i}(\lambda\mathbf{x})=T_{i}(\lambda\mathbf{x})-\lambda x_{i}>\lambda(T_{i} (\mathbf{x})-x_{i})=\lambda g_{i}(\mathbf{x})=0,\] i.e. \(\mathbf{g}(\lambda\mathbf{x})\succ\mathbf{o}\). **Lemma 8**.: _Let the stochastic \(n\times n\) matrix \(\mathbf{P}=(p_{ij})\) be such that \(p_{ii}>0\) for all \(i\). Let \(\mathbf{Q}=(q_{ij})\) be the \(J_{1}\)-generator of \(\mathbf{P}\). Then, \(q_{ii}\leq\ln p_{ii}\) for all \(i\)._ Proof.: By theorem 2, \(q_{ii}=1-\ln\theta_{i}\) for all \(i\), where \(\theta=(\theta_{1},\ldots,\theta_{n})\) is the unique fixed point of the vector function \(\mathbf{T}=(T_{1},\ldots,T_{n})\), defined by (4). Take \(i\in\{1,\ldots,n\}\). 
Thus, \[\theta_{i}=T_{i}(\theta)=\exp W_{0}(\tfrac{1}{p_{ii}}\sum_{j}p_{ij}\rho(\theta _{i},\theta_{j})),\] which yields \[\theta_{i}\ln\theta_{i}=\frac{1}{p_{ii}}\sum_{j}p_{ij}\rho(\theta_{i},\theta_{ j}).\] Using the definition of the function \(\rho\) in (3), the above equation can be rewritten as \[\left(\frac{\theta_{i}}{\mathrm{e}}-1\right)\rho(\theta_{i},\mathrm{e})= \theta_{i}\ln\theta_{i}-\theta_{i}=\frac{1}{p_{ii}}\sum_{j:j\neq i}p_{ij}\rho( \theta_{i},\theta_{j}). \tag{17}\] Now, since \(1-\ln\theta_{j}=q_{jj}\leq 0\), we have that \(\theta_{j}\geq\mathrm{e}\) for all \(j\). By lemma 5(3), \(\rho(\theta_{i},\theta_{j})\geq\rho(\theta_{i},\mathrm{e})\) for all \(j\). Hence, it follows from (17) that \[\left(\frac{\theta_{i}}{\mathrm{e}}-1\right)\rho(\theta_{i},\mathrm{e})\geq \frac{1}{p_{ii}}\sum_{j:j\neq i}p_{ij}\rho(\theta_{i},\mathrm{e})=\frac{1-p_{ii }}{p_{ii}}\rho(\theta_{i},\mathrm{e}),\] which simplifies to \(\theta_{i}\geq\mathrm{e}/p_{ii}\). Upon taking logarithms of both sides of this inequality and using \(q_{ii}=1-\ln\theta_{i}\), we arrive at \(q_{ii}\leq\ln p_{ii}\). **Lemma 9**.: _Let \(\mathbf{P}=(p_{ij})\) be a \(n\times n\) stochastic matrix. Let \(\Delta=\max\{p_{11},\ldots,p_{nn}\}\) and \(\delta=\min\{p_{11},\ldots,p_{nn}\}\). Suppose \(\delta>0\). Then, with respect to the max norm \(||\cdot||_{\infty}\), the function \(\mathbf{T}:\mathcal{X}\rightarrow\mathcal{X}\), defined as in (4), is Lipschitz continuous with Lipschitz constant \(K=\frac{1+(\frac{1}{2}-1)C(\alpha)}{1+\frac{1}{\Delta}}\) where \(C(\alpha)=-1+\frac{\alpha+1}{\alpha-1}\ln\alpha\) and \(\alpha=\mathrm{e}^{\frac{1}{\delta}-\frac{1}{\Delta}}\)._ **Lemma 10**.: _For all \(0<p<1\), it holds that_ 1. \(1+\mathrm{e}^{2-2/p}-2p>0\)_,_ 2. \(1-\mathrm{e}^{1-1/p}<\frac{4}{3}(1-p)\)_._ Proof.: To prove the first inequality, let \(f(t)=1+\mathrm{e}^{2-2/t}-2t\), which is continuous on the half-open interval \((0,1]\). A straightforward calculation reveals that \(f^{\prime}(t)=2t^{-2}(\mathrm{e}^{2-2/t}-t^{2})\) and \(f^{\prime\prime}(t)=4t^{-4}(1-t)\mathrm{e}^{2-2/t}\). So, \(f^{\prime\prime}(t)>0\) for all \(t\in(0,1)\). Consequently, \(f^{\prime}(t)<0\) for all \(t\in(0,1)\) because \(f^{\prime}\) is monotone increasing on \((0,1)\) and \(f^{\prime}(1)=0\). Hence, \(f\) is monotone decreasing on \((0,1)\). The result now follows from the fact that \(f(1)=0\). To prove the second inequality, consider the function \(f(t)=\frac{4}{3}(1-t)-1+\mathrm{e}^{1-1/t}\) which is differentiable on \(\{t\in\mathbb{R}\,|\,t>0\}\). Let \(p_{0}\) be a critical point of \(f\), then \(f^{\prime}(p_{0})=-\frac{4}{3}+\mathrm{e}^{1-1/p_{0}}p_{0}^{-2}=0\), yielding \(\mathrm{e}^{1-1/p_{0}}=\frac{4}{3}p_{0}^{2}\). Clearly, \(p_{0}\neq\frac{1}{2}\), hence, \[f(p_{0})=\tfrac{4}{3}(1-p_{0})-1+\tfrac{4}{3}p_{0}^{2}=\tfrac{4}{3}(\tfrac{1} {4}-p_{0}+p_{0}^{2})=\tfrac{4}{3}(\tfrac{1}{2}-p_{0})^{2}>0.\] So, all critical points of \(f\) have positive function values. In addition, we have \(f(1)=0\) and \(\lim_{t\to 0^{+}}f(t)=1/3>0\). Therefore, \(f(t)>0\) for all \(t\in(0,1)\).
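The comparisons above are easy to reproduce numerically. The following Python sketch is ours and purely illustrative, not part of the original text: the helper names `norm_inf`, `q_jlt`, `rho` and `theta_fixed_point` are hypothetical, the closed form for \(\rho\) is the one implied by the proof of lemma 5, the iteration follows the map \(\mathbf{T}\) of (4) as used in the proof of lemma 8, and the two closed-form generators for equal diagonals are those quoted from corollary 3 and [1, eqs. 30a & 30b]. The reconstruction of the off-diagonal entries of \(\mathbf{Q}_{J_{1}}\) via (7) is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import lambertw

def norm_inf(M):
    # Maximum absolute row sum ||M||_inf, the matrix norm used throughout.
    return np.max(np.sum(np.abs(M), axis=1))

def q_jlt(P):
    # JLT generator of eq. (9): q_ii = ln p_ii, q_ij = p_ij ln(p_ii) / (p_ii - 1).
    Q = np.zeros_like(P, dtype=float)
    for i, pii in enumerate(np.diag(P)):
        if pii < 1.0:                      # an absorbing row (p_ii = 1) keeps a zero row
            Q[i] = P[i] * np.log(pii) / (pii - 1.0)
            Q[i, i] = np.log(pii)
    return Q

def rho(x, y):
    # rho(x, y) = x*y*ln(x/y)/(x - y), with rho(x, x) = x (closed form from lemma 5's proof).
    return x if np.isclose(x, y) else x * y * np.log(x / y) / (x - y)

def theta_fixed_point(P, n_iter=200):
    # Fixed-point iteration for the map T of (4); requires p_ii > 0 for every i.
    # The diagonal of the J1-generator is then q_ii = 1 - ln(theta_i).
    n = P.shape[0]
    th = np.full(n, np.e)                  # theta_i = e corresponds to q_ii = 0
    for _ in range(n_iter):
        s = np.array([sum(P[i, j] * rho(th[i], th[j]) for j in range(n)) / P[i, i]
                      for i in range(n)])
        th = np.exp(lambertw(s).real)
    return th

# Lemma 2 (2x2) and lemma 3 (3x3): equal diagonal p <= 1/2, so P is not embeddable.
p = 0.4
P2 = np.array([[p, 1 - p], [1 - p, p]])
P3 = np.array([[p, 1 - p, 0], [(1 - p) / 2, p, (1 - p) / 2], [0, 1 - p, p]])
assert np.allclose(q_jlt(P2), (np.log(p) / (p - 1)) * (P2 - np.eye(2)))  # eq. (9) vs. [1, 30a & 30b]
for P in (P2, P3):
    I = np.eye(P.shape[0])
    d_j1 = norm_inf(P - expm((1 / p) * (P - I)))                 # conditional (J1) generator
    d_jlt = norm_inf(P - expm((np.log(p) / (p - 1)) * (P - I)))  # Jarrow's generator
    print(P.shape[0], "states: J1 error", round(d_j1, 4), "< JLT error", round(d_jlt, 4))

# For equal diagonals the iteration recovers the diagonal 1 - 1/p of corollary 3's closed form.
print("diagonal of Q_J1 via theta:", np.round(1 - np.log(theta_fixed_point(P3)), 4))
```

Running the script for \(p=0.4\) confirms the inequalities of lemmas 2 and 3; the same helpers can in principle be applied to the \(8\times 8\) credit rating matrix of the previous section.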
2309.09799
Watch the Speakers: A Hybrid Continuous Attribution Network for Emotion Recognition in Conversation With Emotion Disentanglement
Emotion Recognition in Conversation (ERC) has attracted widespread attention in the natural language processing field due to its enormous potential for practical applications. Existing ERC methods face challenges in generalizing to diverse scenarios due to insufficient modeling of context, ambiguous capture of dialogue relationships and overfitting in speaker modeling. In this work, we present a Hybrid Continuous Attributive Network (HCAN) to address these issues from the perspective of emotional continuation and emotional attribution. Specifically, HCAN adopts a hybrid recurrent and attention-based module to model global emotion continuity. Then a novel Emotional Attribution Encoding (EAE) is proposed to model intra- and inter-emotional attribution for each utterance. Moreover, aiming to enhance the robustness of the model in speaker modeling and improve its performance in different scenarios, a comprehensive emotional cognitive loss $\mathcal{L}_{\rm EC}$ is proposed to alleviate emotional drift and overcome the overfitting of the model to speaker modeling. Our model achieves state-of-the-art performance on three datasets, demonstrating the superiority of our work. Extensive comparative experiments and ablation studies on three benchmarks provide further evidence for the efficacy of each module. Additional generalization experiments show the plug-and-play nature of the EAE module in our method.
Shanglin Lei, Xiaoping Wang, Guanting Dong, Jiang Li, Yingjian Liu
2023-09-18T14:18:16Z
http://arxiv.org/abs/2309.09799v2
Watch the Speakers: A Hybrid Continuous Attribution Network for Emotion Recognition in Conversation With Emotion Disentanglement ###### Abstract Emotion Recognition in Conversation (ERC) has attracted widespread attention in the natural language processing field due to its enormous potential for practical applications. Existing ERC methods face challenges in achieving generalization to diverse scenarios due to insufficient modeling of context, ambiguous capture of dialogue relationships and overfitting in speaker modeling. In this work, we present a Hybrid Continuous Attributive Network (HCAN) to address these issues in the perspective of emotional continuation and emotional attribution. Specifically, HCAN adopts a hybrid recurrent and attention-based module to model global emotion continuity. Then a novel Emotional Attribution Encoding (EAE) is proposed to model intra- and inter-emotional attribution for each utterance. Moreover, aiming to enhance the robustness of the model in speaker modeling and improve its performance in different scenarios, A comprehensive loss function emotional cognitive loss \(\mathcal{L}_{\text{EC}}\) is proposed to alleviate emotional drift and overcome the overfitting of the model to speaker modeling. Our model achieves state-of-the-art performance on three datasets, demonstrating the superiority of our work. Another extensive comparative experiments and ablation studies on three benchmarks are conducted to provided evidence to support the efficacy of each module. Further exploration of generalization ability experiments shows the plug-and-play nature of the EAE module in our method. Natural language processing, emotion recognition in conversation, context modeling, dialogue relationship ## I Introduction Emotion Recognition in Conversation (ERC) is a rapidly growing research field within Natural Language Processing (NLP) that focuses on identifying the emotions conveyed in each utterance of a conversation. Different from the single sentence's emotional classification in explicit sentiment analysis [1, 2, 3, 4], this task contains samples with vastly different conversation lengths, ambiguous emotional expressions, and complex conversational relationships. Fig. 1 illustrates an example of the conversation scenario, where the utterance to be predicted (the last utterance) is influenced by the historical utterances of that conversation. As expected, ERC task has attracted the attention of many researchers due to its potential applications in various fields such as political campaigning and public opinion analysis [5, 6], human-robot interaction [7] and task-oriented dialogue system [8, 9, 10, 11]. Pervious ERC methods generally formulate the task as a supervised learning task based on different architectures of neural networks. This places a significant demand on the model's ability to capture the context of each utterance and effectively utilize speaker information [12]. Moreover, various modeling methods for context and speaker have significantly raised the baseline, but there are still two remaining challenges of ERC need to solve. (1) **Insufficient modeling of context.** Existing works on context modeling can be broadly categorized into two types: The recurrent based methods [13, 14, 15, 16] focus on establishing more natural context temporal correlation. However, these methods may struggle to capture the global emotional continuity in long conversations. 
Although attention-based methods [17, 18, 19, 20] aim to aggregate emotional features at multiple levels, they may not be as effective as temporal models in capturing emotional continuity between speakers over time. These methods adopt a single Fig. 1: A example for the conversation in the MELD dataset. and redundant network architecture, which results in a lack of generalization in context modeling. (2) **Ambiguous capture of dialogue relationships.** Studies [21, 22] provide evidence that generating emotional responses can effectively improve the performance of ERC models. It can be inferred that in real-life conversations, more direct conversational relationships often lead to more direct emotional transmission. Nonetheless, the ERC field still lacks of detailed modeling of the emotional influence within and between speakers in the perspective of dialogue relationship. (3) **Overfitting in speaker modeling.** In the ERC task, speakers often exhibit distinct characteristics in their emotional expressions due to differences in identity and personality. To better leverage fine-grained information, several studies have made significant contributions [23, 24]. Although intricate network designs have been developed from various perspectives, such as speaker psychological states, dialogue memory, and relative positional relationships, these approaches have yielded limited results. Specifically, The models have encountered overfitting issues in different dialogue scenarios, which has hindered their effectiveness. Therefore, these three limitations greatly hinder the application of ERC models in real-world scenarios, which is precisely what our work aims to address. We have proposed HCAN to effectively address the aforementioned issues. To tackle the problem of insufficient context modeling, we propose Emotional Continuation Encoding (ECE) to extract more robust features in different conversation situations, which comprehensively utilizes both the recurrent units and the attention blocks. The _Attribution Theory_[25] proposes that a stimulus triggers perception, which leads individuals to consider the situation, and physiological reactions lead to cognitive interpretation of physiological changes, both of which together result in emotional expression. Drawing inspiration from the _Attribution Theory_ and accurately capturing dialogue relationships, we present Emotional Attribution Encoding (EAE) based on IA-attention, which models the intra-attribution and inter-attribution of each sentence in an attribution perspective. Due to the diverse input perturbations in conversations [26, 27], we also design emotional cognitive loss to effectively enhance the model's robustness and extend the applicability of the overall model. The Emotional Cognitive loss \(\mathcal{L}_{EC}\) is composed of cross-entropy \(\mathcal{L}_{cross}\), KL divergence \(\mathcal{L}_{\mathrm{KL}}\) for predicting and recognizing emotions, and Adversarial Emotion Disentanglement loss \(\mathcal{L}_{adv}\)[28]. Among them, cross-entropy calculation serves as the main emotional loss, KL divergence can alleviate emotional drift, and Adversarial Emotion Disentanglement loss can mitigate the overfitting of the model to speaker modeling. Our contributions are three-fold: (1) By combining the recurrent and attention-based approaches, our proposed ECE module achieves strong robustness in global emotion continuity modeling across different datasets, particularly demonstrating outstanding performance on long conversation samples. 
(2) Consider capturing dialogue relationships in the perpective of _Attribution Theory_, we propose an original IA-attention to extract intra-attribution and inter-attribution features, which offers a more direct and accurate modeling of human emotional comprehension. (3) Our model achieves state-of-the-art performance on three datasets, demonstrating the superiority of our work. The proposed EAE module is a plugin module that exhibits strong generalization and effectiveness across different baselines. ## II Related Work ### _Emotion Recognition of Conversation_ The significant advancement of deep learning has greatly promoted the improvement of baseline performance in ERC tasks. Recently, ERC models can be categorized into two types: recurrent-based methods and attention-based methods. #### Ii-A1 Recurrent-Based Methods Through the use of a sequential network structure, recurrent-based methods have the potential to offer a more precise and authentic representation of the emotional dynamics present in a conversation: DialogueRNN [13] is the first to utilize a recurrent neural network for monitoring both speaker states and global states in conversations. COSMIC is a conversational model that integrates commonsense knowledge to enhance its performance. This model injects commonsense knowledge into Gated Recurrent Units (GRU) to capture features related to the internal state, external state, and intent state. The performance of SKAIG is enhanced by integrating action information inferred from the preceding context and the intention suggested by the subsequent context. DialogueCRN [29] is designed with multi-turn reasoning modules that extract and integrate emotional clues. These modules perform an iterative process of intuitive retrieval and conscious reasoning, which imitates the distinctive cognitive thinking of humans. With the goal of achieving a comprehensive understanding of the dialogue, CauAIN [30] first retrieves and enhances causal clues in the dialogue through an external knowledge base. Then, it models intra- and inter-speaker interactions using GRUs. #### Ii-A2 Attention-Based Methods To enable the extraction of emotional features at both coarse-grained and fine-grained levels, attention-based methods often employ a variety of encoders and decoders with different levels and structures. KET [17] extracts concepts related to non-pause words in neutral discourse from a knowledge base and enhances the semantic representation of vectors using a dynamic context graph attention mechanism. Finally, a hierarchical self-attention mechanism is utilized to model the dialogue level. By leveraging four distinct attention mechanisms, DialogXL [19]utilizes the language model layers of XLNet to encode multi-turn dialogues that are arranged in a sliding window. By regarding the internal structure of dialogue as a directed acyclic graph to encode utterances, DAG-ERC offers a more intuitive approach to modeling the information flow between the distant conversation background and the nearby context. TODKAT, as proposed in [31], presents a language model (LM) that is enhanced with topics through an additional layer specialized in detecting them. This model also incorporates commonsense statements obtained from a knowledge base based on the dialogue context. ### _Dialogue Relation Extraction_ The task of Relationship Extraction (RE) aims to identify the relationships that exist between pairs of entities within a document. 
While in dialogue scenarios, the task of extracting dialogue relations becomes more challenging due to the ellipsis of expression, the fuzziness of semantic reference and the presence of long-distance context dependencies. DialogRE [32] introduced the first human-annotated dataset for dialogue relationship extraction (DiaRE), which aims to capture the relationships between two arguments that arise in predictive conversations. Building upon this dataset, Chen [21] proposed a DiaRE method based on a graphical attention network that constructs meaningful graphs connecting speakers, entities, entity types, and corpus nodes to model the relationships between critical speakers. Similarly, Sun [22] proposed an utterance-aware graph neural network (ERMC-DisGCN) for ERMC, which leverages a relational convolution to propagate contextual information and takes into account the self-speaker dependency of interlocutors. Despite the promising results achieved by the aforementioned methods, they have not been validated on the ERC dataset. Furthermore, unlike directly identifying the current emotional state based on DiaRE, our approach extracts dialogue relationships from an attributional perspective and adds an emotional prediction loss to the task, which better aligns with human thought processes and enhances the robustness of the model in different scenarios. ## III Methodology In this section, we present the details of how to approach conversation modeling from a continuation-attribution perspective. The overview of HCAN is shown in Fig. 2, which is consist of Emotional Continuation Encoding and Emotional Attribution Encoding. ### _Task Statement_ In the ERC task, the goal is to identify the emotion \(s_{i}\) of each utterance \(u_{i}\) in a conversation \([u_{1},u_{2},...,u_{N}]\) by analyzing the dialogic context and the related speaker information \(p_{i}\) in speaker set \(\{p_{i},\dots,p_{M}\}\), where the emotion should be selected from a pre-defined emotional target set \(S\) and each utterance corresponds to one speaker in the set of speakers. ### _Emotional Continuation Encoding_ To mimic the natural conversational flow between speakers, the bidirectional LSTM is employed to encode the utterances' feature \(\mathbf{c}_{i}\in\mathbb{R}^{d_{u}}\) in a temporal sequence as follows: \[\mathbf{g}_{i}^{l},\mathbf{h}_{i}=\overbrace{\mathrm{LSTM}}^{l}(\mathbf{c}_ {i},\mathbf{h}_{i-1}) \tag{1}\] where \(\mathbf{h}_{i}\in\mathbb{R}^{2d_{u}}\) is the hidden state of the LSTM. Noted that the feature at the utterance-level of \(u_{i}\) is represented by \(\mathbf{c}_{i}\in\mathbb{R}^{d_{u}}\), and it is obtained through the employment of the COSMIC method for extraction. To avoid the vanishing of emotional continuity over long time spans, we utilized a multi-head attention module to aggregate the global information from the LSTM encoding result \(\mathbf{G}^{l}\) as follows: \[\mathbf{G}=\mathrm{Multi-Attn}(W_{Q}\mathbf{G}^{l},W_{K}\mathbf{G}^{l},W_{V} \mathbf{G}^{l})+\mathbf{G}^{l} \tag{2}\] Fig. 2: The overall architecture of HCAN consisting of two main components, namely Emotional Continuation Encoding and Emotional Attribution Encoding. where \(\mathbf{G}^{l}=[\mathbf{g}_{1}^{l},\ldots,\mathbf{g}_{n}^{l}]\), \(\mathbf{G}=[\mathbf{g}_{1},\ldots,\mathbf{g}_{n}]\), \(\mathbf{g}_{n}^{l}\in\mathbb{R}^{2d_{u}}\), \(\mathbf{g}_{n}\in\mathbb{R}^{2d_{u}}\) and \(W_{Q},W_{K},W_{V}\) are trainable parameters. 
The use of residual connections \(+\) ensures that even in the worst-case scenario, the global emotional state degrades to a temporal emotional state, thereby enhancing the robustness of the model. ### _Emotional Attribution Encoding_ Emotional Attribution Encoding is the core of this article, consisting of the IA-attention module and Emotional Cognitive loss. The IA-attention module efficiently captures the dialogue relationship and establishes emotional influence from the perspective of attribution. The Emotional Cognitive loss effectively mitigates the overfitting of modeling on different datasets. #### Iii-C1 IA-Attention Inspired by the attribution theory of emotion, we examine the emotional influence in dialogue relationships in an attributional perspective. Specially, we model emotional influence as intra-attribution and inter-attribution. To achieve this, we introduce IA-attention, which is inspired by several works about self-attention mechanism [33, 34, 35]. This method views each global utterance representation \(\mathbf{g}_{i}\) as a query, which is mapped to intra-attribution partial space \(Q_{a}\) and inter-attribution partial space \(Q_{e}\) to get two different query embeddings \(\mathbf{q}_{i_{a}},\mathbf{q}_{i_{e}}\). Meanwhile, the historical utterance \([\mathbf{g}_{1},\ldots\mathbf{g}_{i-1}]\) are also projected to \(K\) and \(V\) partial space to obtain \(\mathbf{k}_{i}\) and \(\mathbf{v}_{i}\). To summarize, for each utterance, we apply different attribution attention matrices to get the intra-attribution weighted sum and inter-attribution weighted sum which are divided by each utterance's speaker \(p_{i}\). The specific formula is as follows: \[[\mathbf{q}_{i_{a}};\mathbf{q}_{i_{e}}]=[W_{Q_{a}};W_{Q_{e}}] \mathbf{g}_{i} \tag{3}\] \[[\mathbf{k}_{1},\ldots,\mathbf{k}_{n}]=W_{K_{\mathrm{IA}}}[ \mathbf{g}_{1},\ldots,\mathbf{g}_{n}]\] (4) \[[\mathbf{v}_{1},\ldots,\mathbf{v}_{n}]=W_{\hat{V}_{\mathrm{IA}}} [\mathbf{g}_{1},\ldots,\mathbf{g}_{n}]\] (5) \[\tilde{\mathbf{v}_{i}}=\sum_{j<i}\left(\delta_{p_{j},p_{i}}\frac{ e^{\mathbf{q}_{i_{a}}^{T}\mathbf{k}_{j}}}{Z}+(1-\delta_{p_{j},p_{i}})\frac{e^{ \mathbf{q}_{i_{e}}^{T}\mathbf{k}_{j}}}{Z}\right)\mathbf{v}_{j} \tag{6}\] where \(W_{Q_{a}},W_{Q_{e}},W_{K_{\mathrm{IA}}},W_{V_{\mathrm{IA}}}\in\mathbb{R}^{2d_ {u}\times 4d_{u}}\) are trainable parameters, \(\mathbf{q}_{i_{a}},\mathbf{q}_{i_{e}},\mathbf{k}_{j},\mathbf{k}_{i}\in \mathbb{R}^{4d_{u}}\) and \(Z\) is the normalized factor. To enable a more realistic perception in the dialogic relationship, the Gaussian Self-attention Mechanism [36] is introduced to distinguish the varying effects of dialogic temporal location. Assuming that the emotional attribution of historical utterances to the current utterance follows a normal distribution, the encoding results of the IA-attention module will be assigned weights that obey a Gaussian distribution, which is calculated as follows: \[\hat{\mathbf{v}}_{i}=\sum_{j<i}\phi(d_{i,j}|\mu,\sigma)\tilde{ \mathbf{v}}_{j} \tag{7}\] where \(\hat{\mathbf{v}}_{i}\in\mathbb{R}^{4d_{u}}\),\(\phi\) is a Gaussian distribution, \(\mu\) and \(\sigma\) are their corresponding learnable parameters, \(d_{i,j}\)[36] stands for distance measuring the turn-taking interval between speakers. #### Iii-C2 Emotional Cognitive Loss The emotional overfitting of the ERC task mainly focuses on emotional drift and speaker modeling. 
Motivated by multi-task learning [37, 38], our proposed Emotional Cognitive loss \(\mathcal{L}_{\mathrm{EC}}\) is mainly composed of basic cross-entropy \(\mathcal{L}_{cross}\), KL divergence \(\mathcal{L}_{\mathrm{KL}}\) for predicting and recognizing emotions, and Adversarial Emotion Disentanglement loss \(\mathcal{L}_{adv}\). Among them, cross-entropy calculation is the main emotional loss, KL divergence can alleviate emotional drift, and Adversarial Emotion Disentanglement loss can overcome the overfitting of the model to speaker modeling. **Cross-Entropy Loss \(\mathcal{L}_{cross}\)**, the key elements of which are computed as follows: \[\mathcal{D}_{i}^{\mathrm{src}}=\mathrm{Softmax}(W_{\mathrm{D}}( \lambda_{\theta}(\hat{\mathbf{v}}_{i})+\mathbf{g}_{i})) \tag{8}\] \[\hat{y}_{i}=\mathrm{Softmax}(W_{o}\mathcal{D}_{i}^{\mathrm{src}}+ b_{o})\] (9) \[\mathcal{L}_{\mathrm{cross}}=-\frac{1}{\sum_{l=1}^{L}\tau(l)}\sum _{i=1}^{L}\sum_{k=1}^{\tau(i)}y_{i,k}\log(\hat{y}_{i,k}) \tag{10}\] where \(L\) is the total number of conversations in the trainset, \(\tau(i)\) is the number of utterances in the conversation, \(y_{i,k}\) denotes the one-hot vector and \(\hat{y}_{i,k}\) denotes probability vector for candidate emotional class \(n\) of the \(i^{th}\) utterance in \(l^{th}\) sample. **KL Divergence \(\mathcal{L}_{\mathrm{KL}}\)** are calculated as follows: \[\mathcal{D}_{i}^{\mathrm{tmp}}=\mathrm{Softmax}(W_{\mathrm{D}} \lambda_{\theta}(\hat{\mathbf{v}}_{i})) \tag{11}\] \[\mathcal{L}_{\mathrm{KL}}=\mathrm{KL-Divergence}(\mathcal{D}_{i}^ {\mathrm{tmp}},\mathcal{D}_{i}^{\mathrm{src}}) \tag{12}\] where \(\lambda_{\theta}\in\mathbb{R}^{4d_{u}\times 2d_{u}}\) and \(W_{\mathrm{D}}\in\mathbb{R}^{2d_{u}\times|\mathcal{E}|}\) denotes the emotional state generation network. \(|\mathcal{E}|\) is the number of emotion labels. By utilizing a shared weight matrix \(W_{\mathrm{D}}\) to map the predicted emotion \(\mathcal{D}^{\mathrm{tmp}}\) and the recognized emotion \(\mathcal{D}^{\mathrm{src}}\), the model is able to generate more accurate emotional representations in the current emotional state and make more precise inferences based on historical utterances. **Adversarial Emotion Disentanglement**: loss \(\mathcal{L}_{adv}\) is proposed to further prevent the model from excessively focusing on the emotional information of a dialogue role, inspired by adversarial training methods [26, 39, 40, 28]. To be more specific, given an input sentence, we obtain its hidden representations using LSTM. Next, the model classify them based on predicted probability distributions. Then, we obtain the classification cross-entropy loss \(L_{cross}\). However, existing methods often being influenced by a specific dialogue role, it is difficult to consider the overall semantic information of the whole conversation, Therefore, we apply the Fast Gradient Value (FGV) technique [39, 40] to approximate the worst-case perturbation as a noise vector: \[v_{noise}=\epsilon\frac{g}{\|g\|};\;\text{where}\;g=\nabla_{e} \mathcal{L}_{cross} \tag{13}\] Here, the gradient represents the first-order derivative of the loss function \(\mathcal{L}_{cross}\), and \(e\) denotes the direction of rapid increase in the loss function. We perform normalization and use a small \(\epsilon\) to ensure the approximation is reasonable. Then, we add the noise vector \(v_{noise}\) and conduct a second forward pass, obtaining a new adversarial loss \(\mathcal{L}^{\prime}_{cross}\). 
Therefore, we obtain the adversarial disentanglement loss function as follow: \[\mathcal{L}_{adv}=\mathcal{L}_{cross}+\mathcal{L}^{\prime}_{cross} \tag{14}\] The overall training loss, namely \(\mathcal{L}_{\mathrm{EC}}\) calculated as: \[\mathcal{L}_{\mathrm{EC}}=\mathcal{L}_{\mathrm{cross}}+\alpha\mathcal{L}_{ \mathrm{KL}}+\beta\mathcal{L}_{adv} \tag{15}\] where \(\alpha and\beta\) are hyperparameter mentioned in Implementation Details. As a result, the combined loss facilitates the model's learning of emotional continuity coding and emotional attribution coding, ultimately improving its overall performance. ## IV Experiments ### _Dataset_ We assess the performance of HCAN on three benchmark datasets which are IEMOCAP [42], MELD [43] and EmoryNLP [44]. **IEMOCAP** is a dataset recorded as dyadic conversational video clips with eight speaker participating in the training set while two speaker in testing set. **MELD** dataset is a multimodal dataset that has been expanded from the EmotionLines dataset with seven emotional labels. MELD is obtained from the popular TV show _Friends_ and comprises over 1400 dialogues and 13000 utterances. **EmoryNLP** is a textual dataset also collected from the TV series _Friends_. The dataset comprises utterances that are categorized into seven distinct emotional classes. In this work, we only consider the emotional classes for the MELD and EmoryNLP datasets. Additionally, we maintain consistency with COSMIC in terms of the train/val/test splits. The details of datasets are presented in TABLE I and TABLE II. ### _Baselines_ For the baselines, we mainly select two groups of outstanding models to compare with our approach. #### Iv-B1 Recurrent-Based Methods **DialogueRNN**[13] dynamically models emotions by taking into account the current speaker, contextual content, and emotional state, with a focus on distinguishing between different speakers. **COSMIC**[14] is a conversational model that incorporates commonsense knowledge to improve its performance which injects commonsense knowledge into Gated Recurrent Units to capture the internal state, external state, and intent state' features. **SKAIG**[45] is improved by incorporating action information inferred from the preceding context and the intention suggested by the subsequent context. Additionally, it utilized a CSK method to represent the edges with knowledge, and introduced a graphics converter to handle them. **DialogueCRN**[29] designs multi-turn reasoning modules to extract and integrate emotional clues which performs an iterative process of intuitive retrieval and conscious reasoning, mimicking the unique cognitive thinking of humans. #### Iv-B2 Attention-Based Methods **KET**[17] utilizes external commonsense knowledge through the use of hierarchical self-attention and context-aware graph attention. This approach allows for dynamic incorporation of knowledge into transformers, resulting in a knowledge-enriched model. **DAG-ERC**[12] regards the internal structure of dialogue as a directed acyclic graph to encode utterances, providing a more intuitive approach to model the information flow between the distant conversation background and the nearby context. **TODKAT**[31] proposes a language model (LM) augmented with topics, which includes an additional layer specialized in detecting topics, and incorporates commonsense statements obtained from a knowledge base based on the dialogue context. 
**CoG-BART**[20] presents a new method that employs a contrastive loss and a task for generating responses to ensure that distinct emotions are mutually exclusive. ### _Implementation Detail_ Following COSMIC [14], we only utilize utterance-level text features that are fine-tuned using RoBERTa [46] to accomplish the ERC task. We conduct all HCAN experiments with a learning rate of 1e-4. The batch size is set to 32 and the dropout rate is kept at 0.2. The number of LSTM layers was set to 2, 1, and 1 on IEMOCAP, MELD, and EmoryNLP datasets, respectively. The number of heads in standard multi-head attention and IA-attention are 8 and 4, respectively. The hyperparameter \(\alpha\) is set as 0.1, 0.2, 0.2 for IEMOCAP, MELD, and EmoryNLP datasets while \(\beta\) is unified as 0.05. The results reported in our experiments are based on the average score of 5 random runs on the test set. A server with one NVIDIA A100(40G) GPU is used to conduct our experiments. The addtional reproduction experiments are aligned to the baselines strictly. ### _Main Result_ TABLE III shows the main results of HCAN on three benchmarks compared to previous methods. The results demonstrate that our HCAN achieves the best performance across all three datasets. Furthermore, compared to the previous state-of-the-art (SOTA) models on IEMOCAP, MELD, and EmoryNLP, HCAN outperforms them by 1.18%, 0.95%, and 0.63%, respectively. IEMOCAP is known for having longer multi-turn dialogues and a well-balanced distribution of emotions, which allows for a more comprehensive evaluation of model performance. Our significant improvement(1.18%) in performance on this dataset successfully demonstrates the model's ability to model long-distance emotional continuity and effectiveness in dyadic conversational scenario. MELD and EmoryNLP datasets consist of multiple dialogue roles and shorter conversations, which closely resemble real-life scenarios. Additionally, these datasets have highly imbalanced emotion categories. Our model's improvement on these datasets demonstrates its effectiveness in capturing complex dialogue relationships and interpersonal emotional dependencies, as well as its robustness in recognizing different emotions. It is worth noting that the previous SOTA models were achieved using different models for each dataset, as the sample characteristics of each dataset vary significantly. However, our method unifies the SOTA across these benchmarks, demonstrating the generalizability of our approach in different application scenarios. ### _Ablation Studies_ As shown in TBALE IV, we conducted more detailed ablation experiments to quantify the contributions of the ECE module, EAE module, \(\mathcal{L}_{\mathrm{KL}}\), \(\mathcal{L}_{\mathrm{sec}}\) to the performance. (1) For ECE module, the ablation experiments leads to a performance decrease of 2.75%, 0.60%and 0.32% on IEMOCAP, MELD and EmoryNLP respectively, demonstrating its generalization on different scenarios and especially effectiveness in long conversation. (2) For EAE module, the removal of EAE leads to a performance decrease of 0.67%, 1.65% and 1.99%on IEMOCAP, MELD and EmoryNLP respectively. The results elaborate the effectiveness of EAE and the importance of emotional attribution modeling based on dialogue relationship. (3) For KL loss, the removal of \(\mathcal{L}_{\mathrm{KL}}\) causes a decrease in model performance by 0.93% on the EmoryNLP dataset. 
This suggests the effectiveness of KL in detecting emotional shifts, as this dataset often contains emotional shifting samples. Overall, the unique contributions of different modules jointly contribute to the generalization and effectiveness of HCAN. ### _The Exploration of Generality_ Regarding the universality of the EAE module, as it has strong transferability, we conducted experiments by adding it to different models based on recurrent and attention-based methods shown in TBALE II. The results show that our EAE module can effectively improve the performance of models based on different architectures. Moreover, the performance improvement on IEMOCAP, a dataset with long dialogues, is stronger than that on MELD, which has shorter conversations. Meanwhile, we observe that the improvement in models based on recurrent methods(i.e. COSMIC) is greater than that in models based on attention mechanisms(i.e TODKAT). This is logical because our EAE module is implemented based on attention mechanisms, which are naturally superior to temporal structures in modeling various levels of emotional attribution. It is reasonable to assume that attention-based methods implicitly capture emotional attribution to some extent, while our method captures more comprehensive emotional attribution information, leading to performance improvement. ### _The Robustness of Speaker Modeling_ By incorporating the ECE module to capture the conversational dynamics, our model has successfully captured rich speaker characteristics. Our approach to modeling speaker robustness is primarily reflected in the **Adversarial Emotion Disentanglement** loss. To quantify the contribution of this loss in mitigating speaker modeling overfitting, we conducted experiments similar to those in the EAE module's generalization study. TBALE V shows that removing the \(\mathcal{L}_{adv}\) module results in a certain degree of performance degradation for the HCAN model. Conversely, adding the \({}_{+\mathcal{L}_{adv}}\) module to other baselines leads to significant performance improvements. For the SKAIG and COSMIC models, which utilize a large amount of common sense knowledge to model speaker emotions, our loss function effectively prevents overfitting on the IEMOCAP and MELD datasets, while maintaining their performance improvements. However, for models that focus on modeling conversational dynamics, such as DAG-ERC, the effect of loss improvement is limited. This is because their modeling of conversational dynamics enhances the robustness of speaker modeling to some extent. ### _Case Study_ Fig. 3 shows a segment of a dyadic conversation. Intuitively, the anger expressed by speakerB in the \(n^{th}\) sentence seems to have been mainly triggered by his own surprise towards "kiss" and the question posed by speakerA in the \(n\)-\(1^{th}\) sentence. Meanwhile, the perfunctory response from speakerA in the \(2^{rd}\) sentence may have also contributed to some extent.It is evident that the distribution of predictive emotion aligns with that of identifying the emotion, and the attention weights further demonstrate model's ability to effectively capture the relationship in long-distance conversations. ## V Conclusion Insufficient modeling of context and ambiguous capture of dialogue relationships have been persistent challenges in improving the performance of ERC models. In this work, we propose HCAN to significantly addresses these issues. 
Our proposed ECE module achieves strong robustness in modeling global emotion continuity across different datasets by combining recurrent and attention-based approaches. It particularly demonstrates outstanding performance on long conversation samples. Meanwhile, the proposed EAE module extracts intra-attribution and inter-attribution features, which offers a more direct and accurate modeling of human emotional comprehension in the perspective of Attribution Theory. The proposed comprehensive loss function, namely Emotional Cognitive Loss \(\mathcal{L}_{\mathrm{EC}}\), which effectively mitigates emotional drift and addresses the issue of overfitting in speaker modeling. Moreover, EAE module exhibits strong generalization and effectiveness when added to current models. Our model achieves state-of-the-art performance on three datasets, demonstrating the superiority of our work.
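To make the computations of Section III concrete, the following PyTorch-style sketch shows one possible way to implement the ECE step (eqs. 1-2) and the IA-attention step (eqs. 3-6). It is our own illustrative code, not the authors' implementation: the class names, hidden sizes and single-pass structure are assumptions, the Gaussian positional weighting of eq. (7) and the loss terms of eqs. (8)-(15) are omitted, and torch's built-in multi-head attention supplies the \(W_{Q},W_{K},W_{V}\) projections of eq. (2).

```python
# Illustrative sketch only; hyperparameters and module names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECE(nn.Module):
    """Emotional Continuation Encoding: BiLSTM plus residual multi-head attention (eqs. 1-2)."""
    def __init__(self, d_u, n_heads=4, num_layers=1):
        super().__init__()
        self.lstm = nn.LSTM(d_u, d_u, num_layers=num_layers, bidirectional=True, batch_first=True)
        self.attn = nn.MultiheadAttention(2 * d_u, n_heads, batch_first=True)

    def forward(self, c):                           # c: [B, T, d_u] utterance features
        g_l, _ = self.lstm(c)                       # [B, T, 2*d_u]
        g, _ = self.attn(g_l, g_l, g_l)             # global aggregation
        return g + g_l                              # residual connection of eq. (2)

class IAAttention(nn.Module):
    """Intra-/inter-attribution attention over historical utterances (eqs. 3-6)."""
    def __init__(self, d_u):
        super().__init__()
        d_in, d_out = 2 * d_u, 4 * d_u
        self.W_Qa = nn.Linear(d_in, d_out, bias=False)   # intra-attribution query space
        self.W_Qe = nn.Linear(d_in, d_out, bias=False)   # inter-attribution query space
        self.W_K = nn.Linear(d_in, d_out, bias=False)
        self.W_V = nn.Linear(d_in, d_out, bias=False)

    def forward(self, g, speakers):                 # g: [T, 2*d_u], speakers: [T] speaker ids
        T = g.size(0)
        q_a, q_e, k, v = self.W_Qa(g), self.W_Qe(g), self.W_K(g), self.W_V(g)
        out = v.new_zeros(T, v.size(-1))            # utterance 0 has no history and stays zero
        for i in range(1, T):
            same = (speakers[:i] == speakers[i]).unsqueeze(-1)          # [i, 1]
            q = torch.where(same, q_a[i].expand(i, -1), q_e[i].expand(i, -1))
            w = F.softmax((q * k[:i]).sum(-1), dim=0)                   # shared normaliser Z
            out[i] = (w.unsqueeze(-1) * v[:i]).sum(0)                   # weighted sum of eq. (6)
        return out

# Toy forward pass on a 5-utterance, 2-speaker conversation.
d_u, T = 100, 5
feats = torch.randn(1, T, d_u)                      # pre-extracted utterance features c_i
speakers = torch.tensor([0, 1, 0, 1, 0])
g = ECE(d_u)(feats).squeeze(0)                      # [T, 2*d_u] global representations
v_tilde = IAAttention(d_u)(g, speakers)             # [T, 4*d_u] attribution-aware summaries
```

The single softmax over both intra- and inter-attribution scores mirrors the shared normaliser \(Z\) in eq. (6); in a full reimplementation the Gaussian turn-distance weighting and the classification head of eqs. (7)-(9) would follow this step.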
2310.20268
Constructing Sample-to-Class Graph for Few-Shot Class-Incremental Learning
Few-shot class-incremental learning (FSCIL) aims to build machine learning models that can continually learn new concepts from a few data samples, without forgetting knowledge of old classes. The challenges of FSCIL lie in the limited data of new classes, which not only leads to significant overfitting but also exacerbates the notorious catastrophic forgetting problem. As proved in early studies, building sample relationships is beneficial for learning from few-shot samples. In this paper, we extend this idea to the incremental scenario and propose a Sample-to-Class (S2C) graph learning method for FSCIL. Specifically, we propose a Sample-level Graph Network (SGN) that focuses on analyzing sample relationships within a single session. This network helps aggregate similar samples, ultimately leading to the extraction of more refined class-level features. Then, we present a Class-level Graph Network (CGN) that establishes connections across class-level features of both new and old classes. This network plays a crucial role in linking the knowledge between different sessions and helps improve overall learning in the FSCIL scenario. Moreover, we design a multi-stage strategy for training the S2C model, which mitigates the training challenges posed by limited data in the incremental process. The multi-stage training strategy is designed to build the S2C graph from the base to the few-shot stages, and to improve its capacity via an extra pseudo-incremental stage. Experiments on three popular benchmark datasets show that our method clearly outperforms the baselines and sets new state-of-the-art results in FSCIL.
Fuyuan Hu, Jian Zhang, Fan Lyu, Linyan Li, Fenglei Xu
2023-10-31T08:38:14Z
http://arxiv.org/abs/2310.20268v1
# Constructing Sample-to-Class Graph for Few-Shot Class-Incremental Learning ###### Abstract Few-shot class-incremental learning (FSCIL) aims to build machine learning model that can continually learn new concepts from a few data samples, without forgetting knowledge of old classes. The challenges of FSCIL lies in the limited data of new classes, which not only lead to significant overfitting issues but also exacerbates the notorious catastrophic forgetting problems. As proved in early studies, building sample relationships is beneficial for learning from few-shot samples. In this paper, we promote the idea to the incremental scenario, and propose a Sample-to-Class (S2C) graph learning method for FSCIL. Specifically, we propose a Sample-level Graph Network (SGN) that focuses on analyzing sample relationships within a single session. This network helps aggregate similar samples, ultimately leading to the extraction of more refined class-level features. Then, we present a Class-level Graph Network (CGN) that establishes connections across class-level features of both new and old classes. This network plays a crucial role in linking the knowledge between different sessions and helps improve overall learning in the FSCIL scenario. Moreover, we design a multi-stage strategy for training S2C model, which mitigates the training challenges posed by limited data in the incremental process. The multi-stage training strategy is designed to build S2C graph from base to few-shot stages, and improve the capacity via an extra pseudo-incremental stage. Experiments on three popular benchmark datasets show that our method clearly outperforms the baselines and sets new state-of-the-art results in FSCIL. The code is available at github.com/DemonJianZ/S2C. ## I Introduction The volume of data on the internet is constantly increasing, and in response to this growing data, incremental learning [31] has seen significant development in recent years. When new data is labeled for new classes, it introduces the challenge of Class-Incremental Learning (CIL) [15, 29, 24], and a prominent issue that emerges is catastrophic forgetting [16]. The catastrophic forgetting refers to the decline in discriminative ability for previously learned classes. While many solutions to CIL involve abundant training samples [7], practical applications sometimes have only few samples, because of the challenges of data collection or labeling. For example, in scenarios involving personalized content recommendations while considering user privacy, the available data is often severely limited. This scenario of CIL with few training samples is termed Few-Shot Class-Incremental Learning (FSCIL) [45]. Similar to CIL, learning new classes in FSCIL results in catastrophic forgetting of prior classes. Furthermore, due to the scarcity of instances from new classes, _overfitting_ tends to occur on these restricted inputs. This, in turn, heightens the learning difficulty of incremental tasks. As shown in Fig. 1, the training of FSCIL is class-incremental and in sequence, and the data of past classes is unavailable. The incremental model is evaluated across all previously encountered classes at any sessions. When addressing FSCIL challenges, one plausible approach is to employ traditional CIL methods, including widely used techniques like knowledge distillation [46]. 
While CIL approach has partially alleviated the problem of catastrophic forgetting, straightforwardly adopting there methods in FSCIL is ill-advised, given the scarcity of training samples that leads to overfitting and inadequate performance on previously learned classes [20]. On the other hand, for each few-shot session, another approach is to applied Few-Shot Learning (FSL) methods to the current few samples. For example, as proved in [27, 33], using class means (prototype features) to mitigate overfitting is effective in FSL. In several recent FSL works [47], building sample relationships using Graph Neural Network (GNN) [48] is beneficial for learning from very few samples. GNN can express complex interactions between samples by performing feature aggregation from neighbors, and mining refined information from a few samples between support and query data. However, these FSL methods ignore the incremental sessions, and show unacceptable catastrophic forgetting. In summary, current FSCIL methods face a challenge in balancing the effective learning of new tasks with the forgetting suppression of old tasks. But some of these methods [5, 50, 46] focus on bringing techniques from CIL to suppress catastrophic forgetting, while some others [14, 44, 35] aim to enhance model adaptation for few-shot tasks, thus they could hardly effectively address both Fig. 1: Illustration of our proposed S2C for FSCIL. **Top:** the setting of FSCIL. **Bottom**: Sample-level to Class-level graphs. aspects in FSCIL. Inspired by the use of GNN in FSL, in this paper, we investigate to build the relationships of cross-session classes using limited samples in FSCIL, aiming to enhance the performance of individual few-shot tasks and reduce the forgetting at the same time. As shown in Fig. 1, this paper introduces an innovative _Sample-to-Class (S2C)_ graph learning approach, which establishes connections from the sample level to the class level. **The model**: The S2C model has two major components to build graph relations from sample-level to class-level. First, the Sample-level Graph Network (SGN) evaluates the similarity between samples within a single few-shot session, clusters samples from the same class, and distinguishes samples from different classes. The SGN yields more refined features and mitigates the overfitting problem to some extent. Moreover, to construct the semantic relationship among multiple classes from different sessions during incremental learning, we propose a Class-level Graph Network (CGN). The CGN forges connections between old and novel classes, thereby augmenting the capacity to differentiate classes across sessions and alleviating the catastrophic forgetting. **The training**: To smoothly deploy the S2C model in FSCIL, we propose a novel training strategy, which comprises three main stages. The first stage takes advantage of the ample training data available in the base session to initialize the CGN, thereby preserving a substantial amount of prior knowledge for the subsequent learning of few-shot tasks. The second stage is designed to address the issue of insufficient sample-level relationship mining due to the limited number of samples. This is achieved through the S2C pseudo incremental learning, which adapts the S2C model to the FSL task beforehand. During this pseudo-incremental process, FSL tasks are randomly sampled from the base dataset, and virtual FSCIL tasks are generated. In the last stage, we deploy the S2C model to a real FSCIL scenario for further optimisation. 
Our contributions can be summarized in three main aspects: 1. We introduce a novel S2C method for FSCIL, comprising the SGN and the CGN. This innovative structure serves to bridge the relationships between old and new classes at two distinct levels. To the best of our knowledge, our work pioneers the incorporation of graph neural networks into FSCIL from two unique perspectives. 2. We propose a novel S2C multi-stage training strategy, which trains the S2C model incrementally, allowing S2C to adapt and construct graphs effectively even with limited samples. With the three stages, S2C establishes semantic relationships across multiple sessions, mitigating the issue of catastrophic forgetting. 3. We conduct comprehensive experiments on benchmark datasets, including CIFAR100, miniImageNet, and CUB200. The empirical results substantiate the superiority of our approach over state-of-the-art methods, demonstrating a substantial performance margin. ## II Related Work **Few-Shot Learning.** Few-shot learning aims at rapidly generalizing to new tasks with limited samples, leveraging the prior knowledge learned from a large-scale base dataset. The existing methods can be divided into two groups. Optimization-based methods [10, 17, 38] try to enable fast model adaptation with few-shot data. Metric-based algorithms [26, 12, 39] utilize a pretrained backbone for feature extraction, and employ proper distance metrics between support and query instances. Recent research tries to leverage GNNs to explore complex similarities among examples. DPGN [25] builds up a dual graph to model distribution-level relations of examples for FSL. ECKPN [4] proposes an end-to-end transductive GNN to explore the class-level knowledge. **Meta-learning.** Meta-learning is commonly described as the concept of "learning to learn." This approach involves the extraction of knowledge and insights from multiple learning episodes and then leveraging this acquired experience to enhance performance in future learning tasks [49]. Meta-learning is typically divided into two distinct stages. In the first stage, known as the meta-training stage, a model is trained using multiple source or training tasks. This training process aims to acquire initial network parameters that exhibit robust generalization capabilities. In the second stage, known as the meta-testing stage, new tasks are introduced, and the conditions for these tasks are identical to those of the source tasks. Meta-learning is inherently well-suited for FSL, and numerous research studies have employed meta-learning as an approach for FSL. This enables models to acquire knowledge and adapt from a limited number of samples associated with new tasks [50, 51]. **Class-Incremental Learning.** Class-Incremental Learning aims to learn from a sequence of new classes without forgetting old ones, which is now widely discussed in various computer vision tasks. Current CIL algorithms can be divided into three groups. The first group estimates the importance of each parameter and prevents important ones from being changed [1, 40]. The second group utilizes knowledge distillation to maintain the model's discriminability [16]. Other methods rehears former instances to overcome forgetting [28, 34, 41, 42, 43, 44]. [14] pre-allocates classifiers for future classes, which needs extra memory for feature tuning and is unsuitable for FSCIL. Various approaches have been developed to address the challenge of retaining knowledge in incremental learning scenarios. 
iCaRL [16] employs replay and knowledge distillation to maintain previously learned knowledge. Other works explore different strategies such as saving embeddings instead of raw images, leveraging generative models for data rehearsal, task-wise adaptation, and output normalization to combat forgetting and adapt to new knowledge. **Few-Shot Class-Incremental Learning.** FSCIL addresses the dual challenges of FSL and CIL. Specifically, FSCIL focuses on learning from a minimal number of novel samples while retaining previously acquired knowledge. TOPIC [45] introduced the concept of FSCIL and utilized neural gas for topology preservation in the embedding space. Subsequent works [50] adapted existing CIL approaches to tackle FSCIL challenges. Other methods like [5] leverage word vectors to mitigate the intrinsic difficulty of data scarcity in FSCIL. An emerging approach involves meta-training on base class data, as seen in [50], by simulating a number of fake incremental episodes for test scenarios. However, this often requires extra meta-training phases and parameter freezing, limiting practicality in real-world scenarios and the adaptability of models to novel concepts. Indeed, while there has been significant progress in addressing forgetting and overfitting issues, achieving a unified framework to tackle both problems remains a challenge. The distribution calibration method [51] introduced a promising approach to mitigate overfitting, but it faces limitations in scalability when applied to the context of FSCIL. Finding solutions that effectively combine both forgetting and overfitting mitigation in a scalable framework remains an active area of research. ## III Problem Description: FSCIL FSCIL has multiple continual tasks or sessions that appear in streams. Once the model starts to learn the current task, none of the previous data is available anymore. Besides, the evaluation of the model at each session involves the classes in all previous and current sessions. In concrete terms, we are given \(T\) classification tasks with \(\mathcal{D}_{\mathrm{train}}\) = \(\{\mathcal{D}_{\mathrm{train}}^{t}\}_{t=0}^{T}\), where \(\mathcal{D}_{\mathrm{train}}^{t}=\{(x_{i},y_{i})\}_{i=0}^{NK}\) represents the training samples at session \(t\). \(x_{i}\in\mathcal{X}^{t}\) and \(y_{i}\in\mathcal{Y}^{t}\) are the \(i\)-th data and the corresponding label. We also denote \(\mathcal{X}^{t}\) and \(\mathcal{Y}^{t}\) as the sample set and label space at the \(t\)-th session. The FSCIL task is to train a model from a continuous data stream in a class-incremental form, _i.e._, training sets \(\{\mathcal{D}_{\mathrm{train}}^{0},\mathcal{D}_{\mathrm{train}}^{1},\ldots \mathcal{D}_{\mathrm{train}}^{t}\}\). The label sets from different sessions are disjoint, _i.e._, \(\mathcal{Y}^{i}\cap\mathcal{Y}^{j}=\varnothing\) for \(i\neq j\). At the \(t\)-th learning session, only \(\mathcal{D}_{\mathrm{train}}^{t}\) can be obtained for network training. When we step into the evaluation stage, the test dataset \(\mathcal{D}_{\mathrm{test}}^{t}\) should include test data from all classes that appear in previous and current sessions, _i.e._, all encountered label sets \(\{\mathcal{Y}^{0}\cup\mathcal{Y}^{1}\cdots\cup\mathcal{Y}^{t}\}\) at the \(t\)-th session. For the first session, \(\mathcal{D}_{\mathrm{train}}^{0}\) has sufficient samples, and this session is also called the base training session. For each class in the subsequent sessions, we have only a few samples. 
This training data is usually organized in an \(N\)-way \(K\)-shot format, where \(N\) denotes the number of classes and \(K\) the number of samples per class in the dataset. To measure an FSCIL model, we calculate the accuracy on the test set \(\mathcal{D}_{\mathrm{test}}^{t}\) at each session \(t\). ## IV Method In FSCIL, the number of samples in each session is small, and the incremental training causes forgetting of the old tasks. Traditional FSL methods [47] use GNN to establish relationships among few-shot samples, which effectively mitigates the overfitting problem. Inspired by the use of GNN in FSL, we introduce GNN into FSCIL to create a sample-level graph that builds the underlying relationships among few-shot samples for each session. However, a graph built only inside each session is insufficient for the incremental scenario, because the previous samples are not available during the current few-shot training. We seek to further establish dependencies among multiple classes from different sessions during the incremental learning process. To this end, we introduce a cross-session class-level graph built on the basis of the sample-level graph. As shown in Fig. 2, given the two kinds of graphs, we also develop a novel Sample-to-Class (S2C) graph training strategy to leverage the deep relations in prediction. The framework includes sample-level and class-level graph networks, and leverages a multi-stage training strategy to improve the graph networks. ### _Sample to Class (S2C) Graph Network_ #### 4.1.1 Sample-Level Graph Network In traditional FSL, GNN is used to establish relationships between support and query samples. Inspired by this, we introduce the Sample-level Graph Network (SGN) to facilitate the learning of each FSL task. As shown in Fig. 3, for a current few-shot task, we first define the nodes of the SGN using all available sample features belonging to different classes. Let \(\mathcal{G}_{\mathrm{SGN}}\) = \(\{\mathcal{V}_{\mathrm{SGN}},\mathcal{E}_{\mathrm{SGN}}\}\), where the node set \(\mathcal{V}_{\mathrm{SGN}}\) = \(\{\mathbf{z}_{1},\mathbf{z}_{2},\ldots,\mathbf{z}_{k}\}\) consists of the features \(\mathbf{z}\) of each sample. The edge set \(\mathcal{E}_{\mathrm{SGN}}\) of SGN is defined as the relationship between nodes within each FSL task: \[e_{ij}^{\mathrm{SGN}}=\phi(\mathbf{z}_{i}-\mathbf{z}_{j}), \tag{1}\] where \(\phi\), containing two Conv-BN-ReLU blocks, is the encoding network that transforms the instance similarity to a certain scale. In this way, we construct a fully-connected sample-level graph based on the feature representations of all samples in the few-shot task. In the sample-level graph, each node corresponds to a feature, and each edge represents the relationship between the two connected nodes. By applying iterative aggregation operations of the GNN on both node information and edge information, the features of the samples are continuously updated, and the relationships between samples are re-established during this process. This allows for refined sample-level features and a more accurate understanding of the relationships between samples. Then, the embeddings obtained by SGN are averaged for each class as a refined class-level feature: \[\mathbf{p}_{c}^{\mathrm{SGN}}=\frac{1}{K}\sum_{i=1}^{K}(\mathbf{z}_{i}+\sum_{j }(e_{ij}^{\mathrm{SGN}}\cdot\mathbf{z}_{j})), \tag{2}\] where \(\mathbf{p}_{c}^{SGN}\) represents the \(c\)-th refined class-level feature of the few-shot task, and \(K\) is the number of samples in each class. 
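To make the SGN construction above concrete, the following is a minimal PyTorch-style sketch of Eqs. (1)-(2) under stated assumptions: the encoding network \(\phi\) is approximated with Linear-BN-ReLU layers acting on flat feature vectors (the paper uses two Conv-BN-ReLU blocks), and a softmax normalization of the edge weights is added, which the paper does not specify. All dimensions and class counts are illustrative; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class SGNSketch(nn.Module):
    """Minimal sketch of the Sample-level Graph Network (Eqs. 1-2).

    Edge weights e_ij = phi(z_i - z_j); refined class-level features are the
    class-wise average of each node plus its edge-weighted neighbors.
    """
    def __init__(self, feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        # phi: the paper's two Conv-BN-ReLU blocks, approximated here with
        # Linear-BN-ReLU layers on flat feature vectors (assumption).
        self.phi = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, z: torch.Tensor, labels: torch.Tensor):
        # z: (n, d) sample features of one few-shot task; labels: (n,)
        n, d = z.shape
        diff = z.unsqueeze(1) - z.unsqueeze(0)               # (n, n, d), z_i - z_j
        e = self.phi(diff.reshape(n * n, d)).reshape(n, n)   # Eq. (1): e_ij = phi(z_i - z_j)
        e = torch.softmax(e, dim=1)                          # normalize neighbor weights (assumption)
        z_refined = z + e @ z                                # z_i + sum_j e_ij * z_j
        protos = [z_refined[labels == c].mean(dim=0) for c in labels.unique()]
        return e, torch.stack(protos)                        # Eq. (2): class-wise averages

# toy usage: a 5-way 5-shot task with 128-d backbone features
sgn = SGNSketch(feat_dim=128)
z = torch.randn(25, 128)
labels = torch.arange(5).repeat_interleave(5)
edges, prototypes = sgn(z, labels)
print(edges.shape, prototypes.shape)  # (25, 25), (5, 128)
```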
In addition, to enhance the SGN model's capability to discover relationships between few-shot samples, we introduce the triplet loss into SGN: \[L_{\mathrm{SGN}}=\max(0,\|\mathbf{z}_{i}-\mathbf{z}_{P}\|^{2}-\|\mathbf{z}_{i}- \mathbf{z}_{N}\|^{2}+m), \tag{3}\] where \(m\) is a margin parameter which can be used to control the distance between positive and negative samples. \(\mathbf{z}_{P}\) and \(\mathbf{z}_{N}\) represent the features of positive samples and negative samples, respectively. This loss function is designed to decrease the distance between samples from the same class while simultaneously increasing the distance between samples from different classes. This strategy aims to improve the discriminative power of the SGN in distinguishing between samples and effectively capturing sample-level relationships. After the SGN's in-depth exploration of the relationships among the few-shot samples, we obtain the most representative class-level features of the few-shot classes. However, SGN can only assess sample-level relationships within a few-shot session. That is, when a new session begins, the relationships of the old samples cannot be used in the current training, yielding catastrophic forgetting. Motivated by this, we try to establish class-level relationships among multiple few-shot sessions. #### 4.1.2 Class-Level Graph Network The relationship established by SGN is limited to the samples within a same session and cannot be established for class-level features under different sessions. In other words, the model needs to adapt to new FSL tasks while simultaneously retaining proficiency in previously encountered tasks. To this end, we use class-level features as a medium to form dependencies between old and new classes, and construct the Class-level Graph Network (CGN) in incremental learning scenarios. CGN leverages previously learned knowledge to aid in the learning of the current few-shot task, allowing for more robust and efficient learning across multiple sessions. As shown in Fig. 4, in CGN, we combine the Transformer [21] with the GNN to build links between novel and old classes by utilizing the precise capture of global information. Specifically, the base graph and the refined class-level features exported by SGN are used as input to the CGN. Then, we use the multi-head attention mechanism to construct the relationship between the old and new classes, and use the GNN to aggregate this information to iteratively calibrate the prototypes of the novel class. Eventually, a class-level feature graph with well-established relationships is output. We set the parameters query \(\mathbf{q}\), key \(\mathbf{k}\) and value \(\mathbf{v}\) to \[\mathbf{v}=\mathbf{p}_{c}^{\mathrm{SGN}},\quad\mathbf{k}=W_{k}^{T}\mathbf{v},\quad\mathbf{q}=W_{q}^{T}\mathbf{v}, \tag{4}\] where \(W_{k}\) and \(W_{q}\) are the learnable parameters of the linear projection functions. The class-level feature formula after the CGN calibrating operation is as follows: \[\mathbf{p}_{c}^{\mathrm{CGN}}=\mathbf{p}_{c}^{\mathrm{SGN}}+\frac{\mathbf{k} ^{T}\mathbf{q}}{\sqrt{d}}\mathbf{v}, \tag{5}\] where \(\sqrt{d}\) is a scaling factor. To keep the distinction between the new class and the old class, we define the following per-sample loss function to learn CGN: \[L_{\mathrm{CGN}}=L\left(G\left[cos(\mathbf{z}_{i},\mathbf{p}_{c}^{\mathrm{CGN }})\right],y_{i}\right). \tag{6}\] 
Fig. 3: Sample-level Graph Neural Network. 
Fig. 2: Our Sample-to-Class learning scheme for few-shot class-incremental learning. In the base session, we pre-train our feature extractor and construct the base class graph. In the pseudo-incremental learning stage, we synthesize virtual tasks to make the model adapt quickly to the few-shot scenario. 
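The CGN calibration of Eqs. (4)-(5) can be sketched in the same style. The snippet below uses a single attention head and a softmax over all class nodes (old base-graph prototypes concatenated with the current SGN-refined prototypes); the paper employs a multi-head Transformer block, and the softmax normalization is an assumption, so this is only an illustration of the calibration step, not the authors' code.

```python
import torch
import torch.nn as nn

class CGNSketch(nn.Module):
    """Minimal sketch of the Class-level Graph Network calibration (Eqs. 4-5)."""
    def __init__(self, dim: int):
        super().__init__()
        self.W_q = nn.Linear(dim, dim, bias=False)   # query projection
        self.W_k = nn.Linear(dim, dim, bias=False)   # key projection
        self.scale = dim ** 0.5                      # the sqrt(d) factor of Eq. (5)

    def forward(self, p_sgn: torch.Tensor, base_nodes: torch.Tensor):
        # p_sgn: (C_new, d) SGN-refined prototypes; base_nodes: (C_old, d) base-graph nodes
        v = torch.cat([base_nodes, p_sgn], dim=0)    # values: all class nodes (Eq. 4)
        k = self.W_k(v)                              # keys
        q = self.W_q(p_sgn)                          # queries from the new prototypes
        attn = torch.softmax(q @ k.t() / self.scale, dim=-1)  # normalization is an assumption
        return p_sgn + attn @ v                      # Eq. (5): calibrated class-level features

# toy usage: 60 base classes, one 5-way incremental session, 128-d prototypes
cgn = CGNSketch(dim=128)
p_calibrated = cgn(torch.randn(5, 128), torch.randn(60, 128))
print(p_calibrated.shape)  # (5, 128)
```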
#### 4.1.3 S2C loss function S2C is trained by optimizing the following loss function: \[L=L_{\mathrm{SGN}}+\alpha L_{\mathrm{CGN}}, \tag{7}\] where \(\alpha\) is a pre-defined scaling factor. With the help of the SGN, the CGN connects class-level features with the rich semantic information obtained from SGN. CGN establishes connections between class-level features from all sessions through an attention mechanism, resulting in a graph with abundant class-level features. This graph is then used for subsequent label prediction tasks, enhancing the model's ability to make predictions. ### _S2C Training Procedure for FSCIL_ Nevertheless, it is still difficult to build the S2C graph, because of the very small number of samples for each session. In FSCIL, before the few-shot incremental sessions, a base session is used for pre-training the model [27]. In the base session, there are an ample number of training instances available to build the initial model. Inspired by meta learning [49], we propose to pre-learn how to build graphs from the sample level to the class level within the base session. Specifically, as shown in Fig. 2, we design a multi-stage training strategy for S2C. The strategy consists of three stages, namely the Graph pre-construction stage, the S2C pseudo-incremental training stage and the Few-shot incremental training stage. #### 4.2.1 Graph pre-construction stage Before few-shot sessions, the base session offers a substantial volume of data that can serve as prior knowledge for the model to tackle subsequent few-shot tasks, thereby helping to alleviate the overfitting issue. Nevertheless, this prior knowledge is often underutilized and doesn't effectively aid in learning subsequent knowledge, creating a significant hindrance to FSCIL. To tackle this problem, we employ a strategy to compute class-level features enriched with semantic knowledge by extracting features from a substantial number of samples. A base graph is built based on the similarity relationships between these class-level features, which can be updated and adapted to subsequent tasks. Specifically, we first pretrain a feature extractor in the base session, using training samples from \(\mathcal{D}^{0}_{\mathrm{train}}\): \[\theta^{*}=\min_{\theta}\mathcal{L}\left(G\left[f_{\theta}(x)\right],y\right), \tag{8}\] where \(\mathcal{L}(\cdot)\) represents the cross-entropy loss function, \(f_{\theta}(\cdot)\) is the feature extractor parameterized by \(\theta\) and \(G(\cdot)\) denotes the classifier. Let \(\mathcal{G}_{\mathrm{base}}=\{\mathcal{V},\mathcal{E}\}\) denote the base graph, where \(\mathcal{V}_{\mathrm{base}}=\{\mathbf{v}_{1},\mathbf{v}_{2},\ldots,\mathbf{v}_ {M}\}\) is the node set and \(\mathcal{E}_{\mathrm{base}}\) is the edge set. In the base graph, we first initialize the nodes with the base class prototypes: \[\mathbf{v}_{m}=\frac{1}{N}\sum_{n=1}^{\left|\mathcal{D}^{0}_{\mathrm{train}} \right|}f_{\theta}(x_{n})\cdot\mathbb{I}(y_{m}=y_{n}), \tag{9}\] where \(N\) is the number of samples belonging to the \(m\)-th class and \(\mathbb{I}(\cdot)\) is the indicator function. Then, the base graph edges \(\mathcal{E}\) are defined as the similarity between nodes \(\mathbf{v}_{m}\) and \(\mathbf{v}_{n}\): \[e_{mn}=\frac{\mathbf{v}_{m}^{\mathrm{T}}\mathbf{v}_{n}}{\|\mathbf{v}_{m}\|\| \mathbf{v}_{n}\|}. \tag{10}\] Establishing the base graph lays the foundation for subsequent incremental class learning. The base graph not only provides prior knowledge for the learning of new classes but also serves as a medium for connecting the SGN to the CGN. 
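As an illustration of the graph pre-construction stage, the sketch below builds the base graph of Eqs. (9)-(10): class prototypes (mean backbone feature per class) as nodes and pairwise cosine similarities as edges. The feature dimensionality and class counts are placeholders, and the input features are assumed to already be backbone embeddings \(f_{\theta}(x)\).

```python
import torch
import torch.nn.functional as F

def build_base_graph(features: torch.Tensor, labels: torch.Tensor):
    """Sketch of base-graph construction (Eqs. 9-10).

    Nodes are per-class mean features (prototypes); edges are pairwise
    cosine similarities between prototypes.
    """
    classes = labels.unique()
    nodes = torch.stack([features[labels == c].mean(dim=0) for c in classes])  # Eq. (9)
    normed = F.normalize(nodes, dim=1)
    edges = normed @ normed.t()                                                # Eq. (10)
    return nodes, edges

# toy usage: 600 base-session embeddings, 10 per class over 60 classes, 128-d
feats = torch.randn(600, 128)
labs = torch.arange(60).repeat_interleave(10)
nodes, edges = build_base_graph(feats, labs)
print(nodes.shape, edges.shape)  # (60, 128), (60, 60)
```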
#### 4.2.2 S2C pseudo-incremental training stage In order to enhance the S2C model's capability to learn from few-shot data, we let the model learn ahead of time how to construct graphs in FSCIL scenarios. To this end, we devise the pseudo-incremental learning process. This process operates within the base session and is tailored to bolster the model's capacity to effectively adapt to new FSL tasks. To enhance the model's discriminative ability for new classes in forthcoming tasks, we introduce a _meta-learning-based pseudo-incremental training paradigm_. This paradigm equips the model with the skills to learn how to effectively grasp a new class using only a few samples. Specifically, we stochastically draw \(N\) FSL tasks, denoted as \(T_{1}\) to \(T_{N}\), from the training set \(\mathcal{D}^{0}_{\mathrm{train}}\). These tasks are characterized by an \(N\)-way \(K\)-shot setup, satisfying the condition \(\mathcal{Y}^{1}\cap\mathcal{Y}^{2}\cap\ldots\mathcal{Y}^{n}=\varnothing\). Note that these FSL tasks serve as foundational tasks within the pseudo-incremental process. Moreover, we employ manifold mixup [22] to fuse instances, treating the resulting fused instances as virtual incremental classes. We fuse two samples from different FSL tasks to generate new virtual samples \(\mathbf{z}\) which serve as data for the virtual task \(\mathcal{T}\): \[\mathbf{z}=\sum_{i}^{NK}\lambda f_{\theta}(x_{i}^{t_{1}})+(1-\lambda)f_{ \theta}(x_{i}^{t_{2}}), \tag{11}\] where \(\lambda\in[0,1]\) is sampled from a Beta distribution, and \(\mathbf{z}\) represents the feature of the sample in the FSL task. Superscripts \(t_{1}\) and \(t_{2}\) denote different tasks. In this way, we strive to imbue the model with enhanced proficiency in assimilating and adapting to new knowledge in the FSCIL context. The pseudo-incremental learning paradigm enables the S2C model to achieve the capability of building graph relationships among samples and classes before the few-shot sessions. In the following subsections, we introduce how to build the sample-level to class-level graph in the FSCIL process. #### 4.2.3 Few-shot incremental training stage Once the feature backbone is stabilized during the base session, and both the SGN and CGN have been trained in the S2C adaptation stage, our S2C model is ready to be applied to the task of few-shot class-incremental learning. 
Fig. 4: Class-level Graph Neural Network. 
In the subsequent stages, we feed the novel few-shot data into the pre-trained SGN, which updates the nodes within the CGN. During the prediction phase, we utilize a metric-based evaluation approach to make predictions regarding the labels of the query nodes. In S2C, SGN (see Fig. 3) is built to analyze the relationships of a few samples to aggregate similar samples and obtain refined class-level features. SGN matches the class-level features after learning with the base graph, which not only strengthens SGN's ability to learn FSL tasks but also reduces the interference to other classes. CGN (see Fig. 4) extends the calibrated class-level features to the base class graph and predicts the labels of query samples. With the full cooperation of SGN and CGN, our S2C model learns more representative features while constructing the links between multiple classes from different sessions. 
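The pseudo-incremental step of Eq. (11) amounts to a manifold-mixup fusion of two disjoint base-session FSL tasks; a minimal sketch is given below. The Beta-distribution parameter `alpha` is a placeholder, since the paper does not report the value used, and the tensors stand for backbone features of two sampled tasks.

```python
import torch

def virtual_incremental_task(feats_t1: torch.Tensor, feats_t2: torch.Tensor,
                             alpha: float = 2.0):
    """Sketch of the pseudo-incremental fusion (Eq. 11): features from two
    disjoint N-way K-shot tasks drawn from the base session are mixed to
    synthesize virtual samples for a virtual incremental task."""
    lam = torch.distributions.Beta(alpha, alpha).sample()   # lambda in [0, 1]
    return lam * feats_t1 + (1.0 - lam) * feats_t2          # virtual features z

# toy usage: two 5-way 5-shot tasks with 128-d backbone features
z_virtual = virtual_incremental_task(torch.randn(25, 128), torch.randn(25, 128))
print(z_virtual.shape)  # (25, 128)
```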
#### 4.2.4 Discussion In the multi-stage training process of S2C, we initially build the base graph to preserve the knowledge from the base dataset, which could aid in subsequent class-incremental learning. Then, we conducted an S2C adaptation stage, allowing the S2C model to adapt to the few-shot data beforehand. Finally, we deployed the S2C model in the real FSCIL tasks. This multi-stage approach enables the S2C model to perform effectively in FSCIL. In general, we introduce the S2C model for FSCIL, which comprises two essential components: SGN and CGN. S2C is designed to establish feature dependencies among various sessions based on both sample-level and class-level features. We have also outlined a multi-stage training strategy for S2C, which enables the model to be effectively deployed in FSCIL tasks. ## V Experiment ### _Dataset_ We evaluate the effectiveness of the proposed method on the datasets MiniImageNet, CUB200-2011 and CIFAR100. 
* _MiniImageNet_[53] is a subset of the ImageNet dataset, specifically designed for evaluating models' performance in scenarios where only a limited number of examples are available for each class. MiniImageNet contains 100 classes, each with 600 color images of size 84\(\times\)84 pixels. 
* _CIFAR100_[52] consists of 100 classes, each representing a different object category. The dataset contains 60,000 32\(\times\)32 RGB images, with 600 images per class. 
* _Caltech-UCSD Birds-200-2011_[54] CUB-200 is a widely used benchmark dataset in the field of fine-grained bird species recognition. The dataset contains 200 different bird species, each of which comes with a set of annotated images. The dataset consists of 11,788 images in total. 

For MiniImageNet and CIFAR100, 100 classes are divided into 60 base classes and 40 new classes. The new classes are formulated into eight 5-way 5-shot incremental tasks. For CUB200, 200 classes are divided into 100 base classes and 100 incremental classes, and the new classes are formulated into ten 10-way 5-shot incremental tasks. ### _Training and evaluation protocol_ For CIFAR100, we use ResNet20, while for other datasets we use ResNet18. We optimize with stochastic gradient descent using momentum 0.9, and the learning rate is set to 0.1 and decays with cosine annealing. We evaluate models after each session on the test set \(\mathcal{D}_{\mathrm{test}}\) and report the Top 1 accuracy. We also use a performance dropping rate (PD) that measures the absolute accuracy drop in the last session w.r.t. the accuracy in the first session, _i.e._, \(\text{PD}=A_{0}-A_{N}\), where \(A_{0}\) is the classification accuracy of the base session and \(A_{N}\) is the accuracy of the last session. ### _Training details_ We adhere to standard data preprocessing and augmentation protocols, encompassing random resizing, random flipping, and color jittering. Our model training employs a batch size of 512 during the base session, and a batch size of 128 in each incremental session. On the miniImageNet dataset, the base session spans 500 epochs, with each incremental session spanning 100 iterations. Initial learning rates stand at 0.1 for the base session and 0.05 for incremental sessions. For CIFAR-100, we conduct 300 epochs in the base session, with each incremental session spanning 100 iterations. Initial learning rates remain consistent at 0.1 for both base and incremental sessions. On the CUB-200 dataset, we train for 100 epochs during the base session, and each incremental session covers 80 iterations. 
Initial learning rates remain consistent at 0.1 for the base session and 0.05 for incremental sessions. Across all experiments, a cosine annealing strategy governs the learning rate, and the optimizer utilized is SGD with momentum 0.9. The top-1 accuracy and performance dropping (forgetting) rate are used to evaluate models after each session. ### _Major comparison_ We compare our proposed S2C method with existing methods and report the performance on three FSCIL benchmark datasets in Tables I, II and III. These methods include classical CIL methods, such as iCaRL [16], EEIL [2], and Rebalancing [8], as well as continual-trainable FSCIL methods like TOPIC [20], backbone-frozen FSCIL methods such as SPPR [35], DeepEMD/Cosine/NegCosine [11, 23, 26], CEC [27], and FACT [33], and model-complement methods such as MCNet [36] and MFS3 [37]. We also include a simple baseline, labeled as 'finetune', where the model is directly fine-tuned using the limited available data. On the whole, we observe that S2C consistently outperforms the current SOTA methods on the benchmark datasets. The accuracy of the S2C method is higher than that of other methods, and its performance dropping rate is lower. Specifically, our PD outperforms the SOTA results by 0.39 on CIFAR100, 0.82 on miniImageNet and 0.50 on CUB200. The poor performance of CIL methods (such as iCaRL) indicates that classical CIL methods primarily focus on extending the model with sufficient instances and are not well-suited for few-shot tasks. S2C has better performance than Decoupled-DeepEMD/Cosine/NegCosine [11, 23, 26], CEC [27], FACT [33], MCNet [36] and MFS3 [37]. This reveals that in FSCIL, continual-trainable methods encounter overfitting issues and perform poorly in incremental sessions; it is important to train the FSL tasks well, which strengthens the new-task constraints and reduces the impact on old tasks. As shown in Fig. 6, we compared the accuracy of each session on the MiniImageNet dataset with the CEC [27], FACT [33], MCNet [36] and MFS3 [37] methods. It can be seen from the figure that, in the FSCIL task learning process, the performance of our method in each session is higher than that of the other methods. ### _Ablation Study_ We conducted an in-depth analysis of the significance of each component within the S2C approach on the datasets MiniImageNet, CIFAR100, and CUB-200-2011. The results are presented in Fig. 5. We designed models with varying combinations of core S2C elements for comparison. The "Baseline" model denotes the scenario where the backbone network directly learns FSCIL tasks. By examining Fig. 5, we deduce the following insights: 1) The incorporation of the CGN module effectively mitigates the issue of catastrophic forgetting that is observed in the baseline model during FSCIL tasks. 2) The integration of the SGN module elevates the learning performance of FSL tasks. This enhancement is reflected not only in FSL tasks but also overall across sessions, highlighting the significance of SGN for FSL task training. 3) Combining both SGN and CGN modules not only enhances FSL task performance but also takes into consideration semantic conflicts arising due to data imbalance and other factors between old and new classes. Through ablation experiments, we establish that both the SGN and CGN modules significantly contribute to the success of FSCIL tasks. 
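For completeness, the evaluation protocol of Sec. V-B (per-session top-1 accuracy and the performance dropping rate \(\text{PD}=A_{0}-A_{N}\)) can be computed as in the short sketch below; the per-session accuracies used in the example are hypothetical values, not results from the tables.

```python
def fscil_metrics(session_accuracies):
    """Sketch of the FSCIL evaluation protocol: top-1 accuracy is recorded on
    the joint test set after every session, and the performance dropping rate
    is PD = A_0 - A_N (base-session accuracy minus last-session accuracy)."""
    pd = session_accuracies[0] - session_accuracies[-1]
    return {"PD": pd, "last_session_acc": session_accuracies[-1]}

# toy usage with hypothetical per-session accuracies (percent)
print(fscil_metrics([75.0, 70.9, 66.5, 62.6, 59.2, 56.1, 53.4, 51.2, 49.3]))
```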
### _Visualization of Incremental Session_ We visually represent the learned decision boundaries using t-SNE on the CUB-200-2011 dataset, as depicted in Fig. 7: 1) Fig. 7(a): This panel illustrates the decision boundary of the training set, where we trained on five old classes and three new classes with a limited number of samples. In this visualization, circles denote the embedded space of samples, while stars represent class-level prototypes. Notably, we observe that the few samples of each new class are closely clustered together. This is due to the SGN refining features through inter-sample associations. Furthermore, the CGN aids in aligning categories with strong similarities, fostering connections between old and new classes. The visualization reinforces that class-level attributes of both old and new classes remain distinguishable. 2) Fig. 7(b): This panel shows the application of the trained FSCIL task to the test set. Notably, the use of S2C enhances prototype adaptation and fine-tunes the decision boundary between old and new classes. Overall, these visualizations underscore the efficacy of the S2C approach in adapting prototypes and refining decision boundaries for effective FSCIL tasks on the CUB-200-2011 dataset. ## VI Conclusion In this paper, we studied the FSCIL problem from the perspective of building relationships from the sample-level to the class-level graph. We proposed a novel Sample-to-Class Graph Network (S2C) which consists of a Sample-level Graph Network (SGN) and a Class-level Graph Network (CGN). SGN is used to build the relationships between samples in the N-way K-shot few-shot tasks to mine more favorable refined features. CGN is used to construct the context relationship between old and novel classes. Moreover, an S2C multi-stage training strategy was employed to improve the adaptation of S2C to novel classes. In general, S2C enhances the long-term learning ability of the deep learning model by simultaneously overcoming the catastrophic forgetting and generalization problems. Experimental results on benchmark datasets showed that our model is superior in both performance and adaptability to state-of-the-art methods. In our future work, we plan to enhance the edge information between graph nodes by incorporating additional data to further investigate the relationships and dependencies within few-shot data, and to construct multiple mapping relationships from the sample-level graph to the class-level graph to establish a more stable and robust multi-task relationship.
2309.13919
The investigation of the hydride superconductor's parabolic-like critical temperature under high pressure
Under the weak coupling, we investigate the critical temperatures under pressure of H3S, LaH10, CaH6, and Tl2Ba2CaCu2O8+{\delta} superconductors. The superconducting mechanism takes into account the electron-phonon interaction as well as the Coulomb interaction. Under high pressure, the critical temperature equation is calculated as a function of the fractional volume of the unit cell, and the Birch-Murnaghan equation of state is used to determine the relationship between fraction volume and pressure. Using this equation, we can analyze the parabolic-like relationship between the critical temperature and pressure of a superconductor. The parabolic behavior of these superconductors' critical temperature versus pressure can fit well. The maximal critical temperature of Tl2Ba2CaCu2O8+{\delta}, H3S, LaH10, and CaH6 superconductors are predicted to be 112 K at 7 GPa, 197 K at 140 GPa, 252 K at 143 GPa, and 207 K at 174 GPa, respectively.
P. Tongkhonburi, P. Udomsamuthirun, A. Changjan, T. Kruaehong
2023-09-25T07:36:21Z
http://arxiv.org/abs/2309.13919v1
The investigation of the hydride superconductor's parabolic-like critical temperature under high pressure ## Abstract Under the weak coupling, we investigate the critical temperatures under pressure of H\({}_{3}\)S, LaH\({}_{10}\), CaH\({}_{6}\), and Tl\({}_{2}\)Ba\({}_{2}\)CaCu\({}_{2}\)O\({}_{8,\,\delta}\) superconductors. The superconducting mechanism takes into account the electron-phonon interaction as well as the Coulomb interaction. Under high pressure, the critical temperature equation is calculated as a function of the fractional volume of the unit cell, and the Birch-Murnaghan equation of state is used to determine the relationship between fraction volume and pressure. Using these equation, we can analyze the parabolic-like relationship between the critical temperature and pressure of a superconductor. The parabolic behavior of these superconductors' critical temperature versus pressure can fits well. The maximal critical temperature of Tl\({}_{2}\)Ba\({}_{2}\)CaCu\({}_{2}\)O\({}_{8,\,\delta}\), H\({}_{3}\)S, LaH\({}_{10}\), and CaH\({}_{6}\) superconductors are predicted to be 112 K at 7 GPa, 197 K at 140 GPa, 252 K at 143 GPa, and 207 K at 174 GPa, respectively. ## 1 Introduction One of the most significant expectations of superconductors in current physics has been the existence of superconductors at ambient temperature. Since 1911, Onnes [1] has discovered superconductivity in mercury with a critical temperature of around 4.2 K, and in 1986, Bednorz and Muller [2] have discovered cuprate superconductor. After that, the physicist displayed the critical temperature, which is higher than liquid nitrogen's boiling point. In order to increase the critical temperature of the superconductor, one of the key variables that researchers intend to take into account is pressure. The Tl\({}_{2}\)Ba\({}_{2}\)CaCu\({}_{2}\)O\({}_{8,\,\delta}\) ( Tl2212) superconductor experiment has demonstrated the effects of pressure on superconductivity with pressure up to 30 GPa, where the critical temperature displayed parabolic-like behavior with an elevated value of 114 K around 7 GPa [3]. At a pressure of 7 GPa, the maximum critical temperature in \(YBa_{2}Cu_{3}O_{7-d}\) (Y123), was around 132 K. Over the whole range of oxygen concentration, the investigation was seen under pressures of up to 17 GPa, and the results showed parabolic-like dependences [4.5]. The HgBa2Ca2Cu3O8,d (Hg1223) superconductor was found to have the greatest critical temperature, measuring 153 K at 22 GPa in a slightly underdoped sample [6] and 164 K at 31 GPa in an optimally doped sample.[7]. The extreme critical temperature found in a hydride superconductor has been shown at pressures greater than that of cuprate superconductors. The H\({}_{3}\)S superconductor has a critical temperature of 203 at 155GPa[8]. Calcium hydride (CaH\({}_{6}\)) has critical temperatures of 220-235 K at 150 GPa[9] and 215 K at 172 GPa.[10]. The equation of state, which depicts a link between the volume of a unit cell and the pressure of a substance under pressure, is another experiment regarding the impact of pressure on physical characteristics. During measuring superconductivity in cuprate superconductors at high pressures, anisotropic behavior became a key factor. The bulk module, the compressibilities, the interrelationships of the crystal structure, and the anisotropy of the cuprate material are all still consistent with the Murnaghan equation of state [11]. 
In the LaH\({}_{10}\) superconductor, the equation of state and superconductivity at pressures up to 140 GPa were provided [12], and third-order Birch-Murnaghan fitting was used to account for pressure-volume data [13-14]. There have been reports on the CaH\({}_{6}\) superconductor's critical temperature dependency on pressure as well as its equation of state.[15] According to the theoretical view on hydride superconductors, it was evident that electron-phonon interaction and Coulomb repulsion occurred when hydrogen-rich superconductors were under high pressure and in their superconducting state. The strong electron-phonon coupling and Coulomb potential were used for reporting on the H\({}_{3}\)S superconductor [16]. However, the isotope effect exponent was closer to the BCS framework according to the findings of an experiment using hydrogen and deuterium sulfide at high pressure [17]. The isotope effect exponent was noticeably seen close to the BCS model in the proposed LaH\({}_{10}\) superconductor [18] with critical temperature roughly 250 K at 170 GPa. There have been many suggestions to apply the weak-coupling model at high pressure with adjusted density of states and carrier dispersion relation [19-23] to explain the rise in critical temperature. The electron-phonon process is established as the essential framework for explaining superconductivity in the weak-coupling limit. Although a static electron-phonon interaction can be identified, the screening Coulomb interaction under high pressure caused by the electrical charge of the crystal structure can also collaborate. In conventional superconductor, the only important electron-phonon interaction is essential and a suitable approximation for the phonon spectrum, according to Morel and Anderson's model [24], which was developed after they researched the electron-electron interaction, including Coulomb repulsion. As a little decreasing mechanism for the critical temperature, Coulomb repulsion is employed. However, the hydride superconductors are subject to extremely high pressure. The Coulomb effect should be greater than before that the impact of Coulomb potential is taken into account. Using the weak-coupling interaction model, we aim to explain the parabolic-like critical temperature of the cuprate and hydride superconductor in this investigation. Extending the BCS model with parameters under pressure enabled the calculation of the critical temperature formula. Using our derived formula and the Murnaghan equation of state, we compared the experimental data of the cuprate and the hydride superconductor with the findings. Finally, we demonstrated that our model could explain the critical temperature of both cuprate and hydride superconductors, which resembles a parabolic curve of critical temperature at high pressure. **2. Model and calculation** The BCS theory's weak-coupling framework, which is appropriate for our calculation, takes into account the impact of external pressure on the critical temperature of superconductors. We can derive the Green's function of the superconducting state using the BCS Hamiltonian and the mean field theory as \(G(k,\omega_{n})=\frac{1}{i\omega_{n}-\varepsilon_{k}\tau_{3}+\Delta_{k}\tau_{ 1}}\), where \(\tau_{1}\)and \(\tau_{3}\) are the Pauli matrices and \(\omega_{n}\)is the Matsubara frequency. 
The gap equation, \(\Delta_{k}=\sum_{k}V_{kk^{\prime}}<C_{-k\downarrow}C_{k\uparrow}>\), is determined by the self-consistent equation and may be derived as \[\Delta_{k}=-\sum_{k^{\prime}}V_{kk^{\prime}}\frac{\Delta_{k^{\prime}}}{2 \varepsilon_{k^{\prime}}}\tanh(\frac{\mathcal{E}_{k^{\prime}}}{2T}) \tag{1}\] Here, the carrier energy \(\mathcal{E}_{k}\) is measured from the Fermi energy. \(\Delta_{k}\)is the superconducting gap\(\ldots\) In our calculation, the multi-interaction model accounted for the Coulomb effect. The mechanisms of superconductors are the attractive electron-phonon interaction \(V_{ph}\) and the repulsive Coulomb interaction \(U_{c}^{\prime}\), with the distinct cutoff energies of Debye phonon (\(\omega_{D}\)) and Coulomb interaction (\(\omega_{c}\)), respectively. It is suggests that the muti-interaction potential model of carrier \(V_{kk^{\prime}}\) are (25,26): \(V_{kk^{\prime}}=-V_{ph}+U_{c}\) for \(0<\left|\varepsilon_{k}\right|<\omega_{D}\), and \(V_{kk^{\prime}}=+U_{c}\) for \(\omega_{D}<\left|\varepsilon_{k}\right|<\omega_{c}\). The superconducting order parameter should be written in the similar behaviour as \(\Delta_{k}=\Delta_{ph}\) for \(0<\left|\varepsilon_{k}\right|<\omega_{D}\), And \(\Delta_{k}=\Delta_{c}\) for \(\omega_{D}<\left|\varepsilon_{k}\right|<\omega_{c}\). To incorporate pressure into our model, we assume that pressure can affect superconductors in two distinct ways: either by altering the density of state or by disturbing the carrier's energy dispersion. The narrow fluctuation constant observed in the form of the delta function also appeared in the density of state under pressure as form [19-23] \[N(\varepsilon)=N(0)(1+\chi\delta(\varepsilon-\varepsilon_{{}_{0}})) \tag{2}\] Here, \(\chi\) is the height of this fluctuation function and the shifted position from the unpressured state is set as \(\varepsilon_{{}_{0}}\) below the Fermi level. The density of state can be reduced to the BCS scenario by setting \(\chi=0\). As has been determined that pressure has an impact on the carrier dispersion relation. Due to the size of the volume distorting the crystal structure, unit cells now contain additional energy from external pressure. Ref.[19, 20, 21, 22, 23] states that they can extend the new state in terms of external pressure (\(p\)). Expanding the new state in a power series of the fraction volume \(\nu\) ( \(\nu=\frac{V}{V_{{}_{0}}}\) ) is the most practical technique to connect to the Murnaghan equation of state [13, 14]. Therefore, if the new stable state of the carrier's dispersion relation is \(\mathcal{E}_{k}\left(p\right)\), this may be extended to become \[\mathcal{E}_{k}\left(p\right)=\mathcal{E}_{k}(0)+p\left[\frac{d\mathcal{E}_{k }\left(p\right)}{dp}\right]_{p=0}+\frac{p^{2}}{2}\left[\frac{d^{2}\mathcal{E}_ {k}\left(p\right)}{dp^{2}}\right]_{p=0}+...\.\] The influence of additional pressure on volume is not a linear term, hence the power order of this relationship is assumed, in accordance with the Murnaghan equation of state [13, 14]. 
We intend for \(p\propto\frac{1}{v^{\beta}}\) to come out in this expansion. And the pseudo Coulomb interaction potential is \({\mu_{c}}^{*}=\frac{-\mu_{c}}{1+\mu_{c}I_{22}}\). We are able to get the formula for the critical temperature as \[T_{c}=1.13\left(\omega_{D}+Q_{c}\Big{(}\frac{1}{v^{\beta}}-1\Big{)}\right)\exp\left[\frac{1}{\frac{\varepsilon_{0}+Q_{c}(\frac{1}{v^{\beta}}-1)}{2T_{c}}}\int_{0}^{\omega_{D}}\frac{\tanh\Big{(}\frac{\varepsilon+Q_{c}(\frac{1}{v^{\beta}}-1)}{2T_{c}}\Big{)}}{\varepsilon+Q_{c}(\frac{1}{v^{\beta}}-1)}\,d\varepsilon\right] \tag{5}\] We can estimate the term involving \(Q_{c}(\frac{1}{v^{\beta}}-1)\) in the integration for two possible scenarios, \[\left|Q_{c}\Big{(}\frac{1}{v^{\beta}}-1\Big{)}\right|>2\,T_{c}\quad\mbox{and}\quad\left|Q_{c}\Big{(}\frac{1}{v^{\beta}}-1\Big{)}\right|<2\,T_{c},\] which provide the solution of the integration \[\int_{0}^{\omega_{D}}d\varepsilon\frac{\tanh\Big{(}\frac{\varepsilon+Q_{v}(\frac{1}{v^{\beta}}-1)}{2T_{c}}\Big{)}}{\varepsilon+Q_{v}(\frac{1}{v^{\beta}}-1)}\quad\mbox{as}\quad\ln\Big{(}\frac{Q_{v}(\frac{1}{v^{\beta}}-1)}{2T_{c}}\Big{)}\quad\mbox{and}\quad\frac{Q_{v}(\frac{1}{v^{\beta}}-1)}{2T_{c}}\,,\ \mbox{respectively}.\] The equation for the critical temperature of a superconductor at high pressure is Eq. (5), which demonstrates the relationship between the critical temperature and the fraction volume. And we have a relationship between fraction volume and pressure in the Murnaghan equation of state, which may relate to the experimental data of superconductors under high pressure. In order to demonstrate a relationship between the critical temperature and external pressure, our calculation employs Eq. (5) and the Murnaghan equation of state. ## 3 Result and discussion In order to understand the connection between the critical temperature and the external pressure, we then estimate the critical temperature of the cuprate superconductors and the hydride superconductors using equation (5) and the Murnaghan equation of state. The Birch-Murnaghan equation of state, a variant of the Murnaghan equation of state that relates the measured pressure to the volume, was the equation of state applied in our calculation. 
We use the Birch-Murnaghan [13, 14] as \[P(v)=\frac{3B_{0}}{2}\left[v^{-\frac{7}{3}}-v^{-\frac{5}{3}}\right]\left\{1+\frac{3}{4}(B_{0}^{\prime}-4)\left[v^{-\frac{2}{3}}-1\right]\right\} \tag{6}\] Here, the volume fraction is defined as \(v=\frac{V}{V_{0}}\), where \(V_{0}\), \(B_{0}\) and \(B_{0}^{\prime}\) are the equilibrium cell volume, the bulk modulus and the derivative of the bulk modulus with respect to pressure. The cuprate superconductor has been one of the most remarkable superconductors over the past ten years. Many physicists have an interest in the parabolic-like critical temperature versus pressure. We begin by applying our model to the cuprate superconductor Tl\({}_{2}\)Ba\({}_{2}\)CaCu\({}_{2}\)O\({}_{8,\mathrm{d}}\), whose unit cell volume changes slightly at high pressure so that the volume fraction is nearly one (the volume varies in the range 370-435 A\({}^{3}\)), and whose data are well established. For the Tl\({}_{2}\)Ba\({}_{2}\)CaCu\({}_{2}\)O\({}_{8,\mathrm{d}}\) superconductor, which has almost all the experimental data, we apply our model to explain this behavior. The bilayer single crystal of the superconductor Tl\({}_{2}\)Ba\({}_{2}\)CaCu\({}_{2}\)O\({}_{8,\mathrm{d}}\) has been studied at pressures up to 30 _GPa_ through investigation of the lattice parameter, unit-cell volume, and critical temperature with no structure change in cell parameters; consequently, no structural phase transition was discovered in this material [3]. Eq. (5) and Eq. (6) are used for the numerical calculation and comparison to experimental data [3, 27, 28]. In Figure 1, we have the Birch-Murnaghan with \(B_{0}=111.7\) and \(B_{0}^{\prime}=4\) [3], and the parameters used are (solid line): \(\chi=\)460, \(\varepsilon_{0}\) -10, \(\lambda=\)0.32, \(\mu=\)0.01, \(\omega_{D}=\)300, \(\omega_{c}-\)350, \(Qe=\)-200, \(\beta=\)3.3. Our calculation produces the parabolic-like behavior and is perfectly consistent with the experimental results. The maximum critical temperature is about 112 K at 7 _GPa_, which agrees with Ref. [3], where the maximum of 114 K is found at 6.8 _GPa_. Figure 1: The critical temperature of the Tl\({}_{2}\)Ba\({}_{2}\)CaCu\({}_{2}\)O\({}_{8,\mathrm{d}}\) superconductor is shown together with the calculation (solid line) and experimental data [3, 27, 28] (solid squares). The H\({}_{3}\)S, LaH\({}_{10}\), and CaH\({}_{6}\) hydride superconductors are of particular interest to us since they can exhibit the highest critical temperatures with a parabolic-like form under high pressure. At ambient pressure, they are virtually in the gas phase, and it is when they transition into the solid phase that superconductivity begins to appear. Since the fraction volumes of hydride superconductors are smaller than 1, we can modify these constraints by determining the appropriate \(\beta\) value of the variable in our \(p\propto\frac{1}{v^{\beta}}\) assumption. The remaining pressure settings are examined until the results of our calculations and the experiments agree. The Debye cutoff is obtained from each hydride's data, and the Coulomb cutoff is set to be greater than the Debye cutoff. The electron-phonon coupling constant was chosen in the weak-coupling regime. And, after doing several sampling calculations, we realized that the Coulomb coupling constant had little effect on our calculations, therefore just a modest value of the Coulomb coupling constant was used. 
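To illustrate how the fits above are produced, the following sketch numerically inverts the Birch-Murnaghan relation of Eq. (6) to obtain the fraction volume \(v\) at a given pressure, which can then be inserted into Eq. (5). The bracketing interval and the sample pressures are illustrative; only the Tl\({}_{2}\)Ba\({}_{2}\)CaCu\({}_{2}\)O\({}_{8,\mathrm{d}}\) values \(B_{0}=111.7\) GPa and \(B_{0}^{\prime}=4\) quoted in the text are used, and this is not the authors' code.

```python
from scipy.optimize import brentq

def birch_murnaghan(v, B0, B0p):
    """Third-order Birch-Murnaghan EOS, Eq. (6): pressure as a function of
    the fractional volume v = V/V0 (pressure in the same units as B0)."""
    return 1.5 * B0 * (v**(-7.0 / 3.0) - v**(-5.0 / 3.0)) * (
        1.0 + 0.75 * (B0p - 4.0) * (v**(-2.0 / 3.0) - 1.0))

def volume_fraction(P, B0, B0p):
    """Invert Eq. (6) numerically: find v such that P(v) = P (bracket assumed)."""
    return brentq(lambda v: birch_murnaghan(v, B0, B0p) - P, 0.3, 1.0)

# sketch: Tl2212 parameters quoted in the text (B0 = 111.7 GPa, B0' = 4)
for P in (5.0, 7.0, 15.0, 30.0):   # pressures in GPa
    v = volume_fraction(P, B0=111.7, B0p=4.0)
    print(f"P = {P:5.1f} GPa  ->  V/V0 = {v:.4f}")
```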
The experimental results for the H\({}_{3}\)S superconductor are given in Figure 2 as solid triangles and squares, and our calculations using Eq. (5) and Eq. (6) are shown as solid lines. This material contains a variety of crystal phase structures. We pay particular attention to the two crystallographic phases in this material that change when the pressure rises from Cccm to Im-3m [8, 29-32]. However, as there are insufficient data to definitively indicate the transition line, a mixed phase is postulated to exist between the two phase regimes. The Birch-Murnaghan parameters \(B_{0}=86.63\) and \(B_{0}^{\prime}=3.9\) [32] were obtained by analyzing the lattice parameter, unit-cell volume, and critical temperature versus pressure up to 220 GPa. The \(\chi=\)600, \(\varepsilon_{0}=\)250, \(\lambda=\)0.3, \(\mu=\)0.01, \(\omega_{D}=\)870, \(\omega_{c}=\)970, \(Qe-\)1.75, \(\beta=\)4.0, \(\beta_{0}-\)10, \(\beta_{0}^{\prime}-\)4.2 are the parameters used in Figure 2. In the mixed and Im-3m phases, our calculation displays the parabolic-like behavior and is highly compatible with the experimental data. The lower critical temperature in the Cccm phase prevents our calculation from fitting the data effectively. In the Cccm phase, we expect that the highest critical temperature will be around 197 K at 140 GPa. The experimental LaH\({}_{10}\) data are compared with our calculation in Figure 3. As pressure rises, there exist three phases: the C\({}_{2}\)/m, mixed, and Fm-3m phases. Experimental data are shown as solid dots [33-35] and the calculations as solid and dashed lines. The Fm-3m phase is a high-symmetry phase that is also found in areas of low pressure. Only the lower pressure area contains the lower phase C2/m. The lattice parameter, unit-cell volume, and critical temperature versus pressure up to 220 GPa were used to determine the Birch-Murnaghan parameters, \(B_{0}\) = 27 and \(B_{0}^{\prime}\) = 4 [12]. There are two lines calculated for the LaH\({}_{10}\) superconductor: a solid line for the greater pressure zone and a dashed line for the lower pressure zone. The parameters used are (solid line): \(\chi\) -420, \(\varepsilon_{0}\) -100, \(\lambda\) -0.5, \(\mu\) -0.01, \(\omega_{D}\) -700, \(\omega_{c}\) -800, \(Qe\) -4.7, \(\beta\) -4.0, \(\beta_{0}\) -57, \(\beta_{0}^{\prime}\) -3.1, and (dashed line): \(\chi\) -520, \(\varepsilon_{0}\) -100, \(\lambda\) -0.42, \(\mu\) -0.01, \(\omega_{D}\) -700, \(\omega_{c}\) -800, \(Qe\) -4.1, \(\beta\) -4.2, \(\beta\) -20, \(\beta_{0}^{\prime}\) -4.0. The calculations and experimental findings were quite consistent. Note that the calculation result in the Fm-3m phase region was parabolic-like and could forecast the highest critical temperature at around 252 K at 143 GPa, which is consistent with the experiment's findings of roughly 250 K at 150-170 GPa. The calculation for the R3m phase, in contrast, appeared to show a linear relationship, with the critical temperature expected to be higher than for the Fm-3m phase. The critical temperature under varying pressure can be effectively matched with the two sets of parameters. The CaH\({}_{6}\) superconductor's calculated and experimental data are displayed in Figure 4 along with the relationship between critical temperature and pressure. In this superconductor, there are two phases known as P21/m and Im-3m [9, 10], which are stable at pressures of 50-100 and 150-200 GPa, respectively. 
The lattice parameter, unit-cell volume, and critical temperature versus pressure up to 220 GPa were analyzed, for which the Birch-Murnaghan parameters are \(B_{0}=221\) and \(B_{0}^{\prime}=3\) [15]. The unit cell volume changes from 24 to 20 A\({}^{3}\) as the pressure ranges from 110 to 220 GPa. Because of the anisotropic stress present, which causes varied distortion in various attempts, a broad range of critical temperatures, between 100 and 220 \(K\), was found. Due to the large range of critical temperatures and the existence of two phase transitions, we divided our calculation into two portions for P\({}_{21}\)/m and Im-3m, respectively: a solid line and a dashed line. Following some adjustment, we can identify the optimal consistency for both the experimental and the computational parts. The parameters used are (solid line): \(\chi\) =550, \(\varepsilon_{0}\) =23, \(\lambda\) =0.33, \(\mu\) =0.01, \(\omega_{D}\) =960, \(\omega_{c}\) =1060, \(Qe\) = -3.7, \(\beta\) = 3.80, \(\beta_{0}\) =97, \(\beta_{0}^{\prime}\)=3.00, and (dashed line): \(\chi\) =380, \(\varepsilon_{0}\) =23, \(\lambda\) =0.375, \(\mu\) =0.01, \(\omega_{D}\) =960, \(\omega_{c}\) = 1060, \(Qe\) = -2.85, \(\beta\) = -3.97, \(\beta_{0}\) = 130, \(\beta_{0}^{\prime}\)=3.00. The critical temperature in the P21/m phase rises as the pressure rises, so a higher critical temperature should be found there. Additionally, the critical temperature for the Im-3m phase seems to be nearly constant, with a slight parabolic-like behavior. In comparison to the experiment, which discovered a maximum critical temperature of 215 \(K\) at 172 \(GPa\) in the Im-3m phase, the maximum critical temperature may be expected to be about 207 \(K\) at 174 \(GPa\). 
Figure 4: The critical temperature of the CaH\({}_{6}\) superconductor was calculated (solid and dashed lines) and measured experimentally (solid squares) [9, 10]. 
## 4 Conclusion The critical temperatures under pressure of H\({}_{3}\)S, LaH\({}_{10}\), CaH\({}_{6}\) and Tl\({}_{2}\)Ba\({}_{2}\)CaCu\({}_{2}\)O\({}_{8,\mathrm{d}}\) are investigated under the constraint of weak coupling. The superconducting mechanism takes into account both the electron-phonon interaction and the Coulomb interaction. The equation for the critical temperature is calculated as a function of the unit cell volume fraction under high pressure. In order to determine the relationship between fraction volume and pressure, the Birch-Murnaghan equation of state is applied. Using this equation, we can investigate the relationship between the superconductor's critical temperature and pressure. Cuprate superconductors and hydride superconductors are the two types of superconductors that we consider. The phase transition in cuprate superconductors is caused by changes in the crystal structure; however, the substance continues to remain in the solid state even as the pressure increases. The phase transition of the hydride superconductor changes under high pressure; specifically, it goes from the gas phase to the solid phase during the process of increasing pressure. Since the fraction volumes of cuprate superconductors and hydride superconductors should be close to 1, we can impose constraints by determining the pressure- and volume-dependent factors. In cuprate superconductors, the experimental data for Tl\({}_{2}\)Ba\({}_{2}\)CaCu\({}_{2}\)O\({}_{8,\mathrm{d}}\) and our calculation are in good agreement. 
The superconducting hydride compounds H\({}_{3}\)S, LaH\({}_{10}\), CaH\({}_{6}\) are investigated. There are separate lower and upper regions. These regions can be described by their parameters, and they can be well-fitted. The maximal critical temperature is predicted to be 112 K at 7 GPa, 197 K at 140 GPa, 252 K at 143 GPa, and 207 K at 174 GPa for the superconductors Tl\({}_{2}\)Ba\({}_{2}\)CaCu\({}_{2}\)O\({}_{8,\mathrm{d}}\), H\({}_{3}\)S, LaH\({}_{10}\), CaH\({}_{6}\).
2301.01281
Turbulent Drag Reduction in Magnetohydrodynamic Turbulence and Dynamo from Energy Flux Perspectives
In this review, we describe turbulent drag reduction in a variety of flows using a universal framework of energy flux. In a turbulent flow with dilute polymers and magnetic field, the kinetic energy injected at large scales cascades to the velocity field at intermediate scales, as well as to the polymers and magnetic field at all scales. Consequently, the kinetic energy flux, $ \Pi_u(k) $, is suppressed in comparison to the pure hydrodynamic turbulence. We argue that the suppression of $\Pi_u(k)$ is an important factor in the reduction of the inertial force $\langle {\bf u \cdot \nabla u} \rangle$ and \textit{turbulent drag}. This feature of turbulent drag reduction is observed in polymeric, magnetohydrodynamic, quasi-static magnetohydrodynamic, and stably-stratified turbulence, and in dynamos. In addition, it is shown that turbulent drag reduction in thermal convection is due to the smooth thermal plates, similar to the turbulent drag reduction over bluff bodies. In all these flows, turbulent drag reduction often leads to a strong large-scale velocity in the flow.
Mahendra K. Verma, Manohar K. Sharma, Soumyadeep Chatterjee
2022-12-29T02:40:33Z
http://arxiv.org/abs/2301.01281v1
# Turbulent Drag Reduction in Magnetohydrodynamic Turbulence and Dynamo from Energy Flux Perspectives ###### Abstract In this review, we describe turbulent drag reduction in a variety of flows using a universal framework of energy flux. In a turbulent flow with dilute polymers and magnetic field, the kinetic energy injected at large scales cascades to the velocity field at intermediate scales, as well as to the polymers and magnetic field at all scales. Consequently, the kinetic energy flux, \(\mathbf{\Pi_{u}(k)}\), is suppressed in comparison to the pure hydrodynamic turbulence. We argue that the suppression of \(\mathbf{\Pi_{u}(k)}\) is an important factor in the reduction of the inertial force \(\mathbf{\langle u\cdot\nabla u\rangle}\) and _turbulent drag_. This feature of turbulent drag reduction is observed in polymeric, magnetohydrodynamic, quasi-static magnetohydrodynamic, and stably-stratified turbulence, and in dynamos. In addition, it is shown that turbulent drag reduction in thermal convection is due to the smooth thermal plates, similar to the turbulent drag reduction over bluff bodies. In all these flows, turbulent drag reduction often leads to a strong large-scale velocity in the flow. **Keywords:** Turbulent drag reduction, Magnetohydrodynamic turbulence, Energy flux, Dynamo, Quasi-static magnetohydrodynamics, Turbulent thermal convection ## 1 Introduction It has been observed that an introduction of polymers and magnetic field to a turbulent flow reduces turbulent drag [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. Turbulence drag is also suppressed over bluff bodies with particular shapes, e.g., aerofoils. This phenomena, known as _turbulent drag reduction_, or _TDR_ in short, depends on many factors--properties of the boundaries and fluids, bulk turbulence, nature of polymers, etc. In this review, using energy flux, we describe a universal framework to explain TDR in polymeric, magnetohydrodynamic (MHD), quasi-static MHD, and stably-stratified turbulence, and in dynamo. A pipe flow exhibits viscous drag at small Reynolds numbers, but it experiences turbulent drag at large Reynolds numbers [12; 13]. It has been observed that an introduction of small amount of polymers in the flow suppresses the turbulent drag up to 80% [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. In Fig. 1, we illustrate the mean normalized velocity profiles (\(V^{+}\)) as a function of normalized distance from the wall (\(y^{+}\)) in a hydrodynamic (HD) flow with and without polymers. The bottom curve with green dots represents \(V^{+}\) for pure HD turbulence and it exhibits Karman's log layer, whereas the chained curve with red squares is for polymeric turbulence and it shows TDR. L'vov _et al._[6] constructed a phenomenological model for the _maximum drag reduction asymptote_ (represented by the chained curve in the figure) that matches with numerical and experimental data quite well. Study of TDR is particularly important due to its wide-ranging practical applications. For example, firefighters mix polymers in water to increase the range of fire-hoses. Also, polymers are used to increase the flow rates in oil pipe, etc. Figure 1: For a wall-bound flow, mean normalized velocity profiles (\(V^{+}\)) as a function of the normalized distance from the wall (\(y^{+}\)). The bottom curve with green dots is for pure HD turbulence, whereas the chained-curve with red squares is for the polymeric turbulence. From L’vov _et al._[6]. Reproduced with permission from APS. 
Bluff bodies too experience viscous and turbulent drag at small and large Reynolds numbers respectively. Turbulent drag over bluff bodies depend on the surface properties, e.g., smoothness and curvature [14; 15]. Keeping these factors in mind, airplanes, automobiles, missiles, and ships are designed to minimize turbulent drag. In a recent paper, Verma _et al._[11] argued that TDR occurs in MHD turbulence analogous to TDR in turbulent flows with dilute polymers. They showed that the kinetic energy (KE) flux (\(\Pi_{u}(k)\)) is suppressed in polymeric and MHD turbulence due to the transfer of energy from the velocity field to polymers and magnetic field respectively. The energy fluxes in polymeric and MHD turbulence have been studied in a number of earlier works [1; 11; 16; 17; 18; 19]. It was argued that the turbulent drag and the nonlinearity \(\langle\mathbf{u}\cdot\nabla\mathbf{u}\rangle\) are proportional to \(\Pi_{u}(k)/U\), where \(\mathbf{u}\) is the velocity field, \(U\) is the large-scale velocity, and \(\langle.\rangle\) represents averaging. Thus, Verma _et al._'s [11] formalism provides a general framework for TDR in variety of flows, including polymeric and MHD turbulence. An introduction of polymers or magnetic field in a turbulent flow enhances the mean flow, but suppresses \(\langle\mathbf{u}\cdot\nabla\mathbf{u}\rangle\)[1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. Verma _et al._[11] observed the above phenomena in a shell model of MHD turbulence. Note that \(\langle\mathbf{u}\cdot\nabla\mathbf{u}\rangle\) and \(\Pi_{u}(k)\) depend critically on the phase relations between the Fourier modes. Verma _et al._[11] argued that the velocity correlations in polymeric and MHD turbulence are enhanced compared to pure HD turbulence. These correlations lead to suppressed \(\langle\mathbf{u}\cdot\nabla\mathbf{u}\rangle\) and \(\Pi_{u}(k)\) in spite of amplification of \(U\). Thus, TDR, energy flux, and enhancement of \(U\) are related to each other. Based on past results, Verma _et al._[11] argued for TDR in quasi-static MHD (QSMHD) turbulence [20; 21]. The Joule dissipation suppresses \(\Pi_{u}(k)\) at all wavenumbers [20; 21; 22; 23], and hence \(\Pi_{u}(k)\) for QSMHD turbulence is lower than the corresponding flux for HD turbulence. In addition, large-scale \(U\) increases with the increase of interaction parameter, thus indicating TDR in QSMHD turbulence. Generation of magnetic field in astrophysical objects, such as planets, stars, and galaxies, are explained using dynamo mechanism [24; 25; 26; 27]. Here, magnetic field grows and saturates at some level due to the self-induced currents. In the present review, we discuss TDR in dynamo using the energy flux. Based on earlier dynamo simulations (e,g., [27; 28]), we show that the fluctuations in the velocity and magnetic fields are suppressed when a large-scale magnetic field emerges in the system. This feature signals TDR in dynamo. Planetary and stellar atmospheres often exhibit stably stratified turbulence. In such flows, lighter fluid is above the heavier fluid with gravity acting downwards [29; 30]. The KE flux in stably stratified turbulence is suppressed, as in polymeric and MHD turbulence. Based on these observations, we argue for TDR in stably stratified turbulence. Researchers have reported that compared to HD turbulence, viscous dissipation rate (\(\epsilon_{u}\)) and thermal dissipation rate (\(\epsilon_{T}\)) are suppressed in turbulent thermal convection. 
For example, Pandey _et al._[31] and Bhattacharya _et al._ [32] showed that \(\epsilon_{u}\sim(U^{3}/d)\mathrm{Ra}^{-0.2}\) and \(\epsilon_{T}\sim(U(\Delta T)^{2}/d)\mathrm{Ra}^{-0.2}\), where \(\Delta T\) is the temperature difference between the top and bottom thermal plates separated by distance \(d\), and \(\mathrm{Ra}\) is the Rayleigh number, which is the ratio of buoyancy and diffusion in thermal convection. In addition, Pandey _et al._[31] observed that \(\left\langle\mathbf{u}\cdot\nabla\mathbf{u}\right\rangle/(Ud/\nu)\approx \mathrm{ReRa}^{-0.14}\), where \(\mathrm{Re}\) is the Reynolds number. Thus, nonlinearity is suppressed in turbulent thermal convection. In this review, we relate the above suppression of nonlinearity and dissipation rates to TDR over bluff bodies. It has been argued that TDR in turbulent convection arises due to large-scale circulation (LSC) over thermal plates, and that the smooth thermal plates affect bulk turbulence. Thus, KE flux and \(\left\langle\mathbf{u}\cdot\nabla\mathbf{u}\right\rangle\) provide valuable insights into the physics of TDR. TDR is also related to the enhanced correlations in the velocity field. The present review focusses on these aspects for a variety of flows--polymeric, MHD, QSMHD, and stably-stratified turbulence; dynamo; and turbulent thermal convection. Here, we focus on bulk turbulence, and avoid discussion on boundary layers and smooth surfaces. The latter aspects are covered in many books and reviews, e.g., [3; 4; 5; 10; 14; 15]. We remark that the energy flux is a well known quantity in turbulence literature [33; 34; 35; 36; 37]. However, the connection between the energy flux and TDR has been brought out only recently [11], and the number of papers highlighting the above connection is relatively limited. The increase in the mean velocity field during TDR is related to relaminarization. Narasimha and Sreenivasan [38] studied relaminarization in stably stratified turbulence, rotating turbulence, and thermal convection, and related it to the reduction in \(\left\langle\mathbf{u}\cdot\nabla\mathbf{u}\right\rangle\). Thus, the mechanism of relaminarization is intimately related to the TDR. An outline of this review is as follows. In Section 2 we briefly review viscous and turbulent drag in a pipe flow and over a bluff body. In Section 3 we describe a general framework for TDR using energy fluxes. In Section 4 we review the energy fluxes in a turbulent flow with dilute polymers and relate it to TDR in the bulk. Section 5 contains a framework of TDR in MHD turbulence via energy fluxes. In Section 6 we describe signatures of TDR in direct numerical simulations (DNS) and shell models of MHD turbulence. Sections 7 and 8 deal with TDR in dynamos and in QSMHD turbulence respectively. In Section 9 we describe TDR in stably stratified turbulence and in turbulent thermal convection. We conclude in Section 10. ## 2 Viscous and turbulent drag in hydrodynamic turbulence The equations for incompressible hydrodynamics are \[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla) \mathbf{u} = -\nabla(p/\rho)+\nu\nabla^{2}\mathbf{u}+\mathbf{F}_{\mathrm{ext}}, \tag{1}\] \[\nabla\cdot\mathbf{u} = 0, \tag{2}\] where \(\mathbf{u},p\) are respectively the velocity and pressure fields; \(\rho\) is the density which is assumed to be unity; \(\nu\) is the kinematic viscosity; and \(\mathbf{F}_{\mathrm{ext}}\) is the external force employed at large scales that helps maintain a steady state. 
An important parameter for the fluid flows is Reynolds number, which is \[\mathrm{Re}=\frac{UL}{\nu}, \tag{3}\] where \(L\) and \(U\) are the large-scale length and velocity respectively. For homogeneous and isotropic turbulence, \(\mathrm{Re}\) is the ratio of the nonlinear term and the viscous term. However, in more complex flows like polymeric turbulence, MHD turbulence, and turbulent convection, \[\frac{\text{Nonlinear term}}{\text{Viscous term}}=f\mathrm{Re}, \tag{4}\] where the prefactor \(f\) may differ from unity and may provide a signature for TDR. For example, \(f\approx\mathrm{Ra}^{-0.2}\) for turbulent convection, where \(\mathrm{Ra}\) is the Rayleigh number [31]. We expect complex \(f\) for MHD and polymeric turbulence. A fluid moving in a pipe of radius \(d\) experiences drag (see Fig. 2). At low Reynolds numbers, this drag is called _viscous drag_. In this case, under steady state, the pressure gradient, \(-\nabla(p/\rho)\), which can be treated as \(\mathbf{F}_{\mathrm{ext}}\), matches with the viscous term, \(\nu\nabla^{2}\mathbf{u}\). Hence, we estimate the viscous drag as [13; 39] \[F_{\mathrm{drag}}\approx\frac{\nu U}{d^{2}}. \tag{5}\] The proportionality constant is of the order of unity. At large Reynolds number, the nonlinear term becomes significant, and hence [12; 13; 14; 15], \[F_{\mathrm{drag}}\approx\frac{U^{2}}{d}+\frac{\nu U}{d^{2}}, \tag{6}\] Figure 2: Schematic illustrations of (a) pipe flow and (b) its viscous flow profile. (c) The profile of the mean velocity in a turbulent pipe flow. apart from the proportionality constants. In the above formula, \(U^{2}/d\) is the turbulent drag that is larger than the viscous drag by a factor of Re. Clearly, the turbulent drag dominates the viscous drag at large Re. Note that the above drag force is in the units of force per unit mass; we will follow this convention throughout the paper. A related problem is the frictional force experienced by a bluff body in a flow. Analogous to a pipe flow, a bluff body experiences viscous drag at small Re, but turbulent drag at large Re. In literature, the drag coefficient is defined as [13; 14] \[C_{d}=\frac{F_{\rm drag}}{\rho U^{2}A}, \tag{7}\] where \(A\) is the area of the bluff body. It is customary to describe fluid flows in Fourier space, where Eqs. (1, 2) get transformed to [35; 36; 37] \[\frac{d}{dt}\mathbf{u}(\mathbf{k})=-i\sum_{\mathbf{p}}\{\mathbf{k}\cdot \mathbf{u}(\mathbf{q})\}\mathbf{u}(\mathbf{p})-i\mathbf{k}p(\mathbf{k})-\nu k ^{2}\mathbf{u}(\mathbf{k})+\mathbf{F}_{\rm ext}(\mathbf{k}), \tag{8}\] where \(\mathbf{k}\), \(\mathbf{p}\), \(\mathbf{q}\) are the wavenumbers with \(\mathbf{k}=\mathbf{p}+\mathbf{q}\); and \(\mathbf{u}(\mathbf{k}),\mathbf{u}(\mathbf{p}),\mathbf{u}(\mathbf{q})\) are the corresponding velocity Fourier modes. An equation for the modal energy \(E_{u}(\mathbf{k})=|\mathbf{u}(\mathbf{k})|^{2}/2\) is [35; 36; 37; 40] \[\frac{d}{dt}E_{u}(\mathbf{k}) = T_{u}(\mathbf{k})+\mathcal{F}_{\rm ext}(\mathbf{k})-D_{u}( \mathbf{k}), \tag{9}\] where \[T_{u}(\mathbf{k}) = \sum_{\mathbf{p}}\Im\left[\{\mathbf{k}\cdot\mathbf{u}(\mathbf{q}) \}\{\mathbf{u}(\mathbf{p})\cdot\mathbf{u}^{*}(\mathbf{k})\}\right], \tag{10}\] \[\mathcal{F}_{\rm ext}(\mathbf{k}) = \Re[\mathbf{F}_{\rm ext}(\mathbf{k})\cdot\mathbf{u}^{*}(\mathbf{ k})],\] (11) \[D_{u}(\mathbf{k}) = 2\nu k^{2}E_{u}(\mathbf{k}). 
\tag{12}\] Here, \(\Re,\Im\) stand respectively for the real and imaginary parts of the argument; \(T_{u}(\mathbf{k})\) is the nonlinear energy transfer to the mode \(\mathbf{u}(\mathbf{k})\); \(D_{u}(\mathbf{k})\) is the energy dissipation rate at wavenumber \(\mathbf{k}\); and \(\mathcal{F}_{\rm ext}(\mathbf{k})\) is the KE injection rate to \(\mathbf{u}(\mathbf{k})\) by the external force \(\mathbf{F}_{\rm ext}(\mathbf{k})\). We assume that the external force injects KE at large scales, e.g., in a wavenumber band \((0,k_{f})\) with small \(k_{f}\). Therefore, the total KE injection rate, \(\epsilon_{\rm inj}\), is \[\int_{0}^{k_{f}}d\mathbf{k}\mathcal{F}_{\rm ext}(\mathbf{k})\approx\epsilon_{ \rm inj}. \tag{13}\] This injected KE cascades to intermediate and small scales as KE flux, \(\Pi_{u}(K)\), which is defined as the cumulative KE transfer rate from the velocity modes inside the sphere of radius \(K\) to velocity modes outside the sphere. In Fig. 3, we illustrate the inner and outer modes as \(\mathbf{u}^{<}\) and \(\mathbf{u}^{>}\) respectively. In terms of Fourier modes, the above flux is [16; 37; 41; 42] \[\Pi_{u}(K) = -\sum_{k\leq K}T_{u}(\mathbf{k})=\sum_{p\leq K}\sum_{k>K}\Im\left[ \{\mathbf{k}\cdot\mathbf{u}(\mathbf{q})\}\{\mathbf{u}(\mathbf{p})\cdot\mathbf{ u}^{*}(\mathbf{k})\}\right], \tag{14}\] where \(\mathbf{q}=\mathbf{k}-\mathbf{p}\). The above energy flux is dissipated in the dissipative range, with the total viscous dissipation rate as \[\epsilon_{u}=\int d\mathbf{k}D_{u}(\mathbf{k})=\int d\mathbf{k}2\nu k^{2}E_{u} (\mathbf{k}). \tag{15}\] At large Reynolds numbers, it has been shown that in the inertial range [33; 35; 36; 43; 44], \[\Pi_{u}(k)\approx\epsilon_{\mathrm{inj}}\approx\epsilon_{u}\approx\frac{U^{3} }{d}. \tag{16}\] That is, the inertial-range energy flux, the viscous dissipation rate, and the energy injection rate are all equal. Note that in the inertial range, \(\Pi_{u}(k)=\epsilon_{\mathrm{inj}}\) due to absence of external force and negligible viscous dissipation [33; 37; 40]. We show later that the magnetic field and polymers, as well as smooth walls, suppress the energy flux relative to \(\epsilon_{\mathrm{inj}}\). We argue that this feature leads to TDR. Figure 3: An illustration of KE flux \(\Pi_{u}(K)\). KE is injected into the small red sphere. \(\Pi_{u}(K)\) is constant in the inertial range, and it is dissipated at small scales with a dissipation rate of \(D_{u}\). From Verma _et al._[11]. Reprinted with permission from AIP. For a steady state, an integration of Eq. (1) over a bluff body yields the following formula for the drag force: \[\mathbf{F}_{\mathrm{drag}}=\int d\mathbf{r}\left[(\mathbf{u}\cdot\nabla)\mathbf{ u}+\nabla(p/\rho)-\nu\nabla^{2}\mathbf{u}\right]. \tag{17}\] The viscous force dominates the inertial term near the surface of a bluff body. Hence, for bluff bodies, the inertial term of the above equation is ignored. Prandtl [15; 45] was first to compute \(\mathbf{F}_{\mathrm{drag}}\) for a bluff body as a sum of viscous drag and adverse pressure gradient. The drag forces for a cylinder and aerofoil are computed in this manner [13; 14; 15]. Computation of \(\mathbf{F}_{\mathrm{drag}}\) for a pipe flow is also quite complex involving many factors--walls, fluid properties, bulk turbulence, Reynolds number, etc. In the present review, we focus on the turbulent drag in bulk where we can ignore the effects of walls. 
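
The flux of Eq. (14) is straightforward to evaluate from gridded velocity data in a pseudo-spectral post-processing step. The following minimal Python sketch is our own illustration (not code from the cited works); it assumes a triply periodic box of side \(2\pi\), normalised modes \(\mathbf{u}(\mathbf{k})=\mathrm{FFT}[\mathbf{u}]/N^{3}\), the equivalent form \(T_{u}(\mathbf{k})=-\Re[\mathbf{u}^{*}(\mathbf{k})\cdot\mathbf{N}(\mathbf{k})]\) with \(\mathbf{N}(\mathbf{k})\) the Fourier transform of \((\mathbf{u}\cdot\nabla)\mathbf{u}\), and it omits dealiasing.

```python
import numpy as np

def kinetic_energy_flux(u):
    """
    Estimate Pi_u(K) of Eq. (14) for a real velocity field u of shape
    (3, N, N, N) on a triply periodic box of side 2*pi.
    Uses T_u(k) = -Re[u*(k) . N(k)], with N(k) the Fourier transform of
    the advection term (u.grad)u, and Pi_u(K) = -sum_{|k|<=K} T_u(k).
    No dealiasing is applied in this sketch.
    """
    N = u.shape[-1]
    k1d = np.fft.fftfreq(N, d=1.0 / N)                    # integer wavenumbers
    KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    K = np.stack([KX, KY, KZ])                            # shape (3, N, N, N)
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2)

    uk = np.fft.fftn(u, axes=(1, 2, 3)) / N**3            # normalised modes u(k)
    # velocity gradients du_i/dx_j from i*k_j*u_i(k), then (u.grad)u in real space
    grad = np.fft.ifftn(1j * K[None, :] * uk[:, None], axes=(2, 3, 4)).real * N**3
    adv = np.einsum("j...,ij...->i...", u, grad)          # u_j d_j u_i
    Nk = np.fft.fftn(adv, axes=(1, 2, 3)) / N**3

    Tu = -np.real(np.einsum("i...,i...->...", np.conj(uk), Nk))  # transfer to u(k)
    shells = np.arange(1, N // 2)
    Pi_u = np.array([-Tu[kmag <= Ks].sum() for Ks in shells])
    return shells, Pi_u
```

In the inertial range of a forced, statistically steady run, this estimate of \(\Pi_{u}(K)\) should approach the injection rate \(\epsilon_{\mathrm{inj}}\) for pure HD turbulence, which is the benchmark against which the suppressed fluxes of the later sections are compared.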
The above simplification enables us to compute turbulent drag in many diverse flows--polymeric turbulence, MHD turbulence, dynamo, liquid metals--using a common framework. We focus on a turbulent flow within a periodic box for which \(\int d\mathbf{r}\nabla(p/\rho)=0\). By ignoring the viscous drag, we deduce the turbulent drag as (see Eqs. (1, 17)) \[\mathbf{F}_{\mathrm{drag}}=\mathbf{F}_{\mathrm{ext}}=\int d\mathbf{r}\left[( \mathbf{u}\cdot\nabla)\mathbf{u}\right]. \tag{18}\] Since the external force is active at large scales, under steady state, \[\left\langle\mathbf{F}_{\mathrm{drag}}\right\rangle_{\mathrm{LS}}\approx \left\langle\left|(\mathbf{u}\cdot\nabla)\mathbf{u}\right|\right\rangle_{ \mathrm{LS}}\approx\left\langle\mathbf{F}_{\mathrm{ext}}\right\rangle, \tag{19}\] where \(\left\langle.\right\rangle_{\mathrm{LS}}\) represents ensemble averaging over large scales. To estimate \(\left\langle\mathbf{F}_{\mathrm{drag}}\right\rangle_{\mathrm{LS}}\), we perform a dot product of Eq. (1) with \(\mathbf{u}\) and integrate it over a wavenumber sphere of radius \(k_{f}\) (forcing wavenumber band) that leads to \[\int_{\mathrm{LS}}d\mathbf{r}[\mathbf{F}_{\mathrm{ext}}\cdot\mathbf{u}]=\int_ {\mathrm{LS}}d\mathbf{r}[\mathbf{F}_{\mathrm{drag}}\cdot\mathbf{u}]=f_{1}UF_{ \mathrm{drag}}, \tag{20}\] with \(f_{1}\approx 1\). Under steady state, using Eqs. (9,14) we deduce that \[\int_{\mathrm{LS}}d\mathbf{r}[\mathbf{F}_{\mathrm{ext}}\cdot\mathbf{u}]= \left\langle\left|\left[(\mathbf{u}\cdot\nabla)\mathbf{u}\right]\cdot\mathbf{ u}\right|\right\rangle_{\mathrm{LS}}=-\int_{0}^{k_{f}}T_{u}(k^{\prime})dk^{ \prime}=\Pi_{u}(k). \tag{21}\] Therefore, \[UF_{\mathrm{drag}}\approx\Pi_{u}\approx\frac{U^{3}}{d}\approx\epsilon_{ \mathrm{inj}}, \tag{22}\] or \[F_{\mathrm{drag}}\approx\frac{\Pi_{u}}{U}\approx\frac{U^{2}}{d}. \tag{23}\] Note that the viscous dissipation can be ignored at large scales. It has been observed that polymers and magnetic field suppress turbulent drag. We detail these phenomena in the subsequent sections. ## 3 General framework for TDR using energy flux In this section, we describe a general framework for TDR in a turbulent flow with a secondary field \(\mathbf{B}\). At present, for convenience, we assume \(\mathbf{B}\) to be a vector, however, it could also be a scalar or a tensor. The present formalism is taken from Verma _et al._[11]. The equations for the velocity and secondary fields are [11; 29; 37; 46]: \[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla) \mathbf{u} = -\nabla(p/\rho)+\nu\nabla^{2}\mathbf{u}+\mathbf{F}_{u}(\mathbf{u },\mathbf{B})+\mathbf{F}_{\mathrm{ext}}, \tag{24}\] \[\frac{\partial\mathbf{B}}{\partial t}+(\mathbf{u}\cdot\nabla) \mathbf{B} = \eta\nabla^{2}\mathbf{B}+\mathbf{F}_{B}(\mathbf{u},\mathbf{B}),\] (25) \[\nabla\cdot\mathbf{u} = 0, \tag{26}\] where \(\mathbf{u},p\) are the velocity and pressure fields respectively; \(\rho\) is the density which is assumed to be unity; \(\nu\) is the kinematic viscosity; \(\eta\) is the diffusion coefficient for \(\mathbf{B}\); and \(\mathbf{F}_{u}\) and \(\mathbf{F}_{B}\) are the force fields acting on \(\mathbf{u}\) and \(\mathbf{B}\) respectively. Note that \(\mathbf{F}_{u}\) and \(\mathbf{F}_{B}\) typically represent interactions between \(\mathbf{u}\) and \(\mathbf{B}\). The external field \(\mathbf{F}_{\mathrm{ext}}\) is employed at large scales of the velocity field to maintain a steady state. Using Eq. 
(24) we derive the following equation for the KE density \(u^{2}/2\) (with \(\rho=1\)): \[\frac{\partial}{\partial t}\frac{u^{2}}{2}+\nabla\cdot\left[\frac{u^{2}}{2} \mathbf{u}\right]=-\nabla\cdot(p\mathbf{u})+[\mathbf{F}_{u}+\mathbf{F}_{ \mathrm{ext}}]\cdot\mathbf{u}-\nu\mathbf{u}\cdot\nabla^{2}\mathbf{u}. \tag{27}\] In Fourier space, the equation for the modal KE, \(E_{u}(\mathbf{k})=|\mathbf{u}(\mathbf{k})|^{2}/2\), is \[\frac{d}{dt}E_{u}(\mathbf{k}) = T_{u}(\mathbf{k})+\mathcal{F}_{u}(\mathbf{k})+\mathcal{F}_{ \mathrm{ext}}(\mathbf{k})-D_{u}(\mathbf{k}), \tag{28}\] where \[T_{u}(\mathbf{k}) = \sum_{\mathbf{p}}\Im\left[\{\mathbf{k}\cdot\mathbf{u}(\mathbf{q} )\}\{\mathbf{u}(\mathbf{p})\cdot\mathbf{u}^{*}(\mathbf{k})\}\right], \tag{29}\] \[\mathcal{F}_{u}(\mathbf{k}) = \Re[\mathbf{F}_{u}(\mathbf{k})\cdot\mathbf{u}^{*}(\mathbf{k})],\] (30) \[\mathcal{F}_{\mathrm{ext}}(\mathbf{k}) = \Re[\mathbf{F}_{\mathrm{ext}}(\mathbf{k})\cdot\mathbf{u}^{*}( \mathbf{k})],\] (31) \[D_{u}(\mathbf{k}) = -2\nu k^{2}E_{u}(\mathbf{k}), \tag{32}\] with \(\mathbf{q}=\mathbf{k}-\mathbf{p}\). We sum Eq. (28) over the \(\mathbf{u}\) modes of the wavenumber sphere of radius \(K\) that yields [37; 40]: \[\frac{d}{dt}\sum_{k\leq K}E_{u}(\mathbf{k}) = \sum_{k\leq K}T_{u}(\mathbf{k})+\sum_{k\leq K}\mathcal{F}_{u}( \mathbf{k})+\sum_{k\leq K}\mathcal{F}_{\mathrm{ext}}(\mathbf{k})-\sum_{k\leq K }D_{u}(\mathbf{k}). \tag{33}\] A physical interpretation of the terms in the right-hand side of Eq. (33) are as follows: 1. \(\sum_{k\leq K}T_{u}(\mathbf{k})\) is the net KE transfer from the \(\mathbf{u}\) modes outside the sphere to the \(\mathbf{u}\) modes inside the sphere due to the nonlinearity \((\mathbf{u}\cdot\nabla)\mathbf{u}\). Equivalently, \(\sum_{k\leq K}T_{u}(\mathbf{k})=-\Pi_{u}(K)\) of Eq. (14). 2. \(\sum_{k\leq K}\mathcal{F}_{u}(\mathbf{k})\) is the total energy transfer rate by the interaction force \(\mathbf{F}_{u}(\mathbf{k})\) to \(\mathbf{u}(\mathbf{k})\) modes inside the sphere. 3. \(\sum_{k\leq K}\mathcal{F}_{\mathrm{ext}}(\mathbf{k})\) is the net KE injected by the external force \(\mathbf{F}_{\mathrm{ext}}\) (red sphere of Fig. 4). For \(K>k_{f}\), \(\sum_{k\leq K}\mathcal{F}_{\mathrm{ext}}(\mathbf{k})=\epsilon_{\mathrm{inj}}\) because \(\mathbf{F}_{\mathrm{ext}}=0\) beyond \(k=k_{f}\). The \(\mathbf{u}^{<}\) modes lose energy to \(\mathbf{u}^{>}\) and \(\mathbf{B}\) modes via nonlinear interactions. The term \(-\sum_{k\leq K}\mathcal{F}_{u}(\mathbf{k})\) of Eq. (33) represents the net energy transfer from the \(\mathbf{u}^{<}\) modes (those inside the sphere) to all the \(\mathbf{B}\) modes (\(\mathbf{B}^{<}\) and \(\mathbf{B}^{>}\)) via the interaction force \(\mathbf{F}_{u}(\mathbf{k})\). We define the corresponding flux \(\Pi_{B}(K)\) as \[\Pi_{B}(K)=-\sum_{k\leq K}\mathcal{F}_{u}(\mathbf{k}). \tag{34}\] Thus, \(\mathbf{u}^{<}\) modes lose energy to \(\mathbf{u}^{>}\) modes, as well as to \(\mathbf{B}\) modes, via nonlinear interactions. In addition, \(\mathbf{u}^{<}\) modes lose energy via viscous dissipation, which is the last term of Eq. (33). Therefore, under steady state, the kinetic energy injected by \(\mathbf{F}_{\mathrm{ext}}\) must match (statistically) with the sum of \(\Pi_{u}(K)\), \(\Pi_{B}(K)\), and the viscous dissipation rate [37; 40]1. That is, Footnote 1: In this paper we do not discuss the energetics of \(\mathbf{B}\) field because TDR is related to the energy fluxes associated with the velocity field. \[\Pi_{u}(K)+\Pi_{B}(K)+\sum_{k\leq K}D_{u}(\mathbf{k})=\epsilon_{\mathrm{inj}}. 
\tag{35}\] In the inertial range where \(D_{u}(\mathbf{k})\approx 0\), we obtain \[\Pi_{u}(K)+\Pi_{B}(K)\approx\epsilon_{\mathrm{inj}}. \tag{36}\] In later sections, we show that \(\Pi_{B}(k)>0\) in MHD, QSMHD, polymeric, and stably-stratified turbulence. Therefore, using Eq. (36) we deduce that for the same injection rate \(\epsilon_{\mathrm{inj}}\), \(\Pi_{u}(k)\) in the mixture (with field \(\mathbf{B}\)) is lower than that in HD turbulence, that is, \[\Pi_{u,\mathrm{mix}}<\Pi_{u,\mathrm{HD}}. \tag{37}\] Now we estimate the drag force in the presence of \(\mathbf{B}\). As discussed below, there are several ways to estimate this drag force. 1. As discussed in Section 2, we average Eq. (24) over small wavenumbers. Using \[\int_{\mathrm{LS}}d\mathbf{r}[\mathbf{F}_{\mathrm{ext}}\cdot\mathbf{u}]=\int_{ \mathrm{LS}}d\mathbf{r}[\mathbf{F}_{\mathrm{drag}}\cdot\mathbf{u}]=f_{2}UF_{ \mathrm{drag,mix}}.\] (38) Under steady state, using Eqs. (9,14) we deduce that \[\int_{\mathrm{LS}}d\mathbf{r}[\mathbf{F}_{\mathrm{ext}}\cdot\mathbf{u}]=-\int_ {0}^{k_{f}}[T_{u}(k^{\prime})+\mathcal{F}_{u}(k^{\prime})]dk^{\prime}=\Pi_{u} (k)+\Pi_{B}(k).\] (39) Hence, \[F_{\mathrm{drag,mix}}\approx\frac{\Pi_{u}+\Pi_{B}}{f_{2}U}\approx\frac{ \epsilon_{\mathrm{inj}}}{f_{2}U}.\] (40) It is observed that in a mixture, \(U\) is typically larger than that in HD turbulence [5; 11]. Computation of \(f_{2}\) may be quite complex, and it is difficult to compare \(f_{1}\) and \(f_{2}\). Still, considering \(U_{\mathrm{mix}}>U_{\mathrm{HD}}\), we expect \(F_{\mathrm{drag,mix}}\) to be weaker than the corresponding drag in HD turbulence. This is the origin of TDR in the bulk when \(\mathbf{B}\) field (polymers or magnetic field) is present. 2. Considering the uncertainties in \(f_{2}\), it is proposed that turbulent drag is proportional to \((\mathbf{u}\cdot\nabla)\mathbf{u}\)[11]. For MHD turbulence, the force \(\mathbf{F}_{u}\), which is the Lorentz force, may be treated separately, and \((\mathbf{u}\cdot\nabla)\mathbf{u}\) may be considered Figure 4: The external force injects KE into the small red sphere with the rate of \(\epsilon_{\mathrm{int}}\). \(\Pi_{u}(K)\) is the KE flux for the velocity wavenumber sphere of radius \(K\) (yellow sphere), and \(\Pi_{B}(K)\) is the net energy transfer from \(\mathbf{u}\) modes inside the sphere to all the \(\mathbf{B}\) modes. The energy flux \(\Pi_{u}(K)\) is dissipated with dissipation rates \(D_{u}\). For small wavenumbers and inertial range, \(\Pi_{u}(K)+\Pi_{B}(K)\approx\epsilon_{\mathrm{int}}\). From Verma _et al._[11]. Reprinted with permission from AIP. as the drag force. This assumption simplifies the calculation with \[F_{\rm drag,mix}\approx\frac{\Pi_{u}}{U}. \tag{41}\] In a typical scenario, \(\Pi_{u,\rm mix}<\Pi_{u,\rm HD}\), and \(U_{\rm mix}>U_{\rm HD}\)[5; 11]. Therefore, we expect that \[F_{\rm drag,mix}<F_{\rm drag,HD}. \tag{42}\] Thus, turbulent drag is reduced in the presence of a secondary fields, such as magnetic field and polymers. Verma _et al._[11] adopted this scheme for the computation of turbulent drag. We will use this scheme throughout the paper. In Fig. 5, we present a schematic diagram illustrating TDR in a pipe flow and in bulk turbulence. An introduction of polymers in a pipe flow weakens the fluctuations and enhances the mean flow (see Fig. 5(a,b)). Similarly, in bulk turbulence, polymers and magnetic field can induce strong large-scale \(U\) and weaken the fluctuations in comparison to HD turbulence (see Fig. 5(c,d)). 
Figure 5: (a) Mean velocity profile (D profile) and fluctuations (green arrows) in a pipe flow without polymers. (b) With dilute polymers, the mean flow is enhanced, but the fluctuations are suppressed. (c) Velocity fluctuations in HD turbulence. (d) With polymers and magnetic field, the fluctuations (green arrows) are suppressed, but the large-scale \(U\) (black arrows) is enhanced. We propose the following drag coefficients to quantify TDR in the bulk: \[\bar{C}_{d1} = \frac{\langle\Pi_{u}\rangle}{U^{3}/L}, \tag{43}\] \[\bar{C}_{d2} = \frac{\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle}{U^{2}/L}, \tag{44}\] where \(L\) is the integral length scale, and \(U\) is the large-scale velocity. We obtain \(\bar{C}_{d1}\approx 1\) and \(\bar{C}_{d2}\approx 1\) for HD turbulence. However, \(\bar{C}_{d1}\) and \(\bar{C}_{d2}\) for a mixture are smaller than those for HD turbulence. In subsequent sections, we will compute the above drag coefficients for a variety of flows, but with an emphasis on MHD and QSMHD turbulence, and dynamo. In the next section, we provide a brief introduction to TDR in a turbulent flow with dilute polymers. ## 4 TDR in flows with dilute polymers via energy flux An introduction of small amount of polymers in a turbulent flow suppresses turbulent drag [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. As discussed in Section 1, TDR in polymeric turbulence depends on the boundaries, bulk turbulence, properties of fluids and polymers, anisotropy, etc. However, in this paper we focus on the TDR due to suppression of KE flux in the presence of polymers. For detailed discussions on TDR due to polymers, refer to the references [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. One of the popular models for polymers is _finitely extensible nonlinear elastic-Peterlin model_ (_FENE-P_) [9; 47]. In this model, the governing equations for the velocity field \(\mathbf{u}\) and configuration tensor \(\mathcal{C}\) are [9; 46; 48] \[\frac{\partial u_{i}}{\partial t}+u_{j}\partial_{j}u_{i} = -\partial_{i}p/\rho+\nu\partial_{jj}u_{i}+\frac{\mu}{\tau_{p}} \partial_{j}(f\mathcal{C}_{ij})+F_{\mathrm{ext},i}, \tag{45}\] \[\frac{\partial\mathcal{C}_{ij}}{\partial t}+u_{l}\partial_{l} \mathcal{C}_{ij} = \mathcal{C}_{il}\partial_{l}u_{j}+\mathcal{C}_{jl}\partial_{l}u_{ i}+\frac{1}{\tau_{p}}[f\mathcal{C}_{ij}-\delta_{ij}],\] (46) \[\partial_{i}u_{i} = 0, \tag{47}\] where \(\rho\) is the mean density of the solvent, \(\nu\) is the kinematic viscosity, \(\mu\) is an additional viscosity parameter, \(\tau_{p}\) is the polymer relaxation time, and \(f\) is the renormalized Peterlin's function. In the above equations, the following forces are associated with \(\mathbf{u}\) and \(\mathcal{C}\) (apart from constants) [3; 40; 47; 10; 37]: \[F_{u,i} = \partial_{j}(f\mathcal{C}_{ij}), \tag{48}\] \[F_{u,i}(\mathbf{k}) = \sum_{\mathbf{p}}\left[ik_{j}f(\mathbf{q})\mathcal{C}_{ij}( \mathbf{p})\right],\] (49) \[\mathcal{F}_{u}(\mathbf{k}) = \Re[F_{u,i}(\mathbf{k})u_{i}^{*}(\mathbf{k})]=-c_{1}\sum_{ \mathbf{p}}\Im\left[k_{j}f(\mathbf{q})\mathcal{C}_{ij}(\mathbf{p})u_{i}^{*}( \mathbf{k})\right], \tag{50}\] where \(\mathbf{q}=\mathbf{k}-\mathbf{p}\), and \(c_{1}\) is a constant. Note that the field \(\mathcal{C}\) replaces \(\mathbf{B}\) of Eqs. (24-26). 
Using the above equations, we derive the energy flux \(\Pi_{\mathcal{C}}(K)\), which is the net energy transfer rate from \(\mathbf{u}^{<}\) to \(\mathcal{C}\), as [37; 40] \[\Pi_{\mathcal{C}}(K)\,=\,\sum_{k\leq K}\sum_{\mathbf{p}}-c_{1}\Im\left[k_{j}f( \mathbf{q})\mathcal{C}_{ij}(\mathbf{p})u_{i}^{*}(\mathbf{k})\right] \tag{51}\] with \(\mathbf{q}=\mathbf{k}-\mathbf{p}\). Valente _et al._[18; 19] analysed the energy fluxes \(\Pi_{u}(k)\) and \(\Pi_{\mathcal{C}}(k)\) in a turbulent flow with dilute polymers and observed that \(\Pi_{\mathcal{C}}(k)>0\). One of their figures illustrating \(\Pi_{u}(k)\) and \(\Pi_{\mathcal{C}}(k)\) is reproduced in Fig. 6[19]. As shown in the figure, for \(\mathrm{De}=16.2\), \(\Pi_{\mathcal{C}}(k)/P\) (\(P=\) total injected power) peaks at approximately \(0.9\) when \(k\eta\approx 0.1\), where \(\eta\) is Kolmogorov's wavenumber. However, \(\Pi_{u}(k)/P\) remains less than \(0.1\) for all \(k\eta\). Valente _et al._[18; 19] also reported that \(\Pi_{u}(k)\) and \(\Pi_{\mathcal{C}}(k)\) depend on the Deborah number, De, which is the ratio of the relaxation time scale of the polymer and the characteristic time scale for the energy cascade. Notably, \(\Pi_{\mathcal{C}}(k)\) is maximum when \(\mathrm{De}\sim 1\). Thus, Valente _et al._[18; 19] showed that \(\Pi_{u}(k)\) is reduced significantly from \(\epsilon_{\mathrm{inj}}\) due to the energy transfer from the velocity field to polymers. That is, \(\Pi_{u}(k)<\epsilon_{\mathrm{inj}}\). Figure 6: For a polymeric flow with \(\mathrm{De}=16.2\), the energy fluxes \(\Pi_{u}(k)\) and \(\Pi_{\mathcal{C}}(k)\) normalized with the KE injection rate \(P\), and dissipation rate \(D_{u}(k)\)[19]. The injected KE, \(P\), is transferred to \(\mathbf{u}^{>}\) and \(\mathcal{C}\) as \(\Pi_{u}(k)\) and \(\Pi_{\mathcal{C}}(k)\) respectively. The rest of the injected energy is dissipated. Adapted from a figure from Valente _et al._[19]. Reprinted with the permission of AIP. Benzi _et al._[7] and Ray and Vincenzi [49] showed that during TDR, the large-scale KE is enhanced compared to HD turbulence. Figure 7 illustrates the energy spectra of Benzi _et al._ for pure HD and polymeric turbulence. In the figure we observe that at small wavenumbers, \(E_{u}(k)\) is larger for polymeric turbulence than that for HD turbulence. Hence, we deduce that large-scale \(U\) is enhanced in the presence of polymers. Thais et al. [50] and Nguyen et al. [51] arrived at similar conclusions using direct numerical simulation of polymeric turbulence. Based on these observations, we deduce that \[\Pi_{u,\mathrm{Polymeric}}<\Pi_{u,\mathrm{HD}}\quad\mathrm{and}\quad U_{ \mathrm{Polymeric}}>U_{\mathrm{HD}}. \tag{52}\] Therefore, using \(F_{\mathrm{drag}}=\Pi_{u}/U\), we deduce that \[F_{\mathrm{drag,Polymeric}}<F_{\mathrm{drag,HD}}. \tag{53}\] Thus, reduction in KE flux leads to a decrease in nonlinearity, and hence, TDR in polymeric turbulence. L'vov et al. [52] and others have observed TDR in flows with bubbles. In a bubbly flow, the KE is transferred to the elastic energy of the bubbles that leads to TDR. We also remark that in the laminar regime, the polymers induce additional drag via the term \(\mu\partial_{j}(f\mathcal{C}_{ij})/\tau_{p}\) of Eq. (45). Hence, polymers enhance the drag in the viscous limit [5]. Also note that in the present review, we focus on TDR in bulk turbulence and have avoided discussions on boundary layers, anisotropy, effects of polymer concentration, etc. 
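
As a rough, illustrative estimate (ours, not a result quoted from the above works): taking the Valente _et al._ numbers of Fig. 6, \(\Pi_{u}\lesssim 0.1\,\epsilon_{\mathrm{inj}}\), while \(U\) is unchanged or enhanced by the polymers, Eq. (41) together with \(\Pi_{u,\mathrm{HD}}\approx\epsilon_{\mathrm{inj}}\) of Eq. (16) gives
\[
\frac{F_{\mathrm{drag,Polymeric}}}{F_{\mathrm{drag,HD}}}\approx\frac{\Pi_{u,\mathrm{Polymeric}}}{\Pi_{u,\mathrm{HD}}}\,\frac{U_{\mathrm{HD}}}{U_{\mathrm{Polymeric}}}\lesssim 0.1,
\]
that is, an order-of-magnitude reduction of the bulk turbulent drag, consistent with Eqs. (52)-(53).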
Earlier, Fouxon and Lebedev [46] had related the equations of a turbulent flow with dilute polymers to those of MHD turbulence. In the next section, we will show that the energy transfers in MHD turbulence are similar to those in polymeric turbulence.

Figure 7: KE spectra for pure HD turbulence (dashed line with circles) and polymeric turbulence (solid line with squares). At small wavenumbers, \(E_{u}(k)\) with polymers is larger than that without polymers. From Benzi _et al._[7]. Reprinted with permission from APS.

## 5 TDR in MHD turbulence via energy flux

A magnetofluid is a quasi-neutral, highly conducting charged fluid, and its dynamics is described by magnetohydrodynamics (MHD). Our universe is filled with magnetofluids, with prime examples being the solar wind, solar corona, stellar convection zones, the interstellar medium, and the intergalactic medium [53; 54; 55]. The equations for incompressible MHD are [53; 54]
\[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u} = -\nabla(p/\rho)+\nu\nabla^{2}\mathbf{u}+\mathbf{F}_{u}(\mathbf{B},\mathbf{B})+\mathbf{F}_{\mathrm{ext}}, \tag{54}\]
\[\frac{\partial\mathbf{B}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{B} = \eta\nabla^{2}\mathbf{B}+\mathbf{F}_{B}(\mathbf{B},\mathbf{u}), \tag{55}\]
\[\nabla\cdot\mathbf{u} = 0, \tag{56}\]
\[\nabla\cdot\mathbf{B} = 0, \tag{57}\]
where \(\mathbf{u},\mathbf{B}\) are the velocity and magnetic fields respectively; \(p\) is the total (thermal + magnetic) pressure; \(\rho\) is the density, which is assumed to be unity; \(\nu\) is the kinematic viscosity; \(\eta\) is the magnetic diffusivity; \(\mathbf{F}_{\mathrm{ext}}\) is the external force employed at large scales; and
\[\mathbf{F}_{u} = (\mathbf{B}\cdot\nabla)\mathbf{B}, \tag{58}\]
\[\mathbf{F}_{B} = (\mathbf{B}\cdot\nabla)\mathbf{u} \tag{59}\]
represent respectively the Lorentz force and the stretching of the magnetic field by the velocity field. Note that \(\mathbf{F}_{u}\) and \(\mathbf{F}_{B}\) induce energy exchange among the \(\mathbf{u}\) and \(\mathbf{B}\) modes. In the above equations, the magnetic field \(\mathbf{B}\) is in velocity units, which is achieved by \(\mathbf{B}_{\mathrm{cgs}}\rightarrow\mathbf{B}_{\mathrm{cgs}}/\sqrt{4\pi\rho}\). The evolution equation for the modal kinetic energy \(E_{u}(\mathbf{k})=|\mathbf{u}(\mathbf{k})|^{2}/2\) is [16; 36; 37; 40; 41; 42; 56]
\[\frac{d}{dt}E_{u}(\mathbf{k}) = T_{u}(\mathbf{k})+\mathcal{F}_{u}(\mathbf{k})+\mathcal{F}_{\mathrm{ext}}(\mathbf{k})-D_{u}(\mathbf{k}), \tag{60}\]
where
\[T_{u}(\mathbf{k}) = \sum_{\mathbf{p}}\Im\left[\{\mathbf{k}\cdot\mathbf{u}(\mathbf{q})\}\{\mathbf{u}(\mathbf{p})\cdot\mathbf{u}^{*}(\mathbf{k})\}\right], \tag{61}\]
\[\mathcal{F}_{u}(\mathbf{k}) = \Re[\mathbf{F}_{u}(\mathbf{k})\cdot\mathbf{u}^{*}(\mathbf{k})]=-\sum_{\mathbf{p}}\Im\left[\{\mathbf{k}\cdot\mathbf{B}(\mathbf{q})\}\{\mathbf{B}(\mathbf{p})\cdot\mathbf{u}^{*}(\mathbf{k})\}\right], \tag{62}\]
\[\mathcal{F}_{\mathrm{ext}}(\mathbf{k}) = \Re[\mathbf{F}_{\mathrm{ext}}(\mathbf{k})\cdot\mathbf{u}^{*}(\mathbf{k})], \tag{63}\]
\[D_{u}(\mathbf{k}) = 2\nu k^{2}E_{u}(\mathbf{k}), \tag{64}\]
with \(\mathbf{q}=\mathbf{k}-\mathbf{p}\). Summing Eq. 
(60) over the modes of the wavenumber sphere of radius \(K\) yields [30; 48; 56]: \[-\frac{d}{dt}\sum_{k\leq K}E_{u}(\mathbf{k}) = -\sum_{k\leq K}T_{u}(\mathbf{k})-\sum_{k\leq K}\mathcal{F}_{u}( \mathbf{k})-\sum_{k\leq K}\mathcal{F}_{\mathrm{ext}}(\mathbf{k})+\sum_{k\leq K }D_{u}(\mathbf{k}) \tag{65}\] \[= \Pi_{u}(K)+\Pi_{B}(K)-\epsilon_{\mathrm{inj}}+\text{total viscous dissipation.}\] Note that \[\Pi_{B}(K)=-\sum_{k\leq K}\mathcal{F}_{u}(\mathbf{k})=\sum_{k\leq K}\sum_{ \mathbf{p}}\Im\left[\{\mathbf{k}\cdot\mathbf{B}(\mathbf{q})\}\{\mathbf{B}( \mathbf{p})\cdot\mathbf{u}^{*}(\mathbf{k})\}\right]. \tag{66}\] In Fig. 4, we illustrate \(\Pi_{B}(K)\) using the red arrows. Under a steady state (\(dE_{u}(\mathbf{k})/dt=0\)), \[\Pi_{u}(K)+\Pi_{B}(K)+\sum_{k\leq K}D_{u}(\mathbf{k})=\epsilon_{\mathrm{inj}}. \tag{67}\] In the inertial range where \(D_{u}(\mathbf{k})\approx 0\), we obtain \[\Pi_{u}(K)+\Pi_{B}(K)\approx\epsilon_{\mathrm{inj}}. \tag{68}\] Following similar lines of arguments as in Section 3, we estimate the turbulent drag in MHD turbulence as \[\left\langle F_{\mathrm{drag,MHD}}\right\rangle\approx\left\langle\left|( \mathbf{u}\cdot\nabla)\mathbf{u}\right|\right\rangle_{\mathrm{LS}}\approx \frac{\Pi_{u}}{U}\approx\frac{\epsilon_{\mathrm{inj}}-\Pi_{B}}{U}. \tag{69}\] Researchers have studied the energy fluxes \(\Pi_{u}\) and \(\Pi_{B}\) in detail for various combinations of parameters--forcing functions, boundary condition, \(\nu\) and \(\eta\) (or their ratio \(\mathrm{Pm}=\nu/\eta\), which is called the _magnetic Prandtl number_). For example, Dar _et al._[16], Debliquy _et al._[57], Mininni _et al._[17], and Kumar _et al._[58; 59] computed the fluxes \(\Pi_{u}\) and \(\Pi_{B}\) using numerical simulations and observed that \(\Pi_{B}>0\) on most occasions. Using numerical simulations, Mininni _et al._[17] showed that \(\mathcal{F}_{u}(\mathbf{k})<0\), and hence \(\Pi_{B}(\mathbf{k})>0\) (see Fig. 8). Hence, using Eq. (69) we deduce that \[\Pi_{u,\mathrm{MHD}}<\Pi_{u,\mathrm{HD}}. \tag{70}\] That is, the KE flux in MHD turbulence is lower than the corresponding flux in HD turbulence (without magnetic field). In addition, the speed \(U\) may increase under the inclusion of magnetic field. Therefore, using \(F_{\mathrm{drag}}=\Pi_{u}/U\), we deduce that \[F_{\mathrm{drag,MHD}}<F_{\mathrm{drag,HD}}. \tag{71}\] In this next section, we will explore whether the above inequality holds in numerical simulations of MHD turbulence. ## 6 Numerical verification of TDR in MHD turbulence Many researchers have simulated MHD turbulence, but TDR in MHD turbulence has not been explored in detail. In this section, we will present numerical results on TDR from direct numerical simulations (DNS) and shell models. MHD turbulence exhibits six energy fluxes that are shown in Fig. 9. These fluxes represent energy transfers from \(u^{<}\) and \(u^{>}\) to \(b^{<}\) and \(b^{>}\)[16; 37; 42]. However, as we discussed in Section 3, the relevant fluxes for TDR are \(\Pi_{u}\) and \(\Pi_{B}\). Also, TDR takes place at large scales, hence, we consider energy fluxes from small wavenumber spheres. In terms of the fluxes of Fig. 9, \[\Pi_{u}(K) = \Pi_{u>}^{u<}(K), \tag{72}\] \[\Pi_{B}(K) = \Pi_{b<}^{u<}(K)+\Pi_{b>}^{u<}(K). \tag{73}\] As discussed in Section 5, \(\Pi_{B}>0\)[57; 58; 59; 60; 16]. Hence, \(\Pi_{u}<\epsilon_{\rm inj}\) that leads to TDR in MHD turbulence. 
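
In the same spirit as the kinetic energy flux, the velocity-to-magnetic transfer \(\Pi_{B}(K)\) of Eq. (66) can be evaluated pseudo-spectrally from \(\mathcal{F}_{u}(\mathbf{k})=\Re[\mathbf{F}_{u}(\mathbf{k})\cdot\mathbf{u}^{*}(\mathbf{k})]\) with \(\mathbf{F}_{u}=(\mathbf{B}\cdot\nabla)\mathbf{B}\). The Python sketch below is our own illustration, with the same \(2\pi\)-box and normalisation assumptions as the earlier snippet and no dealiasing.

```python
import numpy as np

def u_to_b_flux(u, b):
    """
    Estimate Pi_B(K) of Eq. (66): the net energy transfer from the velocity
    modes inside the sphere of radius K to all magnetic modes.
    Pi_B(K) = -sum_{|k|<=K} Re[F_u(k) . u*(k)],  F_u = (B.grad)B (Lorentz force).
    u, b: real fields of shape (3, N, N, N) on a 2*pi periodic box.
    """
    N = u.shape[-1]
    k1d = np.fft.fftfreq(N, d=1.0 / N)
    KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    K = np.stack([KX, KY, KZ])
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2)

    uk = np.fft.fftn(u, axes=(1, 2, 3)) / N**3
    bk = np.fft.fftn(b, axes=(1, 2, 3)) / N**3
    # Lorentz force (B.grad)B computed in real space, then transformed
    gradb = np.fft.ifftn(1j * K[None, :] * bk[:, None], axes=(2, 3, 4)).real * N**3
    Fu = np.einsum("j...,ij...->i...", b, gradb)          # B_j d_j B_i
    Fuk = np.fft.fftn(Fu, axes=(1, 2, 3)) / N**3

    Fu_dot_u = np.real(np.einsum("i...,i...->...", Fuk, np.conj(uk)))  # F_u(k).u*(k)
    shells = np.arange(1, N // 2)
    Pi_B = np.array([-Fu_dot_u[kmag <= Ks].sum() for Ks in shells])
    return shells, Pi_B
```

A positive \(\Pi_{B}(K)\) at small \(K\) is the signature, discussed above, of kinetic energy leaking from the large-scale velocity modes to the magnetic field, and hence of a suppressed \(\Pi_{u}(K)\).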
In this section, we will report the energy fluxes and \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\) for HD and MHD turbulence from DNS and shell models, and compare them to quantify TDR in MHD turbulence. It is important to note that the velocity field receives parts of \(\Pi_{B}\) via the energy fluxes \(\Pi_{u>}^{b<}\) and \(\Pi_{u>}^{b>}\). However, these transfers are effective at intermediate and large wavenumbers. In this review we focus on small wavenumbers, hence we can ignore these energy transfers. In the following subsection, we discuss TDR in DNS of MHD turbulence. ### TDR in direct numerical simulation of MHD turbulence We solve the nondimensional MHD equations (54-57) using pseudo-spectral code TARANG [61; 62; 63] in a cubic periodic box of size \((2\pi)^{3}\). We nondimensionalize velocity, length, and time using the initial rms speed (\(U_{0}\)), box size \((2\pi)\), and the initial eddy turnover time (\(2\pi/U_{0}\)) respectively. We employ the fourth-order Runge-Kutta (RK4) scheme for time marching; Courant-Friedrich-Lewis (CFL) condition for computing the time step \(\Delta t\); and \(2/3\) rule for dealising. We perform our simulations on a \(256^{3}\) grid for \(\mathrm{Pm}=1/3,1,10/3\) (the details in the following discussion). The mean magnetic field \(\mathbf{B}_{0}=0\). Note that the \(256^{3}\) grid resolution is sufficient for computing the large-scale \(\Pi_{u},\Pi_{B}\), and \(\langle\mathbf{u}\cdot\nabla\mathbf{u}\rangle\). In addition, the low grid resolution helps us carry out simulations for many eddy turnover times. For the initial condition, we employ random velocity and magnetic fields at all wavenumbers. For creating such fields, it is convenient to employ Craya-Herring basis [64; 65], whose basis vectors for wavenumber \(\mathbf{k}\) are \[\hat{\mathbf{e}}_{3}(\mathbf{k})=\hat{\mathbf{k}};\ \ \ \ \hat{\mathbf{e}}_{1}( \mathbf{k})=(\hat{\mathbf{k}}\times\hat{\mathbf{n}})/|\hat{\mathbf{k}}\times \hat{\mathbf{n}}|;\ \ \ \ \hat{\mathbf{e}}_{2}(\mathbf{k})=\hat{\mathbf{k}}\times\hat{\mathbf{e}}_{1}( \mathbf{k}) \tag{74}\] with \(\hat{\mathbf{n}}\) along any arbitrary direction, and \(\hat{\mathbf{k}}\) as the unit vector along \(\mathbf{k}\). We choose 3D incompressible flow, hence, \[\mathbf{u}(\mathbf{k})=u_{1}(\mathbf{k})\hat{\mathbf{e}}_{1}( \mathbf{k})+u_{2}(\mathbf{k})\hat{\mathbf{e}}_{2}(\mathbf{k}). \tag{75}\] For random initial velocity with the total kinetic energy as \(E_{u}\), we employ \[u_{1}(\mathbf{k}) = \sqrt{(E_{u}/2N^{3})}\;i\left(\exp(i\phi_{1}(\mathbf{k}))-\exp(i \phi_{2}(\mathbf{k}))\right), \tag{76}\] \[u_{2}(\mathbf{k}) = \sqrt{(E_{u}/2N^{3})}\;\left(\exp(i\phi_{1}(\mathbf{k}))+\exp(i \phi_{2}(\mathbf{k}))\right), \tag{77}\] where \(N^{3}\) is the total number of modes, and the phases \(\phi_{1}(\mathbf{k})\) and \(\phi_{2}(\mathbf{k})\) are chosen randomly from uniform distribution in the band \([0,2\pi]\). The above formulas ensure that the kinetic helicity remains zero. We employ \(E_{u}=0.5\) for our simulation. A similar scheme is adopted for the random magnetic field with the initial magnetic energy as \(0.25\). We carry out the above run for \(\nu=\eta=0.01\), or \(\mathrm{Pm}=1\). We employ random force to the velocity modes in a wavenumber shell \((2,3)\), denoted by \(k_{f}=2\), so as to achieve a steady state [66]. The kinetic-energy injection rate \(\epsilon_{\mathrm{inj}}=0.4\). We carry out the simulation till 29 eddy turnover times. 
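
The random, incompressible initial condition of Eqs. (74)-(77) can be sketched as follows in Python (our own illustration, not the TARANG implementation); for brevity the reality condition \(\mathbf{u}(-\mathbf{k})=\mathbf{u}^{*}(\mathbf{k})\) is not enforced, and modes with \(\hat{\mathbf{k}}\) parallel to \(\hat{\mathbf{n}}\) are simply zeroed.

```python
import numpy as np

def craya_herring_random_modes(N, E_u=0.5, n_hat=(0.0, 0.0, 1.0), seed=0):
    """
    Build random, incompressible Fourier modes following Eqs. (74-77):
    u(k) = u1(k) e1(k) + u2(k) e2(k) in the Craya-Herring basis, with
    random phases phi1, phi2 so that the kinetic helicity vanishes.
    Returns the spectral modes u(k), shape (3, N, N, N), with k.u(k) = 0.
    """
    rng = np.random.default_rng(seed)
    k1d = np.fft.fftfreq(N, d=1.0 / N)
    KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    K = np.stack([KX, KY, KZ])
    kmag = np.sqrt(np.sum(K**2, axis=0))
    kmag[0, 0, 0] = 1.0                                   # avoid division by zero

    khat = K / kmag                                       # e3 = khat, Eq. (74)
    n = np.array(n_hat, dtype=float).reshape(3, 1, 1, 1)
    e1 = np.cross(khat, np.broadcast_to(n, khat.shape), axis=0)
    norm1 = np.linalg.norm(e1, axis=0)
    norm1[norm1 == 0.0] = 1.0                             # modes with khat || n_hat
    e1 = e1 / norm1
    e2 = np.cross(khat, e1, axis=0)

    phi1 = rng.uniform(0.0, 2.0 * np.pi, size=kmag.shape)
    phi2 = rng.uniform(0.0, 2.0 * np.pi, size=kmag.shape)
    amp = np.sqrt(E_u / (2.0 * N**3))
    u1 = amp * 1j * (np.exp(1j * phi1) - np.exp(1j * phi2))   # Eq. (76)
    u2 = amp * (np.exp(1j * phi1) + np.exp(1j * phi2))        # Eq. (77)

    uk = u1[None] * e1 + u2[None] * e2                    # Eq. (75)
    uk[:, 0, 0, 0] = 0.0                                  # no mean flow
    return uk
```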
Note, however, that the flow reaches a steady state in approximately 15 eddy turnover times. At the end of the above simulation, we perform four independent simulations given below. We take the final state of the above run as the initial state (\(t=0\)) for the following simulations. 1. MHD1: \(\nu=0.01\), \(\eta=0.03\), and hence \(\mathrm{Pm}=1/3\). 2. MHD2: \(\nu=0.01\), \(\eta=0.01\), and hence \(\mathrm{Pm}=1\). This is continuation of the run described above. 3. MHD3: \(\nu=0.01\), \(\eta=0.003\), and hence \(\mathrm{Pm}=10/3\). 4. HD: \(\nu=0.01\) with magnetic field turned off. We carry out the HD and MHD2 simulations till 40 eddy turnover times, whereas MHD1 and MHD3 runs till 5 eddy turnover times. Subsequently, we compare the energy fluxes and \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\) of the four runs after they have reached their respective steady states that occur in several eddy turnover times. The Reynolds number (\(\mathrm{Re}=UL/\nu\)) for the steady state of the HD run is 457. For the steady state of the MHD runs with \(\mathrm{Pm}=1/3,1\), and \(10/3\), \(\mathrm{Re}=413\), 347, and 338 respectively, while \(\mathrm{Rm}=137\), 347 and 1127 respectively. In Fig. 10 (left column), we exhibit the time series of KE of the HD run, and as well as KE, magnetic energies (ME), and the total energies of the three MHD runs. The corresponding dissipation rates are exhibited in the right column of Fig. 10. As shown in the figures, all the runs reach steady states after several eddy turnover times. The KE dissipation rate for the HD run increases rapidly to 0.4, which is the KE injection rate (\(\epsilon_{\mathrm{inj}}\)). The KE for the MHD runs with \(\mathrm{Pm}=1/3,1\), and \(10/3\) saturate respectively to approximate values of \(0.65,0.47\) and \(0.41\), but the respective magnetic energies saturate at approximately \(0.07,0.2\) and \(0.26\). Note that energies for the MHD runs exhibit significant fluctuations, however, the dissipation rates of the total energy remain at 0.4. Now, we report the energy spectra for the velocity and magnetic fields for a wavenumber \(k\). Numerically, we compute them using \[E_{u}(k) = \frac{1}{2}\sum_{k-1<|\mathbf{k}^{\prime}|\leq k}|\mathbf{u}( \mathbf{k}^{\prime})|^{2}, \tag{78}\] \[E_{b}(k) = \frac{1}{2}\sum_{k-1<|\mathbf{k}^{\prime}|\leq k}|\mathbf{b}( \mathbf{k}^{\prime})|^{2}. \tag{79}\] In Fig. 11, we exhibit \(E_{u}(k)\) and \(E_{b}(k)\) for the MHD runs, along with \(E_{u}(k)\) for the HD run. These quantities are averaged over several time frames in the steady state. We observe that \(E_{u}(k)\) for the HD run is larger than those for the MHD runs, except at several small wavenumbers for \(\mathrm{Pm}=1/3\) where \(E_{b}(k)>E_{u}(k)\). Figure 10: Left column: (a,c,e) Time series of KE of the HD run (dashed red curve); and KE (solid red curve), magnetic energies (solid green curve), and total energies (solid blue curve) of the MHD runs for \(\mathrm{Pm}=1/3,1,10/3\). Right column: (b,d,f) Corresponding energy dissipation rates with the same notation. Further, for the HD and MHD runs, we report the large-scale velocity \(U\), integral length scales \(L\), and Reynolds numbers based on Taylor microscale, \(\mathrm{Re}_{\lambda}=U\lambda/\nu\), where Taylor microscale \(\lambda=(15\nu U^{2}/\epsilon)^{1/2}\)[35; 37]. 
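
Numerically, the shell sums of Eqs. (78)-(79), together with the large-scale velocity \(U\) and integral length \(L\) defined in the next paragraph, can be obtained with a short post-processing routine. The sketch below is our own illustration, assuming the same normalised Fourier modes \(\mathbf{u}(\mathbf{k})=\mathrm{FFT}[\mathbf{u}]/N^{3}\) as in the earlier snippets.

```python
import numpy as np

def shell_spectrum(uk):
    """
    Shell-summed kinetic energy spectrum of Eq. (78):
    E_u(k) = (1/2) sum_{k-1 < |k'| <= k} |u(k')|^2,
    for normalised Fourier modes uk of shape (3, N, N, N).
    """
    N = uk.shape[-1]
    k1d = np.fft.fftfreq(N, d=1.0 / N)
    KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2)

    modal_E = 0.5 * np.sum(np.abs(uk)**2, axis=0)          # |u(k)|^2 / 2
    kbins = np.arange(1, N // 2 + 1)
    Ek = np.array([modal_E[(kmag > k - 1) & (kmag <= k)].sum() for k in kbins])
    return kbins, Ek

def large_scale_diagnostics(kbins, Ek):
    """
    U and L from the spectrum, following Eqs. (80)-(81):
    U = [ (2/3) sum_k E(k) ]^(1/2),  L = sum_k k^{-1} E(k) / sum_k E(k).
    """
    U = np.sqrt(2.0 / 3.0 * Ek.sum())
    L = (Ek / kbins).sum() / Ek.sum()
    return U, L
```

With \(\langle\Pi_{u}\rangle\) from the flux routine and \(U\), \(L\) from these diagnostics, the drag coefficient \(\bar{C}_{d1}\) of Eq. (43) follows directly as \(\langle\Pi_{u}\rangle/(U^{3}/L)\).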
Following Sreenivasan [43], we compute \(U\) as the rms value for each component of the velocity field, or \[U=\left[\frac{2}{3}\int dkE(k)\right]^{1/2}, \tag{80}\] whereas the integral length \(L\) is computed using \[L=\frac{\int dkk^{-1}E(k)}{\int dkE(k)}. \tag{81}\] We quantify \(U\) in three ways: \(U_{\mathrm{rms}}\); and \(U(K=1)\) and \(U(K=2)\), which are computed using the KE in the wavenumber spheres of radii 1 and 2 respectively. We list \(U_{\mathrm{rms}}\) in Table 1. In Fig. 12, we exhibit the time series of \(U_{\mathrm{rms}}\), \(U(K=1)\), \(U(K=2)\), \(L\), and \(\mathrm{Re}_{\lambda}\) for the four runs. We observe that \(U_{\mathrm{rms}}\), \(U(K=1)\), and \(U(K=2)\) for the MHD runs are smaller than the corresponding quantities for the HD run, except for MHD1 (\(\mathrm{Pm}=1/3\)) where \(U(K=1)\) is comparable to that for the HD run. Consequently, \(\mathrm{Re}_{\lambda}\) for MHD1 is close to that for the HD run, but \(\mathrm{Re}_{\lambda}\) for the other two MHD runs are smaller than those for the Figure 11: (a,b,c) For MHD runs with \(\mathrm{Pm}=1/3,1,10/3\), the KE spectra (solid red curve) and the magnetic energy spectra (solid green curve). We also exhibit the plots of the KE spectra of the HD run (dashed red curve). HD run. The integral lengths \(L\) for the three MHD runs are larger than the corresponding \(L\) for the HD run. Hence, the velocity fields are more ordered in the MHD runs compared to the HD run. Next, we compute \(\Pi_{u}(K)\) for the HD and MHD runs, as well as \(\Pi_{B}(K)\) for the MHD runs. These fluxes exhibit significant fluctuations, hence we average over several time frames in the steady state. The fluxes, shown in Fig 13, clearly show that \(\Pi_{B}>0\), indicating energy transfers from the velocity field Figure 12: Time evolution of rms velocity (\(U_{\rm rms}\)), \(U(K=1)\), \(U(K=2)\), integral length scale (\(L\)), and \({\rm Re}_{\lambda}\) for the HD run (dashed red curve) and the MHD runs (solid red curve) for \({\rm Pm}=1/3,1,10/3\). \(U(K=1)\) and \(U(K=2)\) are computed using the KE contained in the waveumber spheres of radii 1 and 2 respectively. to magnetic field at all scales, and that \[\Pi_{u,\rm MHD}<\Pi_{u,\rm HD}. \tag{82}\] We compute the drag coefficient \(\bar{C}_{d1}\), which is defined in Eq. (43) as \(\left\langle\Pi_{u}\right\rangle/(U_{\rm rms}^{3}/L)\), and exhibit its time series in Fig. 14. In Table 1, we list the average values of \(\bar{C}_{d1}\) for the steady state. We observe that \(\bar{C}_{d1}\) for the steady \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & & & & & & \(K=1\) & & \(K=2\) \\ & Pm & \(\left\langle\Pi_{u}\right\rangle\) & \(U_{\rm rms}\) & \(\left\langle\bar{C}_{d1}\right\rangle\) & \(\left\langle|({\bf u}\cdot\nabla){\bf u}|\right\rangle\) & \(\left\langle\bar{C}_{d2}\right\rangle\) & \(\left\langle|({\bf u}\cdot\nabla){\bf u}|\right\rangle\) & \(\left\langle\bar{C}_{d2}\right\rangle\) \\ \hline HD & - & 0.35 & 0.72 & 0.58 & 0.1 & 0.13 & 0.3 & 0.37 \\ MHD1 & 1/3 & 0.28 & 0.66 & 0.65 & 0.07 & 0.11 & 0.3 & 0.46 \\ MHD2 & 1 & 0.25 & 0.55 & 0.98 & 0.06 & 0.13 & 0.22 & 0.49 \\ MHD3 & 10/3 & 0.17 & 0.53 & 0.8 & 0.04 & 0.09 & 0.17 & 0.41 \\ \hline \hline \end{tabular} \end{table} Table 1: For MHD runs with Pm = 1/3, 1, 10/3, numerical values of average KE flux (\(\left\langle\Pi_{u}\right\rangle\)) in the inertial range, rms velocity (\(U_{\rm rms}\)), and \(\left\langle\bar{C}_{d1}\right\rangle\). 
We also list \(\left\langle|({\bf u}\cdot\nabla){\bf u}|\right\rangle\) and \(\left\langle\bar{C}_{d2}\right\rangle\) for the wavenumber spheres of radii \(K=1\) and \(K=2\). The table contains the corresponding quantities for the HD run. For all the runs, \(\epsilon_{\rm inj}=0.4\) Figure 13: (a,b,c) Plots \(\Pi_{u}(K)\) (solid red curve) and \(\Pi_{B}(K)\) (solid green curve) for the MHD runs with Pm = \(1/3,1,10/3\). Plots also illustrate \(\Pi_{u}(K)\) (dashed red curve) for the HD run. state of the HD run is consistent with the results of Sreenivasan [43], thus validating our code and diagnostics. However, \(\bar{C}_{d1}\) for the steady states of the three MHD runs are larger than that for the HD run. This is because the decrease in \(U_{\rm rms}^{3}\) for the MHD runs overcompensates the decrease in \(\Pi_{u}(K)\). Now, we examine the nonlinear term \(N_{u}\) for the HD and MHD runs. Since the drag force is effective at large scales, we estimate \(N_{u}\) by its rms value for a small wavenumber sphere of radius \(K\), that is, \[\langle|\left(\mathbf{u}\cdot\boldsymbol{\nabla}\right)\mathbf{u}|\rangle_{ \rm LS}=N_{u}(K)=\sqrt{\sum_{k\leq K}|\mathbf{N}_{u}(\mathbf{k})|^{2}}. \tag{83}\] In particular, we choose \(K=1\) and \(K=2\). In Fig. 15(a,b), we illustrate the time series of \(N_{u}(K)\) for the HD run (dashed red curve) and the MHD runs (solid red curve) for \(K=1\) and \(K=2\). In Table 1, we list the average values of \(N_{u}(K)\) for all the runs. We observe that \(N_{u}(K)\) for the three MHD runs are smaller than \(N_{u}(K)\) for the HD counterpart. Hence, there is a reduction in \(\langle|\left(\mathbf{u}\cdot\boldsymbol{\nabla}\right)\mathbf{u}|\rangle_{ \rm LS}\) for MHD turbulence compared to HD turbulence, signalling TDR in MHD turbulence. After this, we compute the drag reduction coefficient \(\bar{C}_{d2}\), which is defined in Eq. (44) as \(\langle|\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}|\rangle_{\rm LS}/\left( U_{\rm rms}^{2}/L\right)\). The time series of \(\bar{C}_{d2}\) for \(K=1\) and \(K=2\) are plotted in Figure 16, and their average values Figure 14: (a,b,c) Time evolution of the drag reduction coefficient \(\bar{C}_{d1}\) for the HD run (dashed red curve) and the MHD runs (solid red curve) with \(\mathrm{Pm}=1/3,1,10/3\). are listed in Table 1. We observe that \(\bar{C}_{d2}(K=1)\) for the MHD runs with \(\mathrm{Pm}=1/3\) and \(10/3\) are smaller than that for the HD run for \(t\gtrapprox 2\). For the other cases, \(\bar{C}_{d2}\) for MHD runs are larger than those for the HD run. Thus, for \(1/3\leq\mathrm{Pm}\leq 10/3\), \(\Pi_{u}\) and \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\) for the MHD runs are smaller than the corresponding values for the HD run. For \(K=1\), the drag coefficient \(\bar{C}_{d2}\) exhibits similar behaviour for \(\mathrm{Pm}=1/3\) and \(10/3\), but not for \(\mathrm{Pm}=1\). This is in contrast to \(\bar{C}_{d1}\), which is typically larger for MHD runs than that for the corresponding HD runs. Figure 15: (a,b,c) Plots of the time series of nonlinear term (\(N_{u}\)) for spheres of radii (a) \(K=1\) and (b) \(K=2\) for the HD run (dashed red curve) and the MHD runs (solid red curve) with \(\mathrm{Pm}=1/3,1,10/3\). We will show in Section 8 that QSMHD turbulence, which corresponds to \(\mathrm{Pm}=0\), exhibits larger \(U\) than the respective HD turbulence. Hence, we expect that MHD runs with very small \(\mathrm{Pm}\) will yield larger \(U\) than the corresponding HD runs. 
This conjecture needs to be verified in future. In addition, dynamo simulations exhibit enhancement in \(U\) on the emergence of a large-scale magnetic field (see Section 7). We will discuss these issues in later sections. Figure 16: (a,b,c) Time evolution of drag reduction coefficient \(\bar{C}_{d2}\) for sphere of radii (a) \(K=1\), and (b) \(K=2\) for HD turbulence (dashed red curve) and MHD turbulence (solid red curve) with \(\mathrm{Pm}=1/3,1,10/3\). In summary, DNS of MHD turbulence exhibits reduction in \(\Pi_{u}(k)\) and \(\langle|(\mathbf{u}\cdot\boldsymbol{\nabla})\,\mathbf{u}|\rangle_{\text{LS}}\) in comparison to HD turbulence. However, we do not observe enhancement in \(U\) in the MHD runs, at least for \(1/3\leq\text{Pm}\leq 10/3\). We conjecture that MHD runs with very small Pm may exhibit enhancement in \(U\). After the above discussion on DNS results on TDR in MHD turbulence, in the next subsection, we will discuss TDR in the shell model of MHD turbulence. ### Numerical verification of TDR in shell models of MHD turbulence In comparison to DNS, shell models have much fewer variables, hence they are computationally faster than DNS. Therefore, shell models are often used to study turbulence, especially for extreme parameters. Beginning with Gledzer-Ohkitani-Yamada (GOY) shell model for HD turbulence [67; 68; 69], researchers have developed several shell models for MHD turbulence [70; 71; 72; 73]. In this subsection, we report TDR in a shell model of MHD turbulence [11]. Verma _et al._ employed a revised version of GOY shell model and computed the drag forces and nonlinear terms for the HD and MHD runs. They showed that the turbulent drag in MHD turbulence is indeed reduced compared to HD turbulence. In a shell model of turbulence, all the Fourier modes in a wavenumber shell are represented by a single variable. A MHD shell model with \(N\) shells has \(N\) velocity and \(N\) magnetic shell variables that are coupled nonlinearly. The corresponding HD shell model has \(N\) velocity shell variables. In this subsection, we present the results of the shell model of Verma _et al._[11]. Verma _et al._[11] employed a shell model with 36 shells, with random forcing employed at shells \(n=1\) and 2 such that the KE injection rate is maintained at a constant value [74]. They performed three sets of HD and MHD simulations with KE injection rates \(\epsilon_{\text{inj}}=0.1,1.0\) and \(10.0\), and \(\nu=\eta=10^{-6}\). For time integration, they used Runge-Kutta fourth order (RK4) scheme with a fixed \(\Delta t\). For \(\epsilon_{\text{inj}}=0.1\) and \(1.0\), they chose \(\Delta t=5\times 10^{-5}\), but for \(\epsilon_{\text{inj}}=10.0\), they took \(\Delta t=1\times 10^{-5}\). The numerical results are summarized in Table 2. They carried out the HD and MHD simulations up to 1000 eddy turnover time. For further details on the model and the numerical method, refer to Verma _et al._[11]. Both HD and MHD simulations reached their respective steady states after approximately 200 eddy turnover time. Interestingly, Verma _et al._[11] observed that for the same \(\epsilon_{\text{inj}}\), the KE and \(U\) for MHD turbulence are larger than those for HD turbulence (see Table 2). These observations clearly demonstrate an enhancement of \(U\) in MHD turbulence compared to HD turbulence, as is the case for turbulent flows with dilute polymers. The increase in \(U\) for the MHD runs compared to the HD runs has its origin in the energy spectra. 
Verma _et al._[11] computed the average KE spectra \(E_{u}(k)\) for the HD and MHD runs. These spectra, shown in Fig. 17, exhibit Kolmogorov's \(k^{-5/3}\) spectrum. For a given \(\epsilon_{\text{inj}}\), \(E_{u}(k)\) plots for the HD and MHD runs almost overlap with each other, except for small wavenumbers where \(E_{u}(k)\) for the MHD runs are larger than the HD counterpart. Since the energy is concentrated at small wavenumbers, we observe that \(U_{\rm MHD}>U_{\rm HD}\). This is in sharp contrast to DNS results of Section 6 where \(U\) and \(E_{u}(k)\) of the MHD runs with moderate Pm are smaller than the corresponding values for the HD runs. However, in dynamo simulations, we do observe that \(U\) of MHD turbulence could be larger than that for HD turbulence; this topic will be discussed in the next section. Next, using the numerical data of the shell model, Verma _et al._[11] estimated the rms values of \(({\bf u}\cdot\nabla){\bf u}\) for the HD and MHD runs using \[\langle|({\bf u}\cdot\nabla){\bf u}|\rangle=\left(\sum_{n}\lvert N_{n}[u,u] \rvert^{2}\right)^{1/2}. \tag{84}\] To suppress the fluctuations, averaging was performed over a large number of states. As listed in Table 2, \(\langle|({\bf u}\cdot\nabla){\bf u}|\rangle\) for the MHD runs are suppressed \begin{table} \begin{tabular}{l c c c c c c} \hline & \(\epsilon_{\rm inj}\) & \(\Pi_{u}\) & \(U\) & \(\langle|({\bf u}\cdot\nabla){\bf u}|\rangle\) & \(\bar{C}_{d1}\) & \(\bar{C}_{d2}\) \\ \hline HD & 0.1 & 0.1 & 0.87 & 8.77 & 0.15 & 11.6 \\ MHD & 0.1 & 0.02 & 0.92 & 4.17 & 0.026 & 4.93 \\ \hline HD & 1.0 & 1.0 & 1.88 & 47.48 & 0.15 & 13.4 \\ MHD & 1.0 & 0.21 & 2.02 & 23.79 & 0.026 & 5.83 \\ \hline HD & 10.0 & 10.0 & 3.95 & 271.88 & 0.16 & 17.4 \\ MHD & 10.0 & 2.06 & 4.33 & 136.44 & 0.025 & 7.28 \\ \hline \end{tabular} \end{table} Table 2: For the shell model runs of HD and MHD turbulence with \(\epsilon_{\rm inj}=0.1,1.0,10.0\), numerical values of inertial-range KE flux \(\Pi_{u}\), rms velocity \(U\), \(\langle|({\bf u}\cdot\nabla){\bf u}|\rangle=(\sum_{n}\lvert N_{n}[u,u]\rvert^{2 })^{1/2}\), \(\bar{C}_{d1}\), and \(\bar{C}_{d2}\). [11]. Figure 17: Plots of KE spectra \(E_{u}(k)\) for the shell model runs with \(\epsilon_{\rm inj}=0.1\) (red), \(\epsilon_{\rm inj}=1.0\) (green) and \(\epsilon_{\rm inj}=10.0\) (blue). The dashed and solid curves represent the \(E_{u}(k)\) for the MHD and HD runs respectively. Kolmogorov’s \(-5/3\) scaling (black) fits well in the inertial range for all the runs. From Verma _et al._[11]. Reproduced with permission from AIP. compared to the corresponding HD runs. These results reinforce the fact that the nonlinearity \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\) depends critically on the phases of the Fourier modes; larger \(U\) does not necessarily imply larger \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\). We remark that averaging over the small \(n\) would have been more appropriate for the estimation of \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\), as was done for the DNS. Verma _et al._[11] also computed the average KE fluxes for the HD and MHD runs [37; 73]. These fluxes are illustrated in Fig. 18, and their average values in the steady state are listed in Table 2. The figure illustrates that for a given \(\epsilon_{\rm inj}\), the MHD run has a lower KE flux than corresponding HD run. 
This is consistent with the suppression of \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\); lower \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\) leads to lower KE flux. In addition, we compute \(\bar{C}_{d1}\) and \(\bar{C}_{d2}\) using the values of Table 2 and \(L=1\). Clearly, \(\bar{C}_{d1}\) and \(\bar{C}_{d2}\) for the MHD runs are lower than those for the corresponding HD runs, thus indicating TDR in MHD turbulence. Thus, DNS and the shell model results illustrate that MHD turbulence has lower \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\) and lower \(\Pi_{u}(k)\) compared to HD turbulence. These results demonstrate TDR in MHD turbulence. Note, however, that in DNS, \(U\) for the MHD runs with \(1/3\leq\mathrm{Pm}\leq 10/3\) is smaller than the corresponding \(U\) for the HD runs, but it is the other way round in the shell model. As argued in Section 6, we expect that \(U\) for MHD runs with very small \(\mathrm{Pm}\) would be larger than \(U\) for the HD runs. In the next section we will describe TDR in dynamos. ## 7 TDR in Dynamos Magnetic field generation, or the _dynamo process_, in astrophysical objects is an important subfield of MHD. In the dynamo process, the velocity field is forced mechanically, or by convection induced via temperature and/or concentration gradients. Rotation too plays an important role in dynamos. There are many books and papers written on dynamos; see, e.g., [24; 25]. In this section, we will discuss only a handful of dynamo studies that are related to TDR. Figure 18: Plots of \(\Pi_{u}(k)\) for \(\epsilon_{\rm inj}=0.1\) (red), \(\epsilon_{\rm inj}=1.0\) (green) and \(\epsilon_{\rm inj}=10.0\) (blue). The dashed curves represent \(\Pi_{u}(k)\) for the HD runs, whereas the solid curves indicate the same for the MHD runs. From Verma _et al._[11]. Reproduced with permission from AIP. Yadav _et al._[27] simulated the Taylor-Green dynamo for magnetic Prandtl number \(\mathrm{Pm}=0.5\). They reported many interesting properties, including subcritical dynamo transition, as well as steady, periodic, quasi-periodic, and chaotic dynamo states. Let us focus on an interesting feature of this dynamo that is related to TDR. In Fig. 19 we exhibit the magnitudes of the velocity and magnetic fields for the forcing amplitude \(F_{0}=15.2\). Before the dynamo transition, the velocity field is quite turbulent, as shown in Fig. 19(a). However, after the dynamo transition, or the emergence of the magnetic field, both the velocity and magnetic fields, shown in Fig. 19(b,c), become more ordered compared to the pure HD state of Fig. 19(a). Yadav _et al._ observed similar features at several other \(F_{0}\)'s. For example, at \(F_{0}=15.8\), after the emergence of the magnetic field, the velocity fluctuations are suppressed, and the velocity and magnetic fields become quite coherent (see Fig. 20). The emergence of an ordered velocity field is akin to the enhancement of the mean velocity in a pipe flow with polymers. The aforementioned simulation of Yadav _et al._[27] is somewhat idealized in comparison to spherical geo- and solar dynamos with rotation and thermal convection at extreme parameters. Interestingly, spherical dynamos share certain common features with the Taylor-Green dynamo. As shown in Fig. 
21, the velocity field of spherical dynamo [28] is organized in vertical columns, which Figure 19: For the Taylor-Green dynamo with the forcing amplitude \(F_{0}=15.2\), (a) 3D plot of the spatially chaotic velocity field for a no-dynamo state; (b) ordered velocity field for a dynamo state arising due to the suppression of chaos in the presence of a finite mean magnetic field; (c) ordered magnetic field. From Yadav _et al._[27]. Reprinted with the permission of APS. is also a feature of rotating turbulence [29; 75]. It is possible that thermal convection and magnetic field too contribute to the structural organization of the flow; this feature however needs a careful examination. Even though \(\langle|\mathbf{u}\cdot\nabla\mathbf{u}|\rangle\) and the energy fluxes for dynamos have been studied widely (e.g., [25; 42; 58]), TDR in dynamos has not been analyzed in detail. It is hoped that a systematic study of TDR in dynamos would be performed in future. In the next section, we describe TDR in QSMHD turbulence. Figure 21: The radial component of the velocity field in a numerical simulation of geodynamo by Olson et al. [28]. From Olson et al. [28]. Reproduced with permission from John Wiley & Sons. Figure 20: Plots of the total KE (top panel) and the total ME (bottom panel) for Taylor-Green dynamo with \(F_{0}=15.8\). We observe ordered velocity and magnetic fields after the onset of dynamo (time \(>3000\) units). From Yadav _et al._[27]. Reprinted with the permission of APS. ## 8 TDR in QSMHD turbulence via energy flux Liquid metals have small magnetic Prandtl number (Pm), and they are described using QSMHD equations, which are a limiting case of MHD equations [20; 21; 76]. The equations for QSMHD with a strong external magnetic field \(\mathbf{B}_{0}\) are [20; 21; 76] \[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla) \mathbf{u} = -\nabla(p/\rho)-\frac{\sigma}{\rho}\Delta^{-1}[(\mathbf{B}_{0} \cdot\nabla)^{2}\mathbf{u}]+\nu\nabla^{2}\mathbf{u}+\mathbf{F}_{\mathrm{ext}}, \tag{85}\] \[\nabla\cdot\mathbf{u} = 0, \tag{86}\] where \(\sigma\) is the electrical conductivity, and \(\Delta^{-1}\) is the inverse Laplacian operator. In Fourier space, a nondimensionalized version of QSMHD equations is \[\frac{d}{dt}\mathbf{u}(\mathbf{k}) = -i\sum_{\mathbf{p}}\{\mathbf{k}\cdot\mathbf{u}(\mathbf{q})\} \mathbf{u}(\mathbf{p})-i\mathbf{k}p(\mathbf{k})/\rho-N(\cos^{2}\theta)\mathbf{ u}(\mathbf{k}) \tag{87}\] \[-\nu k^{2}\mathbf{u}(\mathbf{k})+\mathbf{F}_{\mathrm{ext}}( \mathbf{k}),\] \[\mathbf{k}\cdot\mathbf{u}(\mathbf{k}) = 0, \tag{88}\] where \(N\) is the _interaction parameter_, and \(\theta\) is the angle between the wavenumber \(\mathbf{k}\) and \(\mathbf{B}_{0}\). The interaction parameter \(N\) is the ratio of the Lorentz force and nonlinear term \((\mathbf{u}\cdot\nabla)\mathbf{u}\), or \[N=\frac{\sigma B_{0}^{2}L}{\rho U}. \tag{89}\] Using Eq. (87), we derive an equation for the modal energy as \[\frac{d}{dt}E_{u}(\mathbf{k}) = T_{u}(\mathbf{k})-2NE_{u}(k)\cos^{2}\theta+\mathcal{F}_{\mathrm{ ext}}(\mathbf{k})-D_{u}(\mathbf{k}), \tag{90}\] where \(T_{u}(\mathbf{k})\) is defined in Eq. (10), and the dissipation induced by Lorentz term is [21; 76] \[\mathcal{F}_{u}(\mathbf{k}) = -2NE_{u}(\mathbf{k})\cos^{2}\theta<0. \tag{91}\] Hence, the magnetic field induces additional dissipation in QSMHD turbulence. Equation (91) represents the energy transfers from the velocity field to the magnetic field at a wavenumber \(\mathbf{k}\). 
A sum of \(\mathcal{F}_{u}(\mathbf{k})\) over a wavenumber sphere of radius \(K\) yields the following expression for the energy flux \(\Pi_{B}(K)\): \[\Pi_{B}(K)=-\sum_{k\leq K}\mathcal{F}_{u}(\mathbf{k})=\sum_{k\leq K}2NE_{u}(\mathbf{k})\cos^{2}\theta>0. \tag{92}\] Thus, the Lorentz force transfers the kinetic energy to the magnetic energy, which is immediately dissipated by the Joule dissipation; this feature is due to \(\mathrm{Pm}=0\). As a consequence, for an injection rate \(\epsilon_{\mathrm{inj}}\), \(\Pi_{u}(K)\) of a QSMHD run is suppressed compared to \(\Pi_{u}(K)\) of the corresponding HD run. Hence, in the inertial range, \[\Pi_{u}<\epsilon_{\mathrm{inj}}. \tag{93}\] Therefore, following the same line of arguments as in earlier sections, we deduce that turbulent drag is suppressed in QSMHD turbulence. In addition, the velocity fields of the QSMHD runs are less random (or more ordered) compared to the corresponding HD runs, thus suppressing \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\). Therefore, we expect the turbulent drag in QSMHD turbulence to be smaller than the corresponding HD counterpart. In the following discussion, we will describe numerical results that are consistent with the above predictions. Reddy and Verma [22] simulated QSMHD turbulence in a periodic box for \(N\) ranging from \(1.7\) to \(220\). They employed a constant KE injection rate of \(0.1\) (in nondimensional units). In fact, the magnetic field \(\mathbf{B}_{0}\) was switched on after the initial HD run was fully developed. After the introduction of \(\mathbf{B}_{0}\), the KE first decreases abruptly due to Joule dissipation, and then it increases due to reorganization of the flow. As shown in Fig. 22, for \(N>18\), the total KE is larger than its HD counterpart (\(N=0\)). In this range of \(N\), the flow becomes quasi two-dimensional with larger \(U\) and suppressed turbulent drag. This is counter-intuitive because we expect the KE to decrease with the increase of Joule dissipation. However, reorganization of the flow leads to enhancement of \(U\) and TDR in the flow. In Table 3, we list the rms velocity \(U\) as a function of \(N\). Clearly, \(U\) increases monotonically with \(N\) because \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\) and turbulent drag decrease with the increase of \(N\). Figure 22: From the numerical simulation of QSMHD turbulence by Reddy and Verma [22], the time series of the normalised KE, \(E(t)/E_{0}\), for \(N=5.5,11,18,27,130\), where \(E_{0}\) is the energy at the final state of the \(N=0\) simulation. For each \(N\), after the application of the external magnetic field, the KE drops suddenly, and then it increases and reaches a statistically steady value. The asymptotic KE for all the runs with \(N>18\) is larger than \(E_{0}\). From Reddy and Verma [22]. Reproduced with permission from AIP. In Fig. 23 we exhibit the vorticity isosurfaces for \(N=0,5.5\), and \(18\). As is evident in the figure, the flow becomes quasi-2D and more orderly with the increase of \(N\). The above results again indicate that a large \(U\) does not necessarily imply large \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\) because the nonlinear term depends on \(U\) and the phase relations between the velocity modes. In QSMHD turbulence, two-dimensionalization leads to a reduction in \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\) even with large \(U\). 
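As a concrete illustration of Eqs. (91)-(92), the sketch below shows how the Joule-dissipation spectrum and the flux \(\Pi_{B}(K)\) could be evaluated from a gridded velocity field with \(\mathbf{B}_{0}\parallel\hat{\mathbf{z}}\). The FFT normalization, the integer-wavenumber box, and the random stand-in field in the usage example are illustrative assumptions.

```python
import numpy as np

# Sketch of Eqs. (91)-(92): for each Fourier mode, the magnetic field removes
# kinetic energy at the rate 2 N E_u(k) cos^2(theta); summing over spheres
# |k| <= K gives Pi_B(K).  B0 is taken along z; normalizations are illustrative.

def joule_flux(u, N_int):
    """u: velocity field of shape (3, n, n, n) on a periodic grid."""
    n = u.shape[1]
    uk = np.fft.fftn(u, axes=(1, 2, 3)) / n**3               # Fourier coefficients
    freqs = np.fft.fftfreq(n, d=1.0 / n)                      # integer wavenumbers
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    k2 = np.where(kmag > 0.0, kmag**2, 1.0)                   # avoid division by zero
    cos2 = kz**2 / k2                                         # cos^2(theta)
    E_k = 0.5 * np.sum(np.abs(uk)**2, axis=0)                 # modal kinetic energy
    D_J = 2.0 * N_int * E_k * cos2                            # Eq. (91), magnitude
    K_values = np.arange(1, n // 2)
    Pi_B = np.array([D_J[kmag <= K].sum() for K in K_values])  # Eq. (92)
    return K_values, Pi_B

# Usage with a stand-in random field and interaction parameter N = 18.
rng = np.random.default_rng(1)
u = rng.standard_normal((3, 32, 32, 32))
K, Pi_B = joule_flux(u, N_int=18.0)
```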
Note, however, that for a definitive demonstration of drag reduction in QSMHD turbulence, we still need to perform a comparative study of \(\Pi_{u}\) and \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\) for HD and QSMHD turbulence. Reduced turbulent flux is an important ingredient for drag reduction. Note that such a reduction does not occur in laminar QSMHD; here, the Lorentz force damps the flow further. We illustrate this claim for a channel flow. In an HD channel flow, the maximum velocity at the centre of the channel is (see Fig. 2) [13; 39] \[U_{\mathrm{HD}}=-\frac{d^{2}}{2\nu\rho}\left(\frac{dp}{dx}\right), \tag{94}\] where \(d\) is the half-width of the channel (see Fig. 2). However, in a laminar QSMHD flow, the corresponding velocity is [20; 76; 78] \[U_{\text{QSMHD}}=-\frac{1}{\sigma B_{0}^{2}}\left(\frac{\partial p}{\partial x}\right). \tag{95}\] The ratio of the two velocities is \[\frac{U_{\text{QSMHD}}}{U_{\text{HD}}}=\frac{2\nu\rho}{\sigma B_{0}^{2}d^{2}}=\frac{1}{\text{Ha}^{2}}, \tag{96}\] where Ha is the Hartmann number, which is much larger than unity for a QSMHD flow. Hence, the velocity in laminar QSMHD is much smaller than that in the HD channel. In comparison, \(U\) increases with \(N\) in QSMHD turbulence. Hence, drag reduction is a nonlinear phenomenon, which is visible in a turbulent flow. In the next section, we will cover several more examples of TDR. \begin{table} \begin{tabular}{l l l l l} \hline \hline \(N\) & \(1.7\) & \(18\) & \(27\) & \(220\) \\ \hline \(U\) & \(0.39\) & \(0.51\) & \(0.65\) & \(0.87\) \\ \hline \hline \end{tabular} \end{table} Table 3: In numerical simulations of QSMHD turbulence by Verma and Reddy [77], the rms velocity (\(U\)) for various \(N\)’s. Clearly, \(U\) increases with \(N\). Figure 23: From the numerical simulation of QSMHD turbulence by Reddy and Verma [22], the vorticity isosurfaces for (a) N = 0, (b) N = 5.5, and (c) N = 18. The flow field becomes anisotropic and ordered with the increase of \(N\). We observe a vortex tube for \(N=18\). From Reddy and Verma [22]. Reproduced with permission from AIP. ## 9 TDR in Miscellaneous Systems In this section, we briefly describe TDR in stably stratified turbulence, over smooth surfaces, and in turbulent convection. ### TDR in stably stratified turbulence Many natural and laboratory flows are stably stratified with lighter fluid above heavier fluid and gravity acting downwards. The governing equations for stably stratified flows under the Boussinesq approximation are [13; 29; 30; 79] \[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u} = -\nabla p-\Omega\rho\hat{\mathbf{z}}+\nu\nabla^{2}\mathbf{u}+\mathbf{F}_{\text{LS}}, \tag{97}\] \[\frac{\partial\rho}{\partial t}+(\mathbf{u}\cdot\nabla)\rho = \Omega u_{z}+\kappa\nabla^{2}\rho, \tag{98}\] \[\nabla\cdot\mathbf{u} = 0, \tag{99}\] where \(p\) is the pressure, \(\rho\) is the density fluctuation in velocity units, \(-\Omega\rho\hat{\mathbf{z}}\) is the buoyancy, and \(\Omega\) is the _Brunt-Vaisala frequency_, which is defined as [29; 79] \[\Omega=\sqrt{\frac{g}{\rho_{m}}\left|\frac{d\bar{\rho}}{dz}\right|}. \tag{100}\] Here \(\rho_{m}\) is the mean density of the whole fluid, \(d\bar{\rho}/dz\) is the average density gradient, and \(g\) is the acceleration due to gravity. We convert the density to velocity units using the transformation \(\rho\rightarrow\rho g/(\Omega\rho_{m})\). The ratio \(\nu/\kappa\) is called the _Schmidt number_, which is denoted by Sc. 
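For orientation, a minimal numerical sketch of Eq. (100) and of the density-to-velocity-units conversion \(\rho\rightarrow\rho g/(\Omega\rho_{m})\) is given below; the background profile and all numerical values are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Sketch of Eq. (100): Brunt-Vaisala frequency from a mean density profile,
# plus the conversion of density fluctuations to velocity units used in
# Eqs. (97)-(98).  Profile and numbers below are illustrative assumptions.

g = 9.8                                   # m/s^2
rho_m = 1000.0                            # mean density, kg/m^3
z = np.linspace(0.0, 100.0, 101)          # vertical coordinate, m
rho_bg = rho_m + 0.02 * (100.0 - z)       # lighter fluid on top (stable)

drho_dz = np.gradient(rho_bg, z)
Omega = np.sqrt((g / rho_m) * np.abs(drho_dz).mean())    # Eq. (100)

rho_fluct = 0.05 * np.sin(2 * np.pi * z / 100.0)          # sample fluctuation
rho_vel_units = rho_fluct * g / (Omega * rho_m)            # velocity units
print(Omega)
```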
_Richardson number_, Ri, which is a nondimensional number, is employed to quantify the ratio of buoyancy and the nonlinear term \((\mathbf{u}\cdot\nabla)\mathbf{u}\). For periodic or vanishing boundary condition and in the absence of dissipative terms, the total energy, \[E_{u}+E_{\rho}=\int d\mathbf{r}\frac{1}{2}u^{2}+\int d\mathbf{r}\frac{1}{2}\rho^ {2}, \tag{101}\] is conserved [29; 40; 79; 80]. Here, \(E_{\rho}\) can be interpreted as the _total potential energy_. It has been shown that in the inertial range, the associated energy fluxes obey the following conservation law [40; 81]: \[\Pi_{u}+\Pi_{\rho}=\mathrm{const}=\epsilon_{\mathrm{inj}}, \tag{102}\] where \(\Pi_{\rho}\) is the potential energy flux, and \(\epsilon_{\mathrm{inj}}\) is the KE injection rate. Note that under steady state, \(\Pi_{\rho}\) equals the energy transfer rate from the velocity field to the density field. Using the stable nature of the flow, we can argue that \(\Pi_{\rho}>0\)[29; 30; 40; 81]. Nature of the stably stratified turbulence depends quite critically on the density gradient or Richardson number. For moderate density gradient (\(\mathrm{Ri}\approx 1\)), Bolgiano [82] and Obukhov [83] argued that \(\Pi_{\rho}\) is positive and constant, whereas \(\Pi_{u}(k)\sim k^{-4/5}\). For small Richardson numbers, the scaling is closer to passive scalar turbulence [84], but the flow becomes quasi-2D for large Richardson numbers [29; 30]. Here, we present only one numerical result. Kumar _et al._[85] simulated stably stratified turbulence for \(\mathrm{Sc}=1\) and \(\mathrm{Ri}=0.01\), and observed that in the inertial range, \(\Pi_{\rho}(k)=\mathrm{const}\) (\(>0\)) and \(\Pi_{u}(k)\sim k^{-4/5}\). See Fig. 24 for an illustration. Researchers have observed that \(\Pi_{\rho}>0\) for small and large \(\mathrm{Ri}\)'s as well [29; 80; 84]. Using the fact that \(\Pi_{\rho}(k)>0\), following the arguments described in Section 3, we argue that the turbulent drag will be reduced in stably stratified turbulence. That is, for the same KE injection rate \(\epsilon_{\mathrm{inj}}\), \(\Pi_{u}(k)\) and \(\langle\mathbf{u}\cdot\nabla\mathbf{u}\rangle\) for stably stratified turbulence will be smaller than those for HD turbulence. We remark that the flux-based arguments presented above are consistent with the observations of Narasimha and Sreenivasan [38] who argued that stably stratified turbulence is relaminarized. In the next subsection, we will discuss TDR experienced by smooth bluff bodies. ### TDR over smooth bluff bodies As discussed in Section 2, bluff bodies experience turbulent drag at large Reynolds numbers. Models, experiments, and numerical simulations reveal that the turbulent drag on aerodynamic objects is a combination of the _viscous drag_ and _adverse pressure gradient_[13; 14; 15]. Engineers have devised ingenious techniques to reduce this drag, which are beyond the scope of this article. Equation (17) illustrates that the turbulent drag experienced by a bluff body is a combination of the inertial and viscous forces, and the adverse pressure gradient. However, for bluff bodies like aerofoils and automobiles, the dominant contributions come from the viscous drag and adverse pressure gradient [14; 15]. Note, however, that the bulk flow above the smooth surface is anisotropic, and it contains signatures of the surface properties. Hence, the nonlinear term \(\langle|\mathbf{u}\cdot\nabla\mathbf{u}|\rangle\) and the drag coefficient \(\bar{C}_{d2}\) could yield interesting insights into TDR over bluff bodies. 
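A sketch of this diagnostic, evaluating \(\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle\) and a drag coefficient of the form \(\bar{C}_{d2}=\langle|(\mathbf{u}\cdot\nabla)\mathbf{u}|\rangle/(U^{2}/L)\) from a gridded velocity field, is given below. The finite-difference gradients, the grid spacing, and this normalization of \(\bar{C}_{d2}\) are assumptions for illustration; the precise definition of \(\bar{C}_{d2}\) is the one given earlier in the review.

```python
import numpy as np

# Sketch of the bulk diagnostic used in this review: the rms of the nonlinear
# term (u.grad)u and a drag coefficient C_d2 = <|(u.grad)u|> / (U^2/L),
# evaluated with simple finite differences (np.gradient uses central
# differences in the interior; periodicity at the edges is ignored for brevity).

def nonlinearity_rms(u, dx):
    """u: (3, n, n, n) velocity field; returns the rms of (u.grad)u."""
    adv = np.zeros_like(u)
    for i in range(3):            # component i of (u.grad)u
        for j in range(3):        # sum over j of u_j * d u_i / dx_j
            adv[i] += u[j] * np.gradient(u[i], dx, axis=j)
    return np.sqrt(np.mean(np.sum(adv ** 2, axis=0)))

def C_d2(u, dx, L=1.0):
    U = np.sqrt(np.mean(np.sum(u ** 2, axis=0)))          # rms velocity
    return nonlinearity_rms(u, dx) / (U ** 2 / L)

# Usage with a stand-in random field on a 32^3 grid of size 2*pi.
rng = np.random.default_rng(2)
u = rng.standard_normal((3, 32, 32, 32))
print(C_d2(u, dx=2 * np.pi / 32))
```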
Narasimha and Sreenivasan [38] performed such an analysis for a variety of flows. In the following subsection, we will use the above idea to explain TDR in turbulent thermal convection. Figure 24: Stably stratified simulation with \(\mathrm{Sc}=1\) and \(\mathrm{Ri}=0.01\): plots of KE flux \(\Pi_{u}(k)\), normalized KE flux \(\Pi_{u}(k)k^{4/5}\), and potential energy flux \(\Pi_{\rho}(k)\) (presented as \(\Pi_{\theta}(k)\) in the figure). From Kumar _et al._[85]. Reproduced with permission from APS. ### TDR in turbulent thermal convection Turbulent convection exhibits interesting properties related to TDR. In this subsection, we consider Rayleigh-Benard convection (RBC), which is an idealized setup consisting of a thin fluid layer confined between two thermally conducting plates separated by a distance \(d\). The temperatures of the bottom and top plates are \(T_{b}\) and \(T_{t}\), respectively, with \(T_{b}>T_{t}\). The equations for thermal convection under the Boussinesq approximation are [86] \[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p+\alpha gT\hat{\mathbf{z}}+\nu\nabla^{2}\mathbf{u}, \tag{103}\] \[\frac{\partial T}{\partial t}+(\mathbf{u}\cdot\nabla)T = \kappa\nabla^{2}T, \tag{104}\] \[\nabla\cdot\mathbf{u} = 0, \tag{105}\] where \(T\) is the temperature field; \(\alpha,\kappa\) are respectively the thermal expansion coefficient and thermal diffusivity of the fluid; and \(g\) is the acceleration due to gravity. The two important parameters of turbulent thermal convection are the thermal Prandtl number, \(\mathrm{Pr}=\nu/\kappa\), and the Rayleigh number, \[\mathrm{Ra}=\frac{\alpha gd^{3}(T_{b}-T_{t})}{\nu\kappa}. \tag{106}\] In turbulent thermal convection, the velocity field receives energy from the temperature field via buoyancy. Note that thermal plumes drive thermal convection. This feature is opposite to what happens in polymeric, MHD, and stably stratified turbulence, where the velocity field loses energy to the secondary field. Yet, there are signatures of TDR in turbulent convection, which is due to the smooth thermal plates. Hence, the mechanism of TDR in turbulent thermal convection differs from that in polymeric, MHD, and stably stratified turbulence. In the following, we list some of the results related to TDR in thermal convection. 1. Kraichnan [87] argued that turbulent thermal convection would become fully turbulent or reach the _ultimate regime_ at very large Rayleigh numbers. In this asymptotic state, the effects of walls are expected to vanish, similar to the vanishing of boundary effects in the bulk of HD turbulence [35; 36; 88]. Kraichnan [87] predicted that \(\mathrm{Nu}\propto\mathrm{Ra}^{1/2}\) in the ultimate regime. However, experimental observations and numerical simulations reveal that for \(\mathrm{Ra}\lessapprox 10^{13}\), \(\mathrm{Nu}\sim\mathrm{Ra}^{\beta}\) with \(\beta\) ranging from 0.29 to 0.33 [30; 89; 90]. This reduction in the \(\mathrm{Nu}\) exponent from 1/2 to approximately 0.30 is attributed to the suppression of heat flux due to the smooth thermal plates, boundary layers, and other complex properties [30; 89; 90; 91; 92]. 2. 
Pandey _et al._[31] performed numerical simulations of RBC for \(\mathrm{Pr}=1\) and \(\mathrm{Ra}\) ranging from \(10^{6}\) to \(5\times 10^{8}\), and showed that \[\frac{\mathrm{Nonlinear\ term}}{\mathrm{Viscous\ term}}=\frac{|\mathbf{u}\cdot\nabla\mathbf{u}|}{|\nu\nabla^{2}\mathbf{u}|}\sim\mathrm{Re}\,\mathrm{Ra}^{-0.14}. \tag{107}\] Note that the above ratio is \(\mathrm{Re}\) for HD turbulence. Thus, nonlinearity (\(\langle|\mathbf{u}\cdot\nabla\mathbf{u}|\rangle\)) is suppressed in turbulent thermal convection at large \(\mathrm{Ra}\). 3. Pandey _et al._[31] and Bhattacharya _et al._[93; 32] showed that the viscous dissipation rate (\(\epsilon_{u}\)) and thermal dissipation rate (\(\epsilon_{T}\)) depend on the Rayleigh and Prandtl numbers, and that \(\epsilon_{u}\) and \(\epsilon_{T}\) are suppressed compared to HD turbulence. For moderate \(\mathrm{Pr}\) and large \(\mathrm{Ra}\), \[\epsilon_{u} \sim \frac{U^{3}}{d}\mathrm{Ra}^{-0.2}, \tag{108}\] \[\epsilon_{T} \sim \frac{U(T_{b}-T_{t})^{2}}{d}\mathrm{Ra}^{-0.2}. \tag{109}\] Interestingly, for small Prandtl numbers, \(\epsilon_{u}\sim U^{3}/d\) with a very small Ra-dependent correction [93; 32]. See Fig. 25 for an illustration. Figure 25: Plots exhibiting the Ra and Pr dependence of the viscous and thermal dissipation rates. For moderate Pr, \(\epsilon_{u},\epsilon_{T}\sim\mathrm{Ra}^{-0.20}\). From Bhattacharya et al. [32]. Reproduced with permission from AIP. It is well known that a large-scale circulation (LSC) is present in turbulent convection (see Fig. 26) [94; 95; 96; 97; 98]. As we show below, the suppression of nonlinearity (\(\langle|\mathbf{u}\cdot\nabla\mathbf{u}|\rangle\)) and turbulent drag in RBC is related to this LSC and the smooth walls. Figure 26: An LSC observed in 2D RBC by Sugiyama _et al._[95]. The arrows represent the velocity field, whereas the colors represent the temperature of the fluid, with red as hot and blue as cold fluid. From Sugiyama _et al._[95]. Reproduced with permission from APS. As shown in Fig. 26, the flows near the top and bottom plates have similarities with those near a flat plate. The LSC traverses vertically along the vertical walls, but moves horizontally along the thermal plates. However, for a typical RBC flow, the horizontal extent of the LSC is shorter than that in the flow past a flat plate. Researchers have argued that for large Rayleigh numbers (\(\mathrm{Ra}\gtrapprox 10^{13}\)), the boundary layers exhibit a transition to a log layer, which is a signature of the transition from a viscous to a turbulent boundary layer, as in the flow past a flat plate [12; 13; 14; 30; 89]. For example, Zhu _et al._[99] simulated 2D RBC and showed that above the viscous layer, the normalized velocity field varies logarithmically with the normalized vertical distance. In particular, Zhu _et al._[99] observed that \(u^{+}\propto\log(y^{+})\) for \(y^{+}\gtrapprox 10\) (see Fig. 27). Note, however, that the thermal boundary layers do not show a transition to a log layer [99]. Several other experiments exhibit similar behaviour [100]. Since the boundary layers of turbulent thermal convection have similar properties to those over a flat plate, we can argue that the nonlinearity \(\langle\mathbf{u}\cdot\nabla\mathbf{u}\rangle\) is suppressed in turbulent convection. This is the reason why the dissipation rates and turbulent drag in turbulent convection are smaller than the corresponding quantities in HD turbulence. 
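To put rough numbers on these suppression factors, the short sketch below evaluates the scalings of Eqs. (107)-(109) for a few Rayleigh numbers; all prefactors are set to unity, which is an illustrative assumption.

```python
import numpy as np

# Sketch of the suppression estimates of Eqs. (107)-(109): the nonlinear-to-
# viscous ratio scales as Re * Ra**-0.14, and the dissipation rates are reduced
# below the estimates U^3/d and U*(dT)^2/d by Ra**-0.2 (unit prefactors assumed).

def nonlin_to_viscous(Re, Ra):
    return Re * Ra ** -0.14                      # Eq. (107)

def dissipation_rates(U, dT, d, Ra):
    eps_u = (U ** 3 / d) * Ra ** -0.2            # Eq. (108)
    eps_T = (U * dT ** 2 / d) * Ra ** -0.2       # Eq. (109)
    return eps_u, eps_T

for Ra in (1e6, 1e8):
    print(Ra, nonlin_to_viscous(Re=1e3, Ra=Ra),
          dissipation_rates(U=0.1, dT=1.0, d=1.0, Ra=Ra))
```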
Figure 27: In the numerical simulation of Zhu _et al._[99], the velocity (a) and temperature (b) profiles in wall units for various Ra’s. The dashed lines illustrate the viscous sublayer and the log-layer. A log layer is observed for the velocity field, but not for the temperature field. From Zhu _et al._[99]. Reproduced with permission from APS. Verma _et al._[101] studied the correlation \(\langle u_{z}\theta\rangle\), where \(\theta\) is the temperature fluctuation, and showed that for moderate Pr, \[\langle u_{z}\theta\rangle=\sqrt{\langle u_{z}^{2}\rangle}\sqrt{\langle\theta^{2}\rangle}(\text{PrRa})^{-0.22}. \tag{110}\] Note that \(\sqrt{\langle u_{z}^{2}\rangle}\approx\text{Ra}^{1/2}\) and \(\sqrt{\langle\theta^{2}\rangle}\approx(\Delta T)\). Therefore, the correction \((\text{PrRa})^{-0.22}\) of the above equation leads to \(\langle u_{z}\theta\rangle\sim\text{Ra}^{0.28}\) or \(\text{Nu}\sim\text{Ra}^{0.28}\). Verma _et al._[30; 101] argued that at very large Ra, the corrections would disappear and the flow will approach the ultimate regime with \(\langle u_{z}\theta\rangle\sim\text{Ra}^{1/2}\) or \(\text{Nu}\sim\text{Ra}^{1/2}\). Note, however, that no experiment or numerical simulation has been able to achieve the ultimate regime; thus, the ultimate regime remains a conjecture at present [90; 99; 102; 103], even though several experiments and numerical simulations report a transition to the ultimate regime with the Nu exponent reaching up to 0.38 (but lower than 1/2) [99; 102], while some others argue against the transition to the ultimate regime [90; 103]. It is interesting to note that for rough thermal plates, the heat transport is enhanced because of an increase in turbulence due to the roughness [104]. RBC with periodic boundary conditions exhibits \(\text{Nu}\propto\text{Ra}^{1/2}\) due to the absence of boundary layers [101; 105]. In addition, RBC with small Prandtl numbers too exhibits properties similar to those with periodic boundary conditions. This is because the temperature gradient is linear in the bulk in both these systems [93; 106]. In summary, turbulent thermal convection exhibits suppression of nonlinearity (\(\langle|\mathbf{u}\cdot\nabla\mathbf{u}|\rangle\)) and KE flux compared to HD turbulence. This suppression, which occurs essentially due to the smooth walls, leads to TDR in thermal convection. ## 10 Discussions and conclusions Experiments and numerical simulations show that turbulent flows with dilute polymers exhibit TDR. Many factors (boundary layers, polymer properties, bulk properties of the flow) are responsible for this phenomenon [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. There are many interesting works in this field; however, in this review, we focus on the role of bulk turbulence in TDR. The KE flux, \(\Pi_{u}(k)\), is suppressed in the presence of polymers. This reduction in \(\Pi_{u}(k)\) leads to suppression of the nonlinearity \(\langle\mathbf{u}\cdot\nabla\mathbf{u}\rangle\) and the turbulent drag. MHD turbulence exhibits very similar behaviour to polymeric turbulence [11]. Here too, \(\Pi_{u}(k)\) is suppressed because a major fraction of the injected KE is transferred to the magnetic field. Consequently, \(\langle\mathbf{u}\cdot\nabla\mathbf{u}\rangle\) and the turbulent drag are suppressed in MHD turbulence. For the same KE injection rate at large scales, \(\Pi_{u}(k)\) and \(\langle\mathbf{u}\cdot\nabla\mathbf{u}\rangle\) for MHD turbulence are smaller than the respective quantities of HD turbulence. These properties are borne out in DNS and shell models. 
The KE flux \(\Pi_{u}(k)\) of stably stratified turbulence too is suppressed compared to HD turbulence. Hence, we expect TDR in stably stratified turbulence. Narasimha and Sreenivasan [38] made a similar observation. We need detailed numerical simulations to verify the above statement. An interesting point to note is that for the above three flows, \[\Pi_{u}(k)+\Pi_{B}(k)=\text{const}=\epsilon_{\text{inj}}, \tag{111}\] where \(\Pi_{B}(k)\) represents the energy flux associated with the secondary field \(B\), which could be the polymer field, the magnetic field, or the density field. The constancy of the sum of fluxes in Eq. (111) arises due to the stable nature of the system [29; 40; 81]. The above constancy also represents a redistribution of the kinetic energy injected at large scales to (a) the velocity field at intermediate scales, and (b) the secondary field. Positive \(\Pi_{B}\) implies that \(\Pi_{u}(k)<\epsilon_{\text{inj}}\), which leads to TDR in the flow. Thus, TDR is intimately related to the conservation law of Eq. (111). Another important feature of TDR is that the mean flow or large-scale velocity (\(U\)) is enhanced in the presence of polymers or a magnetic field. This is because the velocity field becomes more ordered under TDR. Suppression of \(\Pi_{u}(k)\) and \(\langle\mathbf{u}\cdot\nabla\mathbf{u}\rangle\) even with strong \(U\) is due to the correlations in the velocity field. An emergence of ordered \(U\) is also observed in dynamos and QSMHD turbulence. Unfortunately, DNS of MHD turbulence with magnetic Prandtl numbers Pm = 1/3, 1, and 10/3 do not show enhancement in \(U\) compared to the respective HD turbulence. Based on the findings for QSMHD turbulence (\(\text{Pm}\approx 0\)) and dynamos, we conjecture that \(U\) of MHD turbulence with very small Pm will be larger than that of the corresponding HD turbulence. TDR is also observed in turbulent thermal convection. This observation is based on the suppression of the viscous and thermal dissipation rates, and that of the nonlinearity \(\langle\mathbf{u}\cdot\nabla\mathbf{u}\rangle\) [31; 32; 37]. Note, however, that unlike in MHD, polymeric, and stably stratified turbulence, \(\Pi_{u}(k)\) for turbulent thermal convection is not suppressed, due to the unstable nature of thermal convection [40; 81]. Therefore, the mechanism for TDR in turbulent thermal convection differs from that for TDR in MHD, polymeric, and stably stratified turbulence. In this review, we argue that TDR in turbulent thermal convection occurs due to the smooth thermal plates. Near the thermal plates, the large-scale circulation (LSC) is akin to the flow past a flat plate. This feature has important consequences for the possible transition to the ultimate regime in thermal convection. The enhancement of \(U\) under TDR is similar to the increase in the mean flow during relaminarization. Narasimha and Sreenivasan [38] showed reversion of flows from random to smooth profiles by relaminarizing agencies, which could be stable stratification, rotation, thermal convection, etc. Figure 28 illustrates interactions between the mean flow and turbulence via a relaminarizing agency. In this figure, channels 1, 2, and 3 represent complex interactions between the mean flow and fluctuations during relaminarization, whereas channel 0 represents these interactions in HD turbulence. The arguments of Verma _et al._[11] have certain similarities with those of Narasimha and Sreenivasan [38]. 
In summary, this review discusses a general framework based on the KE flux to explain TDR in a wide range of phenomena: polymeric, MHD, QSMHD, and stably stratified turbulence; dynamos; and turbulent thermal convection. This kind of study is relatively new, and it is hoped that it will be explored further in the future. We also expect TDR to emerge in other systems, such as drift-wave turbulence, astrophysical MHD, rotating turbulence, etc. Such a study has the added benefit that TDR has practical applications in engineering flows, liquid metals, polymeric flows, etc. **Acknowledgments.** The authors thank Abhishek Kumar and Shashwat Bhattacharya for useful discussions. This project was supported by the Indo-French project 6104-1 from CEFIPRA. S. Chatterjee is supported by an INSPIRE fellowship (No. IF180094) of the Department of Science & Technology, India. ## Declarations Conflict of interest statement. The authors have no actual or potential conflicts of interest to declare in relation to this article.
2304.00999
Bandits for Sponsored Search Auctions under Unknown Valuation Model: Case Study in E-Commerce Advertising
This paper presents a bidding system for sponsored search auctions under an unknown valuation model. This formulation assumes that the bidder's value is unknown, evolving arbitrarily, and observed only upon winning an auction. Unlike previous studies, we do not impose any assumptions on the nature of feedback and consider the problem of bidding in sponsored search auctions in its full generality. Our system is based on a bandit framework that is resilient to the black-box auction structure and delayed and batched feedback. To validate our proposed solution, we conducted a case study at Zalando, a leading fashion e-commerce company. We outline the development process and describe the promising outcomes of our bandits-based approach to increase profitability in sponsored search auctions. We discuss in detail the technical challenges that were overcome during the implementation, shedding light on the mechanisms that led to increased profitability.
Danil Provodin, Jérémie Joudioux, Eduard Duryev
2023-03-31T10:14:43Z
http://arxiv.org/abs/2304.00999v2
# Learning Optimal Bidding Strategy: Case Study in E-Commerce Advertising ###### Abstract Although the bandits framework is a classical and well-suited approach for optimal bidding strategies in sponsored search auctions, industrial attempts are rarely documented. This paper outlines the development process at Zalando, a leading fashion e-commerce company, and describes the promising outcomes of a bandits-based approach to increase profitability in sponsored search auctions. We discuss in detail the technical and theoretical challenges that were overcome during the implementation, as well as the mechanisms that led to increased profitability. ## 1 Introduction Search engine advertising is essential for e-commerce companies, and bidding algorithms have significantly improved their efficiency by enabling more precise ad targeting. In the context of online advertising, a bidding problem typically refers to the challenge faced by advertisers in optimizing bids in sponsored search auctions for maximum profitability or conversions. To achieve this goal, advertisers must carefully balance the cost of bidding with the potential value of gaining clicks or conversions from their ad. As a leading European company for clothing and beauty products (cosmetics), Zalando made efficient marketing investments a key objective. This paper documents a promising attempt at Zalando to optimize bids in sponsored search auctions for maximum profitability using the bandits approach. The bandit framework is a reinforcement learning (RL) approach that provides a formal framework for sequential decision-making in repeated interactions with an environment. Despite the extensive theoretical research on bandits (and RL in general) in the past decade and their widespread usage in content placement, recommendations, and e-commerce, there are only a few documented instances of their industrial application for bid optimization. Our work proves that maximizing profitability through the RL approach can yield substantial value, but it requires overcoming significant challenges that are not encountered in popular RL benchmarks. Of independent interest, we introduce a computationally efficient bid placement system and a substantial data collection mechanism that are robust to most of the challenges encountered in real-life scenarios (described in Section 1.2). Furthermore, the developed infrastructure is versatile and can be readily adapted to any contextual decision-making setting. ### Background and framework Pay-per-click sponsored search auctions emerged in the late 1990s as a quintessential example of auction theory and have become an indispensable part of the business model of web hosts. Typically, the host (the seller) runs a separate auction for each search query, and at each auction, the entity being sold is the right to place an ad in a slot. Advertisers (bidders) repeatedly bid for slots on a search engine and pay the seller only if their ad gains a click. There is a fixed number of slots for advertisements, and the advertisers have different valuations for these slots. The complexity of the underlying auction mechanism of sponsored search engines and the large volume of repeated auctions have given rise to an abundance of automated bidding tools, with bidders constantly changing their bids in response to new and changing information from other bidders [1]. In this paper, we take the bidder's perspective and aim to develop an algorithm that maximizes total profitability. 
Using a sequential decision-making paradigm, we model repeated interactions between the seller (Google Ads) and the bidder (Zalando). Specifically, we take an online learning perspective and formulate a problem using adversarial bandits [2]. At each iteration, the bidder has some (unknown) private value \(v_{t}\) and submits a bid \(b_{t}\), based on the empirical importance-weighted performance of the bids. The auction mechanism then outputs an outcome \(r_{t}\). In the bandit language, the bidder selects action \(b_{t}\) and consequently observes reward \(r_{t}\). ### Challenges of learning in sponsored search auctions In sponsored search auctions, the business goal is to optimize bids for maximum profitability. This causes practitioners to focus on a complex and long-term event, typically a conversion such as a sale. As such, learning in sponsored search auctions poses a unique interplay of challenges from various disciplines, including auctions and online advertising. * **Blackbox auction mechanism.** Most of the theoretical work assumes a structured underlying auction mechanism, with some papers positing a generalized second-price auction lying at the heart of the allocation process, while others describe it as a first-price auction. However, for bidders participating in Google Ads sponsored search auctions, the underlying auction mechanism remains a black box, as Google does not disclose the type of auction mechanism being used [3, 4]. * **Unknown valuation of the goods at sale.** A standard assumption in the literature on auction theory is that participants arriving in the market have a clear assessment of their valuation for the goods at sale (see, e.g., [5, 6, 7, 8]). However, this assumption is severely violated in online advertising, where the high-frequency auction mechanism is subject to changing market conditions and user behavior. This can alter the balance between exploration and exploitation and complicate learning behavior [9]. * **Delayed feedback.** While clicks can be observed shortly after an ad is displayed, it may take hours or even days for the corresponding sale to occur. This delay in feedback can make it challenging to optimize bids or other decisions in online advertising, as the true value of an action may take time to become apparent [10]. * **Batched feedback.** Batched feedback refers to a setting where the rewards for a group of actions are revealed together and observed at the end of the batch. In contrast to traditional online feedback, where the reward for each action is immediately revealed after it is chosen, learning from batched feedback can be more efficient when obtaining feedback is costly or time-consuming, but it comes at the cost of learning performance [11]. * **Reward sparsity.** Sparse reward refers to a situation where the reward signal provided to the learner is infrequent or incomplete. This is especially cumbersome in marketing applications, where conversion rates are typically low and observational data does not capture behavioral patterns. This scarcity of information makes it difficult for advertisers to make informed decisions, and, as a result, the optimization process becomes more challenging. RL algorithms that depend on frequent and informative feedback to learn tend to struggle in such settings. * **Clicks attribution.** After observing a desired outcome (a single conversion event), it is challenging to attribute credit to a specific action in a coherent way: a customer clicking on an advertisement might delay ordering or buy another product if they buy a product at all. 
A user might furthermore have been exposed to different ads at other points before the order conversion happened, and assigning credit to a specific bid that was active at the conversion time can lead to an inability to evaluate its effectiveness accurately. * **Measurement.** Beyond the click attribution to an order, a company might be interested in favoring clicks that lead, for instance, to recurring orders or new customer acquisitions. This purpose is usually incorporated in a complex mechanism, which measures which clicks led to such events. Hence, the click value can be a complex quantity, making the measurement challenging. Furthermore, the data we incorporate into learning are subject to various biases, such as normalization or selection bias. It is crucial to be able to construct an unbiased estimate of a desired metric to achieve reliable results. This is particularly important in repeated interactions, where the goal is to establish a causal relationship between the action and the outcome [12]. This list is incomplete, and there are many more challenges in real life. We focus on the successful resolution of the challenges that are, in our view, the most pressing. ### Contribution Addressing the aforementioned challenges requires a synthesis of RL techniques as well as a bidding system. In this paper, we develop a systematic and practical approach that explicitly optimizes bids for maximum profitability and addresses most of the challenges. Our RL methodology captures the **Blackbox auction mechanism**, **Unknown valuation of the goods at sale**, **Batch update**, and **Measurement** challenges. It does not, however, fully address the **Clicks attribution** challenge or the data collection issues caused by **Batched and Delayed feedback**, which we handle with a bid placement system on the deployment side. Thus, only **Reward sparsity** issues remain unaddressed by our approach, which we elaborate on in Sections 4.2 and 5.2. The evaluation of our solution demonstrates an increase in partial profit during the test duration. The mechanism by which this profit increase occurs is nonetheless complex and varies depending on the product at sale. The main source of profitability improvement comes from reduced costs: the algorithm stopped advertising a certain number of low-profitability products in favour of more efficient bid values for a small selection of high-profitability products. In summary, the paper makes the following contributions to the literature: * We document a successful attempt to learn optimally and efficiently in complex sponsored search auctions. * We introduce an extension of the EXP3 algorithm to the batched and delayed feedback setting, which we call Batch EXP3. * We develop a computationally efficient bid placement system that is, in combination with the methodology, robust against many challenges arising in real life. Moreover, the developed system is versatile and can be readily adapted to any contextual decision-making setting. * To the best of our knowledge, we are the first to present an RL-based bidding system deployed in a live environment. ### Outline Section 2 briefly recalls previous relevant work. Section 3 is devoted to the methodological setup and contains the problem definition (Section 3.1) and its translation into the bandit setting (Section 3.2). The bidding system deployment and live test design are given in Section 4. In Section 5, we evaluate our solution and elaborate on the mechanisms by which the profitability is improved (Section 5.1). 
Consequently, in Section 6, we discuss the limitations of our work and future directions. Finally, we conclude with Section 7. ## 2 Related work Auctions. The majority of auction theory research has focused on designing truthful auction mechanisms to maximize the seller's revenue by optimizing a _reserve price_ (a price below which no transaction occurs) [13, 14, 15]. A more traditional approach takes a game-theoretic view, where the seller has perfect or partial knowledge of bidders' private valuations modeled as probability distributions [16]. However, this approach has a major limitation, as it relies on perfect knowledge of the bidders' value distribution, which is unlikely to be known to the seller in practice [5, 6]. In recent years, the ubiquitous collection of data has presented new opportunities where unknown quantities, such as the bidders' value distributions, may potentially be learned from past observations [17]. This has led to the emergence of the online learning approach in repeated auctions from both the seller's and the bidder's perspectives. Sellers usually seek to set a reserve price to optimize revenues [18, 19, 20, 21, 22, 23]. In fact, this is the main mechanism by which a seller can influence the auction revenue in today's electronic markets [18]. By contrast, bidders try to maximize their reward while simultaneously learning the value of a good sold repeatedly [17, 24]. In particular, this has triggered the emergence of a large number of RL approaches, which we describe below in more detail. RL for bid optimization. Unlike much of the mechanism design literature, RL approaches for bid optimization are not searching for the optimal revenue under a truthful auction mechanism. Rather, they focus on maximizing either the seller's or the bidder's revenue. The existing research on optimizing bidding strategies falls into two main categories: methods based on the bandit formulation and methods based on the full RL formulation. The origin of the full RL methods in application to auctions can be traced back to [25]. In this paper, the authors modeled a budget-constrained bidding problem as a finite Markov Decision Process (MDP). The authors utilize a model-based RL setting for optimal action selection, assuming perfect knowledge of the MDP. This work led to various improvements, such as proposing model-free RL algorithms [26, 27], where the learner cannot obtain perfect information, and considering continuous action spaces using policy gradient methods [28]. However, this line of work has two major limitations: first, it lacks theoretical guarantees, and, second, it does not tackle the real-life challenges described in Section 1.2. For example, most of these papers rely heavily on simulation and replay of datasets and focus on simpler impression-based reward definitions. Additionally, their high complexity makes them difficult to apply in real-life scenarios due to weak debuggability. On the other hand, bandit-based methods have proved to be effective in optimizing bidding strategies for second-price auctions [17, 18, 29], first-price auctions [7, 8, 30], and generalized auction mechanisms [9]. Due to the truthfulness of second-price auctions, methods developed for such a mechanism are based on the optimism-in-the-face-of-uncertainty principle [17, 18, 29], whereas methods for first-price and generalized auction mechanisms leverage the adversarial nature of the problem 1 and use exponential weighting methods [7, 8, 9, 30]. 
Footnote 1: First-price and generalized auctions are known to be untruthful, making the environment from the bidders' perspective adversarial. One paper that we found particularly relevant to our approach is [9]. In this paper, a general auction mechanism is considered, where the product valuation \(v_{t}\) is unknown, evolving in an arbitrary manner, and observed only if the bidder wins the auction. The authors decompose the reward of placing bid \(b_{t}\) at iteration \(t\) as \(r_{t}(b_{t})=rev_{t}(b_{t})x_{t}(b_{t})\), where \(x_{t}(\cdot)\) is the allocation function and \(rev_{t}(\cdot)\) is the revenue function. Consequently, they assume that while \(rev_{t}(\cdot)\) is subject to bandit feedback, \(x_{t}(\cdot)\) is subject to online feedback, i.e., the learner gets to observe \(x_{t}(\cdot)\) for each bid \(b\) and not only for the placed bid \(b_{t}\). Based on this, they develop an exponentially faster algorithm (in the action space) than a generic bandit algorithm (\(O(\sqrt{T|B|})\to O(\sqrt{T\log|B|})\), where \(T\) is the horizon and \(|B|\) is the cardinality of the bid space). Unfortunately, due to the complexity of our setting, we could not make use of their assumption and had to revert to a more classical bandit approach in our solution [2]. ## 3 Learning in sponsored search auctions In this section, we formulate the learning problem in repeated sponsored search auctions, with discussions on various assumptions. Subsequently, we describe our approach from a methodological perspective. ### Problem setup Online advertising. The user journey starts in a search engine with a keyword query. The search engine analyses the query and presents the user with a selection of advertisements for products by different bidders. From the bidder's perspective, this means presenting a large number of products to a large number of customers simultaneously. The corresponding bidding model is developed in steps of increasing complexity to simplify the exposition. Single item, single auction per iteration. We start with a single auction per iteration. In this situation, the bidder bids sequentially on one product. We focus on a single bidder in a large population of bidders during a time horizon of \(T\), where \(T\) is unknown and possibly infinite. At the beginning of each iteration \(t\), \(t=1,2,...,T\), the bidder has a value \(v_{t}\in\mathbb{R}\) per unit of a good and, based on the past observations, submits a bid \(b_{t}\in B\), where \(B\) is a finite set of bids (to be specified later). The outcome of the auction is as follows: if \(x_{t}(b_{t})=1\) (a click occurred), the bidder gets the good and pays \(p_{t}(b_{t})\); if \(x_{t}(b_{t})=0\) (no click occurred), the bidder does not get the good and pays nothing. Consequently, the _instantaneous profitability_ of the bidder is \[r(b_{t},v_{t};p_{t},x_{t})=(v_{t}-p_{t}(b_{t}))x_{t}(b_{t}). \tag{1}\] In general, the allocation function \(x_{t}(\cdot)\) and the payment function \(p_{t}(\cdot)\) depend on the underlying auction mechanism as well as the bid profile of other bidders, and a formulation of the problem from an auction perspective should take these dependencies into account. However, as we mentioned in Section 1.2, Google Ads does not explicitly specify what kind of auction mechanism is being used in reality, nor does it provide the auction contexts to the bidder (numbers of bidders, winning bids, etc.). 
Therefore, we take an online learning perspective and formulate the problem as stated in (1), assuming that the bidder gets to observe \(b_{t},v_{t},p_{t}(b_{t})\), and \(x_{t}(b_{t})\) if auction \(t\) is won, and observes only \(b_{t}\) if auction \(t\) is lost. Note that \(v_{t}\) is unknown to the bidder before auction \(t\) starts and is only revealed if the auction is won. The goal of the bidder, therefore, is to maximize the _total profitability_: \[\max_{b_{t}\in B}\sum_{t=1}^{T}r(b_{t},v_{t};p_{t},x_{t}). \tag{2}\] In real life, additional subtleties arise. Below, we describe these practical nuances in more detail, gradually complicating the setting and eventually reaching the real-life formulation that we address in this paper. Single item, multiple auctions per iteration. First, the sponsored search engine runs multiple auctions per iteration, and only aggregated information is available to the bidder. Formally, every iteration \(t\) is associated with a set of reward contests \(I_{t}\). The bidder picks a bid \(b_{t}\), which is used at all reward contests. 2 At the end of iteration \(t\), the bidder observes _aggregated values_ of gain \(\sum_{\tau\in I_{t}}v_{\tau}x_{\tau}(b_{t})\), payment \(\sum_{\tau\in I_{t}}p_{\tau}(b_{t})x_{\tau}(b_{t})\), and _click-through rate_ \(\frac{1}{|I_{t}|}\sum_{\tau\in I_{t}}x_{\tau}(b_{t})\) in the reward contest \(I_{t}\). Since only aggregated information is revealed to the bidder, this makes learning in the multiple-auctions-per-iteration setting more complex. Footnote 2: Note that changing a bid within the reward contest \(I_{t}\) would not make any sense, as the bidder does not have access to granular information about every single auction. We denote \(v^{\prime}_{t}=\sum_{\tau\in I_{t}}v_{\tau}x_{\tau}(b_{t})\), \(p^{\prime}_{t}(b_{t})=\sum_{\tau\in I_{t}}p_{\tau}(b_{t})x_{\tau}(b_{t})\) and define the _instantaneous aggregated profitability_ as follows: \[r(b_{t},v^{\prime}_{t};p^{\prime}_{t})=\sum_{\tau\in I_{t}}\left(v_{\tau}-p_{\tau}(b_{t})\right)x_{\tau}(b_{t})=v^{\prime}_{t}-p^{\prime}_{t}(b_{t}),\] and the bidder's goal becomes to maximize the _total aggregated profitability_ \[\max_{b_{t}\in B}\sum_{t=1}^{T}r(b_{t},v^{\prime}_{t};p^{\prime}_{t}). \tag{3}\] Since we are solely working with aggregated data, we omit \({}^{\prime}\) and write \(v_{t}\) and \(p_{t}(b_{t})\) instead of \(v^{\prime}_{t}\) and \(p^{\prime}_{t}(b_{t})\). **Remark 1**.: \(|I_{t}|\) _is a random variable whose distribution is unknown to the learner. Moreover, different allocation and payment functions might be used for different auctions, i.e., \(x_{\tau}(\cdot)\) and \(p_{\tau}(\cdot)\) depend on \(\tau\), as opposed to [9]._ Single item, multiple auctions per iteration under delayed batched feedback. Next, due to the complex reward definition, the bidder does not observe the outcome of bid \(b_{t}\) immediately after reward contest \(I_{t}\) ends. Instead, the outcomes are batched in groups and observed after some delay. To define this formally, we borrow notation from [31]. Let \(\mathcal{T}=\{t_{1},...,t_{M}\}\) be a grid of integers such that \(1<t_{1}<...<t_{M}=T\). It defines a partition \(\mathcal{S}=\{S_{1},...,S_{M}\}\) where \(S_{1}=[1:t_{1}]\) and \(S_{k}=(t_{k-1},t_{k}]\) for \(k\in[2:M]\). The set \(S_{k}\) is the \(k\)-th batch. Next, for each \(t\in[T]\), let \(J(t)\in[M]\) be the index of the current batch \(S_{J(t)}\). 
Then, for each \(t\in S_{J(t)}\), the bidder observes the outcome of reward contest \(I_{t}\) only after batch \(S_{J(t)+\Delta}\) ends, for some positive integer \(\Delta\). Although the bidder's goal (3) remains unchanged in the batched feedback setting, we emphasize that the complexity of the problem increases greatly, as the decision at round \(t\) can only depend on observations from \(\Delta\) batches ago. In fact, [11] shows that, in the worst case, the performance of the batch learning deteriorates linearly in the batch size for stochastic linear bandits. Multiple items multiple auctions per iteration under batched feedbackFinally, bidders are rarely presented with a single item, and in real life, they strive to optimize bids for multiple items simultaneously. Let \(\mathcal{C}\) be a finite set of possible contexts. Every iteration \(t\) is associated with a unique set of contexts \(C_{t}\in\mathcal{C}\). Given \(C_{t}\), the bidder selects a vector of bids \(\mathbf{b}_{t}\in B^{|C_{t}|}\), one for each context, and observes vectors of _aggregated values_\(\mathbf{v}_{t}\) and \(p_{t}(\mathbf{b}_{t})\), where \(p_{t}\) is vector functions from \(\mathbb{R}^{|C_{t}|}\) to \(\mathbb{R}^{|C_{t}|}\). The instantaneous profitability, in this case, is defined as the inner product between the vector of profits \(\mathbf{v}_{t}-p_{t}(\mathbf{b}_{t})\) and vector of ones \(\mathbf{1}\): \[r(\mathbf{b}_{t},\mathbf{v}_{t};p_{t})=\left\langle\mathbf{v}_{t}-p_{t}(\mathbf{b}_{t}),\mathbf{1 }\right\rangle. \tag{4}\] and the goal becomes to maximize the _total profitability_ for multiple items \[\max_{\mathbf{b}_{t}\in B^{|C_{t}|}}\sum_{t=1}^{T}r(\mathbf{b}_{t},\mathbf{v}_{t};p_{t}). \tag{5}\] The instantaneous profitability (4) and the goal (5) correspond to the most general setting of learning in sponsored search auction when the bidder aims to optimize bids for multiple items simultaneously under the batched feedback. This is the problem that we are addressing in this paper. ### Assumptions **Assumption 1** (Independence of goods at sale).: _A set of possible contexts is represented by \(n\) unit vectors, \(C_{t}=\{e_{1},...,e_{n}\}\) for every \(t=1,...,T\)._ Such context set definition corresponds to the situation when the bidder is presented with \(n\) items to bid for, and the bidder treats these items independently from each other. In this case, we can consider \(n\) instances of the learner, each solving a single-item problem (3). Alternatively, [7] and [30] propose to use valuation \(v_{t}\) or its estimate as a context. However, we consider a stricter setting, assuming that \(v_{t}\) is unknown for the bidder, nor data is available for its estimation. Nevertheless, we emphasize that Assumption 1 is not critical, as a simple partition of private valuations \(v_{t}\) to groups and considering separate instances for each group reduces to our approach. We will discuss it further in Section 6. **Assumption 2** (Fixed batch size and delay).: _Grid \(\mathcal{T}\) divides the horizon \(T\) in equal partitions, i.e., for all \(S_{k}\) in \(\mathcal{S}\), \(|S_{k}|=q\), for some positive integer \(q\), and delay \(\Delta\) is fixed for all rounds. Moreover, values of \(q\) and \(\Delta\) are known to the bidder in advance._ Although restrictive from the problem formulation perspective, the batch size and delay are controlled by our bidding system and can be wholly justified in practice (see Section 4). 
Moreover, our algorithm, which we introduce in Section 3.3, is adaptable to unknown and random batch sizes and delays. ### Bandit formulation We formalize the goal (3) using the adversarial bandits setting and assume that the bidder (learner) is presented with a discrete set of bids (actions) \(B\). At each auction (round) \(t\), the learner picks an action \(b_{t}\) and the adversary constructs reward \(r_{t}\) by secretly choosing reward components \(v_{\tau}\), payment functions \(p_{\tau}(\cdot)\), and allocation functions \(x_{\tau}(\cdot)\), which is further observed by the learner. 3 Footnote 3: In the subsequent sections, we will use pairs bidder-learner, bid-action, and outcome-reward interchangeably, depending on the context. The advantage of the adversarial setting is that it avoids imposing any assumptions on the reward components (except that \(r_{t}\in[0,1]\)), which is perfectly combined with the black-box nature of Google Ads sponsored search auctions. Moreover, as we mentioned in Section 2, the adversarial setting allows accounting for the untruthfulness of the underlying auction mechanism. **Remark 2**.: _Note, restricting the bid space to a discrete set \(B\) is justified as in Google Ads, the bids are integer numbers of cents and take values between 0.01$ and 2$[32]._ The bandit formulation, therefore, accounts for **Blackbox auction mechanism** and **Unknown valuation of the goods at sale** challenges. ### Algorithm We introduce an adaptation of the EXP3 algorithm to the batch setting, which we call Batch EXP3. Batch EXP3 enables strong theoretical guarantees by building on top of the basic algorithm and extensions to more complex settings with batched and delayed feedback. Specifically, Batch EXP3 maintains \(n\) instances of the learner, each instance for a separate item, and performs \(\Delta\)-steps delayed update at the end of each batch. Importantly, the update mechanism of Batch EXP3 preserves the importance-weighted unbiased estimator of rewards, thus, making our adaptation resilient to **Batch update** and **Measurement** challenges. Algorithm 1 formalizes the description above. Theoretical guaranteesTheoretical guarantees of the Batch EXP3 algorithm follow from a classical analysis of the EXP3 algorithm (see, e.g., [2]). For completeness and theoretical rigor, we formulate a separate statement on Batch EXP3 theoretical performance. In order to do that, we start with the definition of the theoretical success metric called _regret_. 
Regret is the difference between the learner's total reward and reward obtained by any fixed vector of bids in hindsight: \[R(T,n)=\sup_{\mathbf{b}\in B^{n}}\mathbb{E}\left[\sum_{t=1}^{T}\left(r_{t}(\mathbf{b },\mathbf{v}_{t};p_{t})-r_{t}(\mathbf{b}_{t},\mathbf{v}_{t};p_{t})\right)\right].\] **Theorem 3** (Regret of Batch EXP3).: _The regret of the Batch EXP3 algorithm with the learning rate \(\sqrt{\frac{\log|B|}{T|B|}}\) is: \(\mathcal{O}\left(n\sqrt{qT|B|\log|B|}+n\Delta\right)\)._ ``` 0: bid set \(B\), learning rate \(\eta\), number of items \(n\), grid \(\mathcal{T}\), delay \(\Delta\) 1: Set \(X_{0,i}^{j}=0\) for all \(i\in[B]\) and \(j\in[n]\) 2: Set \(t\gets 1\) 3:for\(t=1,2,\dots\)do 4: (Policy update) Calculate the sampling distributions \(\pi_{t}=(\pi_{t}^{j})_{j=1}^{n}\): \[\pi_{t}^{j}(b_{i})=\frac{\exp\left(\eta X_{t-1,i}^{j}\right)}{\sum_{l\in[B]} \exp\left(\eta X_{t-1,l}^{j}\right)}\] 5: (Bid generation) Sample \(\mathbf{b}_{t}\sim\pi_{t}(\cdot)\) 6: \(X_{t,i}^{j}\gets X_{t-1,i}^{j}\) 7:if\(t\in\mathcal{T}\) and \(J(t)>\Delta\)then 8:for\(s\in S_{J(t)-\Delta}\)do 9: Observe \(\mathbf{v}_{s},p_{s}(\mathbf{b}_{s})\) 10: Calculate \(r(\mathbf{b}_{s})=\langle\mathbf{v}_{s}-p_{s}(\mathbf{b}_{s}),\mathbf{1}\rangle\) 11: Calculate \(X_{t,i}^{j}\): \[X_{t,i}^{j}=X_{t,i}^{j}+1-\frac{\mathbb{I}\{b_{s}^{j}=b_{i}\}(1-r^{j}(\mathbf{b}_{ s}))}{\pi_{t}^{j}(b_{i})}\] (6) 12:\(t\gets t+1\) ``` **Algorithm 1** Batch EXP3 algorithm for learning in sponsored search auctions Proof.: We start analyzing regret for \(n=1\), \(R(T,1)\): \[R(T,1) =\sup_{b\in B}\mathbb{E}\left[\sum_{t=1}^{T}\left(r(b,v_{t};p_{t}) -r(b_{t},v_{t};p_{t})\right)\right]\] \[\overset{(a)}{\leq}\sup_{b\in B}\mathbb{E}\left[\sum_{k=1}^{M} \sum_{t\in S_{k}}\left(r(b,v_{t};p_{t})-r(b_{t},v_{t};p_{t})\right)\right]+\Delta\] \[\overset{(b)}{\leq}q\sup_{b\in B}\mathbb{E}\left[\sum_{k=1}^{M} \left(r(b,v_{t};p_{t})-r(b_{t},v_{t};p_{t})\right)\right]+\Delta\] \[\overset{(c)}{\leq}2q\sqrt{M|B|\log|B|}+\Delta=2\sqrt{qT|B|\log |B|}+\Delta,\] where \((a)\) is due to batch execution of Batch EXP3 and the fact that the first \(\Delta\) steps no update is happening, \((b)\) is because of Assumption 2, and \((c)\) follows from standard analysis of EXP3. Then, summing \(R(T,1)\) over \(n\) gives \(\mathcal{O}\left(n\sqrt{qT|B|\log|B|}+n\Delta\right)\). ## 4 Deployment While our RL methodology accounts for **Blackbox auction mechanism**, **Unknown valuation of the goods at sale**, **Batch update**, and **Measurement** challenges, it takes system support to fully address **Clicks attribution** and the data collection issues caused by **Batched and Delayed feedback**. For example, Algorithm 1 simply assumes \(\mathbf{v},p(\mathbf{b})\) data is correctly provided as input, but this is nontrivial in practice. To provide a systematic solution that explicitly optimizes bids for maximum profitability, we fill these gaps on the deployment side. Subsequently, we describe the live test design for our bidding system. ### Bidding system architecture Clicks attributionIt is rarely the case when a single click leads to a desired outcome. Usually, a customer journey starts with a single click, but it is a chain of clicks that results in a conversion event. Identifying click chains and attributing credit to a single click in each click chain is a complex independent task that requires great engineering efforts. At Zalando, the _performance measurement pipeline_ takes up these challenges and measures the performance of online marketing at scale. 
In short, the pipeline sources all marketing clicks, sales, as well as more complex conversion events such as customer acquisitions, and creates the customers' journeys across all their devices, from first ad interaction to conversion. Then, an attribution module comes into play and determines how much incremental value was created by every ad click by iterating many different attribution models. We refer the interested reader to [33] for more details. Bidding system architectureOur solution is designed to match the modularity of the RL methodology in an efficient way, including the _bid generation_ and _policy update_ components and the _performance measurement pipeline_. The bidding system contains two streams: the first stream is responsible for bid placement in Google Ads; the second stream unifies the _performance measurement pipeline_ and the _policy update_ component. Ideally, both streams are to be synchronized and run as frequently as possible, one right after the other. However, the _performance measurement pipeline_ is subject to daily execution due to its compoundness and complexity, making any attempt to increase the frequency of the second stream meaningless. Keeping the same frequency for the first stream would admit placing one bid a day, which slows down the learning process considerably. To account for this limitation, we desynchronize two streams and execute the first stream with a higher frequency, updating bids every 3 hours. Such improvement allowed us to speed up the learning process substantially. Therefore, the first stream runs every 3 hours and samples bids from the latest policy (3-hour time period corresponds to round \(t\)), while the second stream is subject to daily execution and performs an update based on batched feedback (scheduled by grid \(\mathcal{T}\)). 4 The architecture is illustrated in Figure 1. Footnote 4: The first stream is scheduled at midnight, 3am, 6am, etc. The second stream runs at midnight. ### Live test design and unfolding Test scopeThe test took place from December 16th, 2022 (date of deployment) to January 29th, 2023, in a large European country. A list of 180 (\(n=180\)) clothing products was selected to be steered by the bandits algorithm. Because of the data sparsity, we chose to focus for this test on products for which the traffic was deemed high enough. The selection Figure 1: Bidding system architecture criterion was that they should meet a threshold of ten average daily clicks over a period of six months. The products selected for the test were randomly sampled amongst those satisfying this traffic threshold. Profit metricIn Section 3, we defined the reward as the aggregated difference between the valuation \(v_{t}\) and costs \(p_{t}\). While costs \(p_{t}\) causes no problems and correspond to the expenditure during the round \(t\) (which is (almost) immediately available to the bidder), the valuation \(v_{t}\) is abstract and requires special attention. We assumed that the valuation \(v_{t}\) is unknown to the bidder before auction \(t\) starts. In fact, it is difficult to evaluate \(v_{t}\) even when auction \(t\) is over. Typically, the ground truth of valuation \(v_{t}\) is assumed to be the gain auction \(t\) has generated over \(d\) days, where \(d\) might correspond to several months due to return and cancellation policies. Therefore, it is impractical to learn a bidding system when \(d\) is too big. 
Although a vast literature on bandits with delayed feedback provides solutions with delay-corrected estimators, these solutions are not infallible and cannot completely eliminate the delay. There are therefore two practical ways of dealing with delays: shortening the feedback loop by decreasing \(d\), or developing a delay-free method by substituting \(v_{t}\) with some approximation. Due to the lack of historical data, we have taken the more pragmatic approach and shortened the delay to 2 days (we will discuss this further in Section 6). As a result, we focus on maximizing the _2 days partial profit_, i.e., the profit attributed within a 2-day conversion window after the bid placement. Profit metric normalization. We apply two normalization steps to the _2 days partial profit_. The first normalization step is a naive yet pragmatic way of incorporating side information into the modeling, and it eliminates the difference between time periods. Since users' activity differs between nighttime and daytime, rewards observed from 3am - 6am are incomparable to rewards observed from 3pm - 6pm. We bring rewards to a common scale by the normalization: \[r(b_{t},v_{t};p_{t})\coloneqq\alpha_{t-qS_{J(t)}}r(b_{t},v_{t};p_{t}),\] where \(t-qS_{J(t)}\) is the time period number within the batch \(S_{J(t)}\), and \(\alpha_{l}\) is the ratio of the average traffic during time period \(l\) to the average traffic of the most active time period, \(l=1,\ldots,q\). Next, the bandit formulation requires rewards to lie in the \([0,1]\) range, which rarely holds in a real-life application. To account for that, we apply min-max normalization to rewards, \[r(b_{t},v_{t};p_{t})\coloneqq\frac{r(b_{t},v_{t};p_{t})-r_{min}}{r_{max}-r_{min}},\] where \(r_{min}\) and \(r_{max}\) are the 5th and 95th quantiles of the historical _2 days partial profit_. The coefficients \(\alpha_{l}\) were calculated at the market level and remain constant for all products \(i=1,\ldots,n\), whereas the coefficients \(r_{min}\) and \(r_{max}\) are product-dependent and were calculated individually for each product. Bid space. The bandit formulation described in Section 3.2 supports a discrete and finite set of bids. An analysis of historical data demonstrated that bids higher than 40 cents are unprofitable, which, by Remark 2, narrows the potential bid space to values ranging from 1 cent to 40 cents. To trade off the total number of bids against coverage of the bid space, we decided to include more options for lower bids (with a step of 2 cents) and fewer options for higher bids (with steps of 3-5 cents). The final bid space \(B\) consists of 14 possible bids: \[B=\left\{1,\,3,\,5,\,7,\,9,\,11,\,13,\,15,\,17,\,20,\,25,\,30,\,35,\,40\right\}. \tag{7}\] Test reset. We started the experiment with a generic value of the learning rate, \(\eta=1\). After we rolled out the solution, we spotted unstable learning behavior in the bidding system. On December 30, we decided to reset the test with the learning rate \(\eta=0.1\), which corresponds to less aggressive exploitation by the learner. While this adjustment did not resolve the issue completely, it mitigated the level of instability and facilitated the learning process. We detail this phenomenon in the next section.

### Detailed analysis

First, we concentrate on products with high traffic by removing 60 products (33%) with a low number of clicks (we revisit low-traffic products later).
Further, we split the high-traffic products into a profitable sample and an unprofitable sample. The profitable sample is a group of 23 products (13%) with the highest _gain-to-cost ratio_. The unprofitable sample consists of the remaining 97 high-traffic products (54%). Table 1 presents statistics for each group. Figure 3 demonstrates this split and outlines the mechanisms by which the algorithm increases profitability. Specifically, it shows that the costs of the profitable sample decrease faster than the costs of the unprofitable sample. Simultaneously, the gains of the profitable sample decrease more slowly than the gains of the unprofitable sample. In other words, the algorithm drives the increase in profitability by spending the budget more efficiently for the profitable sample, while for the unprofitable sample it is simply decreasing costs. To support this further, we provide individual examples from both the profitable and the unprofitable samples.

High-traffic products: profitable sample

Figure 4: **Profitable sample behavior. Left:** The profit heatmap shows the average profit for each (bid value, day) pair for 3 products (4a, 4b, 4c) from the profitable sample. The \(y\)-axis represents bid numbers (a lower bid number corresponds to a lower bid value), and the \(x\)-axis represents day numbers (starting from December 30). The color bar is normalized between \([1,-1]\) to hide the actual profitability. **Right:** The bid placement heatmap shows the number of times bids were placed for each (bid value, day) pair for 3 products (4a, 4b, 4c) from the profitable sample. The \(y\)-axis represents bid numbers (a lower bid number corresponds to a lower bid value), and the \(x\)-axis represents day numbers (starting from December 30). Since there are 8 3-hour periods in a day, the maximum value in each cell is 8.

Figure 5: **Unprofitable sample behavior.** The profit heatmap (left) and bid placement heatmap (right) across the bid numbers (y-axis) and day numbers (x-axis) starting from December 30, aggregated over 97 products from the unprofitable sample. The left color bar is normalized between \([1,-1]\) to hide the actual profitability. A lower bid number corresponds to a lower bid value.

The answer to this behavior lies in the update rule (6). According to (6), the incremental score gain at round \(t\), \[X_{t,i}-X_{t-1,i}=1-\frac{\mathbb{I}\{b_{t}=b_{i}\}(1-r(b_{t}))}{\pi_{t}(b_{i})}, \tag{8}\] is equal to \(1\) for bids that were not placed, to encourage exploration, and is equal to \(1-\frac{(1-r(b_{t}))}{\pi_{t}(b_{t})}\) for the placed bid, to punish bids for poor performance. The latter is a loss-driven mechanism that may take values in \((-\infty,1]\) depending on the values of \(r(b_{t})\) and \(\pi_{t}(b_{i})\). Two intermediate conclusions can be made: (1) due to the loss-driven approach, bids that were not placed in a round never get punished more than the bid that was placed in that round, no matter how well the latter performed, and (2) due to importance-weighted sampling, bids with low probabilities experience more severe punishment than bids with high probabilities for providing the same reward value. Both observations are theoretically reasonable. However, in combination with reward sparsity, they led to a snowballing effect of becoming unreasonably confident in bids that did not happen to be placed at the very beginning.
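To make the asymmetry in (8) concrete, the following minimal Python sketch replays a few rounds of the per-item score update with a softmax policy; it strips out the batching and delay of Algorithm 1, and the bid grid, learning rate, and zero rewards are purely illustrative rather than taken from the live test.

```python
import math
import random

# Minimal illustration (not the production code) of the per-item EXP3 score
# update in Eq. (8): unplaced bids always gain +1, while the placed bid gains
# 1 - (1 - r)/pi, which can be strongly negative when its probability pi is small.

def policy(scores, eta):
    """Softmax sampling distribution pi_t over bids (Algorithm 1, step 4)."""
    m = max(eta * x for x in scores)                 # subtract max for numerical stability
    weights = [math.exp(eta * x - m) for x in scores]
    z = sum(weights)
    return [w / z for w in weights]

def update(scores, placed, reward, pi):
    """Apply the incremental gain of Eq. (8) to every bid's score."""
    return [
        x + 1 - ((1 - reward) / pi[i] if i == placed else 0.0)
        for i, x in enumerate(scores)
    ]

if __name__ == "__main__":
    random.seed(0)
    bids = [1, 3, 5, 7, 9]            # made-up bid grid (cents)
    scores = [0.0] * len(bids)
    eta = 0.1
    for _ in range(5):
        pi = policy(scores, eta)
        placed = random.choices(range(len(bids)), weights=pi)[0]
        reward = 0.0                  # sparse-reward regime: no conversions observed
        scores = update(scores, placed, reward, pi)
    print(scores)                     # the bids that happened to be placed lag behind
```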
Indeed, because of observation 1, the higher bid values (which the model did not place during the first 5-6 days) appeared more appealing, since the other bids had already experienced some punishment. Next, because of observation 2, the higher bids kept flourishing, possibly even while producing worse outcomes, because the other bids now had lower probabilities and were therefore punished more severely. Although the system will, in theory, eventually recover from such behavior, this highlights the risk the algorithm carries in sparse-reward environments, owing to the sensitivity of its learning behavior. At the beginning of the live test, the learning rate parameter was misspecified with respect to the reward sparsity. This led to a high sensitivity of the learning system, such that not placing a bid for 1-2 days resulted in a massive degeneration of the policies. That was the reason for the test reset and the learning rate decrease.

Figure 6: **Low-traffic behavior.** The profit heatmap (left) and bid placement heatmap (right) across the bid numbers (y-axis) and day numbers (x-axis) starting from December 30, aggregated over 60 low-traffic products. The left color bar is normalized between \([1,-1]\) to hide the actual profitability. A lower bid number corresponds to a lower bid value.

Figure 7: **Counterintuitive behavior.** The profit heatmap (left) and bid placement heatmap (right) across the bid numbers (y-axis) and day numbers (x-axis) starting from December 30 for 1 product from the profitable sample. The left color bar is normalized between \([1,-1]\) to hide the actual profitability. A lower bid number corresponds to a lower bid value.

## 6 Discussion and further work

One challenge that our solution does not address directly relates to the sparse reward signal. We have seen that the same reward definition led to conceptually different behaviors for various products and groups of products, making the system highly susceptible to imbalances between exploration and exploitation. Moreover, the marketing nature of the problem and the blackbox auction mechanism further complicate it, due to the low conversion rate and the complex reward structure. A potential solution to the reward sparsity issue is to compensate for it by exploring more aggressively. Different methods exist to achieve this, such as decreasing the learning rate, using the EXP3-IX algorithm [34], or mixing policies with the uniform distribution. However, we believe that these remedies only mitigate the snowballing effect and do not completely solve it. As we discussed in Sections 5.1 and 5.2, the sparse reward signal can cause the system to exhibit trivial behavior or destabilize it altogether. We believe the essence of the problem lies in the loss-driven update rule (6). More precisely, in environments where positive outcomes are rare, the system should distinguish between punishment for losses and encouragement due to lack of exploration; in the current rule, however, both outcomes are simply taken for granted. While this line of thought has something in common with reward shaping techniques, which usually require additional hand-crafted reward functions, it is highly unclear how to extend these techniques to the auction domain. Somewhat surprisingly, this issue remains unaddressed in the theoretical literature, nor are practical approaches known. Alternatively, one can overcome the reward sparsity issue by modeling the reward signal.
That is, instead of waiting for the true outcome to appear, one could substitute it with an approximation modeled by an independent supervised learning module, which, in its term, is trained based on the true outcome with delay \(d\). If an accurate enough approximation can be developed, this method can mitigate two limitations of our current approach, which include a large number of products \(n\) and a long delay \(d\). While the larger values of \(n\) can be modeled as context using approximate valuation \(v_{t}\) (as suggested in [7, 30]), the delay \(d\) should not be as critical for supervised learning as it is for reinforcement learning. Studying how to integrate these two modules is an interesting direction for future work. ## 7 Conclusion In this paper, we introduced a systematic solution to learning optimal bidding strategies in a complex auction problem. Our solution relies on the adversarial bandit framework: to optimize the exploration-exploitation trade-off by maintaining empirical importance-weighted rewards of the actions. Alternatively to classical bandit algorithms that rely on immediate online feedback, the developed Batch EXP3 is robust to batched and delayed feedback. The theoretical appeal of our solution can be motivated by the relationship of Batch EXP3 to the classical EXP3 algorithm. We have outlined the theoretical guarantees of the underlying algorithm in Theorem 3. On the deployment side, we introduced a bidding architecture that complements the RL techniques. Although the technical infrastructure was heavy and incorporated the non-trivial implementation of the clicks attribution pipeline, the practical appeal of our solution is motivated by its computational advantages: the resulting system is computationally efficient, reliable, and debuggable. Furthermore, it can be readily applied to many more contextual decision problems. Our solution has demonstrated its effectiveness in increasing the partial profit at the group level. Additionally, our system optimizes bids for maximum profitability at the product level, particularly for a group of high-traffic products. However, we acknowledge that finer tuning is needed for low-traffic products. Overall, the live test has yielded promising results indicating that many real-life challenges can be addressed pragmatically within a reinforcement learning system. AcknowledgmentsWe wish to thank Joshua Hendinata and Aleksandr Borisov for their engineering support. Furthermore, we would like to thank Amin Jamalzadeh, head of the Traffic Platform Applied Science and Analytics at Zalando, for guidance, support with administrative processes related to the project, and the review of the final draft. Danil Provodin would like to thank Maurits Kaptein and Mykola Pechenizkiy, whose thoughts influenced his ideas.
2309.11868
A Radon-Nikodym theorem for monotone measures
A version of Radon-Nikodym theorem for the Choquet integral w.r.t. monotone measures is proved. Without any presumptive condition, we obtain a necessary and sufficient condition for the ordered pair $(\mu, \nu)$ of finite monotone measures to have the so-called Radon-Nikodym property related to a nonnegative measurable function $f$. If $\nu$ is null-continuous and weakly null-additive, then $f$ is uniquely determined almost everywhere by $\nu$ and thus is called the Radon-Nikodym derivative of $\mu$ w.r.t. $\nu$. For $\sigma$-finite monotone measures, a Radon-Nikodym type theorem is also obtained under the assumption that the monotone measures are lower continuous and null-additive.
Yao Ouyang, Jun Li
2023-09-21T08:11:05Z
http://arxiv.org/abs/2309.11868v1
# A Radon-Nikodym theorem for monotone measures ###### Abstract A version of Radon-Nikodym theorem for the Choquet integral w.r.t. monotone measures is proved. Without any presumptive condition, we obtain a necessary and sufficient condition for the ordered pair \((\mu,\nu)\) of finite monotone measures to have the so-called Radon-Nikodym property related to a nonnegative measurable function \(f\). If \(\nu\) is null-continuous and weakly null-additive, then \(f\) is uniquely determined almost everywhere by \(\nu\) and thus is called the Radon-Nikodym derivative of \(\mu\) w.r.t. \(\nu\). For \(\sigma\)-finite monotone measures, a Radon-Nikodym type theorem is also obtained under the assumption that the monotone measures are lower continuous and null-additive. _Keywords:_ Radon-Nikodym theorem; Monotone measure; Choquet integral; lower continuous; null-additive Introduction Suppose that \(\nu\) is a \(\sigma\)-additive measure and \(f\) is a nonnegative integrable function. The measure \(\mu\) defined by \[\mu(A)=\int_{A}fd\nu\] for all measurable sets \(A\) is said to be the indefinite integral of \(f\) w.r.t. \(\nu\). In this case, \(\mu\) is absolutely continuous w.r.t. \(\nu\). Under what conditions a measure can be expressed as the indefinite integral w.r.t. another measure is quite interesting. This pertains to the scope of the Radon-Nikodym theorem. Radon-Nikodym theorem, one of the most important theorems in measure theory, states that \(\mu\) is the indefinite integral w.r.t. \(\nu\) if and only if \(\mu\) is absolutely continuous w.r.t. \(\nu\), see Halmos [9] for example. We note that the Radon-Nikodym theorem has various proofs and all these proofs are highly dependent on the \(\sigma\)-additivity of measures. When one of the measures is only finitely additive, the Radon-Nikodym theorem does not hold in general. Since in this case, the Hahn decomposition does not hold in general and the implications "\(\nu(A)=0\Rightarrow\mu(A)=0\)" and "\(\nu(A_{n})\to 0\Rightarrow\mu(A_{n})\to 0\)" are not equivalent. Various conditions [2, 3, 14] have been derived in the literature for the validity of finitely additive measures-based Radon-Nikodym theorem. For example, in [2] the Radon-Nikodym theorem was proved under absolute continuity and a property called Hahn separation (a variant of Hahn decomposition). Graf [7] proved a Radon-Nikodym theorem for the Choquet integral w.r.t. capacities (lower continuous subadditive monotone measure), while Nguyen et al. [15, 16] investigated a Radon-Nikodym theorem for \(\sigma\)-subadditive monotone measures. Greco [8] (see also [4]) obtained necessary and sufficient conditions of this theorem for null-additive monotone measures. Roughly speaking, these conditions include a variant of Hahn decomposition and some other conditions. We also note that Rebille [20] discussed the superior Radon-Nikodym derivative of a set function w.r.t. a \(\sigma\)-additive measure. In this paper, a new version of Radon-Nikodym theorem for the Choquet integral is proved. It should be stressed that our result generalizes the corresponding ones in [7, 8, 15]. Concretely, we introduce the concept of decomposition property of monotone measures in Section 3. This property concerns an ordered pair \((\mu,\nu)\) of monotone measures and a decreasing family \(\{A_{\alpha}\}_{\alpha\in\mathbb{Q}_{+}}\) of measurable sets and is a natural generalization of the Hahn decomposition for \(\sigma\)-additive measures. 
The decomposition property together with \(\lim\limits_{\alpha\to\infty}\mu(A_{\alpha})\vee\nu(A_{\alpha})=0\) is demonstrated to be the necessary and sufficient conditions for \((\mu,\nu)\) to have Radon-Nikodym property based on the Choquet integral (i.e., there is a nonnegative measurable function \(f\) such that \(\mu(E)=\int_{E}fd\nu\) for each measurable set \(E\)), where \(\mu,\nu\) are finite monotone measures. This result is obtained without any presumptive condition other than the monotonicity of set functions, thus it is a generalization of the results of Greco [8] and Nguyen et al. [16]. When \(\nu\) is further weakly null-additive (which is weaker than subadditive) and null-continuous (which is implied by lower continuous), then the function \(f\) is unique a.e.\([\nu]\) and is called the Radon-Nikodym derivative of \(\mu\) w.r.t. \(\nu\). Thus, Graf's result is also generalized. The Radon-Nikodym theorem for \(\sigma\)-finite monotone measures are considered in Section 4. The existence and uniqueness of the Radon-Nikodym derivative is obtained when \(\mu,\nu\) are lower continuous and \(\nu\) is further null-additive. ## 2 Preliminaries Let \((U,\mathcal{U})\) denote a measurable space, that is, a nonempty set \(U\) equipped with a \(\sigma\)-algebra \(\mathcal{U}\) of subsets of \(U\). A subset \(A\) of \(U\) is called measurable (w.r.t. \(\mathcal{U}\)) if \(A\in\mathcal{U}\). A nonnegative extended real-valued function \(f\colon U\to\overline{\mathbb{R}}_{+}\) is called measurable if for each \(\alpha\in[0,+\infty]\), \(\{f\geq\alpha\}\in\mathcal{U}\) (here \(\{f\geq\alpha\}\) is the abbreviation for \(\{t\in U\,|\,f(t)\geq\alpha\}\)). **Definition 2.1**.: A set function \(\mu:\mathcal{U}\to\overline{\mathbb{R}}_{+}\) is called a _monotone measure_ if it satisfies the following two conditions: (i) \(\mu(\emptyset)=0\); (vanishing at \(\emptyset\)) (ii) \(\mu(A)\leq\mu(B)\) whenever \(A\subset B\) and \(A,B\in\mathcal{U}\). (monotonicity) The triple \((U,\mathcal{U},\mu)\) is called a _monotone measure space_. A monotone measure \(\mu\) on \((U,\mathcal{U})\) is said to be (i) _finite_ if \(\mu(U)<\infty\); (ii) \(\sigma\)_-finite_ if there is \(\{U_{n}\}_{n=1}^{\infty}\subset\mathcal{U}\) with \(U_{n}\nearrow U\) (_i.e._, \(U_{1}\subset U_{2}\subset\cdots\subset U_{n}\subset\cdots\) and \(\bigcup\limits_{n=1}^{\infty}U_{n}=U\)) such that \(\mu(U_{n})<\infty\) for each \(n\). Let \(f\) be a nonnegative measurable function, \(\nu\) be a monotone measure and \(A\) be a measurable set. The Choquet integral of \(f\) w.r.t. \(\nu\) is defined as follows, see [5, 6, 19]. **Definition 2.2**.: The Choquet integral of \(f\) w.r.t. \(\nu\) on \(A\) is given by \[\int_{A}fd\nu=\int_{0}^{\infty}\nu(\{f\geq\alpha\}\cap A)d\alpha,\] where the integral on the right side is the improper Riemann integral. When \(A=U\), we write \(\int fd\nu\) instead of \(\int_{U}fd\nu\). If \(\int fd\nu<\infty\), then \(f\) is called Choquet integrable w.r.t. \(\nu\) on \(U\). When \(\nu\) is a \(\sigma\)-additive measure, the Choquet integral coincides with the Lebesgue integral. Throughout this paper, unless otherwise stated, all the considered integrals are assumed to be the Choquet integrals. The following are some basic properties of the Choquet integrals ([6, 19, 25]): **Proposition 2.3**.: _Let \((U,\mathcal{U},\nu)\) be a monotone measure space and \(f,g\) be nonnegative measurable functions. 
Then_ (i)_\(\int_{A}fd\nu=0\) whenever \(\nu(A)=0\);_ (ii)_\(f\leq g\) implies \(\int fd\nu\leq\int gd\nu\); (monotonicity)_ (iii)_\(\int cfd\nu=c\int fd\nu\) for any constant \(c\geq 0\); (homogeneity)_ (iv)_\(\int_{A}\chi_{A}d\nu=\nu(A),\forall\,A\in\mathcal{U}\), where \(\chi_{A}\) denotes the characteristic function of \(A\);_ (v)_\(\int_{A}fd\nu=\int f\chi_{A}d\nu\);_ (vi)_\(\int_{A}fd\nu=\lim\limits_{n\to\infty}\int_{A}(f\wedge n)d\nu\)._ **Proof.** We only give the proof of (vi). For any \(A\in\mathcal{U}\), \[\int_{A}fd\nu = \int_{0}^{\infty}\nu(A\cap\{f\geq\alpha\})d\alpha=\lim_{n\to \infty}\int_{0}^{n}\nu(A\cap\{f\geq\alpha\})d\alpha\] \[= \lim_{n\to\infty}\int_{0}^{n}\nu(A\cap\{f\wedge n\geq\alpha\})d\alpha\] \[= \lim_{n\to\infty}\left(\int_{0}^{n}\nu(A\cap\{f\wedge n\geq\alpha \})d\alpha+\int_{n}^{\infty}\nu(A\cap\{f\wedge n\geq\alpha\})d\alpha\right)\] \[= \lim_{n\to\infty}\int_{0}^{\infty}\nu(A\cap\{f\wedge n\geq\alpha \})d\alpha=\lim_{n\to\infty}\int_{A}(f\wedge n)d\nu.\] \(\Box\) Two functions \(f,g\) on \(U\) are said to be _comonotone_ if for any \(t_{1},t_{2}\in U\), \((f(t_{1})-f(t_{2}))(g(t_{1})-g(t_{2}))\geq 0\). The following proposition is known as _comonotonic additivity_ of Choquet integral, which is a distinguishing feature of the Choquet integral, see [6, 21]. **Proposition 2.4**.: _Let \((U,\mathcal{U},\nu)\) be a monotone measure space and \(f,g\) be nonnegative measurable functions. If \(f\) and \(g\) are comonotone, then_ \[\int(f+g)d\nu=\int fd\nu+\int gd\nu.\] Note that two increasing (decreasing, resp.) functions are comonotone, and a constant function \(c\) is comonotone with arbitrary functions. Moreover, for any function \(f\) and any constant \(c\), \((f-c)\lor 0\) and \(f\wedge c\) are comonotone, where \((f\lor c)(t)=\max\{f(t),c\}\) and \((f\wedge c)(t)=\min\{f(t),c\}\). ## 3 Radon-Nikodym theorem for finite monotone measures In this section we present a new version of Radon-Nikodym theorem for finite monotone measures. To do this, we introduce the following concept of _decomposition property_ relating to an ordered pair of monotone measures. **Definition 3.1**.: Let \(\mu,\nu\) be two monotone measures on \((U,\mathcal{U})\). The ordered pair \((\mu,\nu)\) is said to have _decomposition property_ if there is a decreasing family \(\{A_{\alpha}\}_{\alpha\in\mathbb{Q}^{+}}\) of measurable sets with \(A_{0}=U\) such that \[\alpha\Big{(}\nu(A\cap A_{\alpha})-\nu(A\cap A_{\beta})\Big{)} \leq \mu(A\cap A_{\alpha})-\mu(A\cap A_{\beta}) \tag{1}\] \[\leq \beta\Big{(}\nu(A\cap A_{\alpha})-\nu(A\cap A_{\beta})\Big{)} \tag{2}\] holds for any \(A\in\mathcal{U}\) with finite measures for \(\mu\) and \(\nu\), and any \(\alpha,\beta\in\mathbb{Q}^{+}\) with \(\alpha<\beta\), where \(\mathbb{Q}^{+}\) is the set of all nonnegative rational numbers. **Example 3.2**.: For any \(\sigma\)-additive finite measures \(\mu,\nu\), the ordered pair \((\mu,\nu)\) has decomposition property w.r.t. \(\{A_{\alpha}\}_{\alpha\in\mathbb{Q}^{+}}\), where \((A_{\alpha},A_{\alpha}^{\rm c})\) is a Hahn decomposition of the signed measure \(\mu-\alpha\nu\) (see Remark 3.8 for detail). This is why we call the ordered pair \((\mu,\nu)\) having decomposition property if \(\mu,\nu\) satisfy inequalities (1) and (2). **Lemma 3.3**.: _Let \((\mu,\nu)\) have decomposition property w.r.t. \(\{A_{\alpha}\}_{\alpha\in\mathbb{Q}^{+}}\). 
If \(\lim\limits_{\alpha\to\infty}\mu(A_{\alpha})\vee\nu(A_{\alpha})=0\), then \(\lim\limits_{\alpha\to\infty}\alpha\nu(A_{\alpha})=0\)._ **Proof.** We can assume that \(\mu(A_{\alpha})\vee\nu(A_{\alpha})<\infty\). From the first inequality in Definition 3.1, we get \[\alpha\Big{(}\nu(A_{\alpha})-\nu(A_{\beta})\Big{)}\leq\mu(A_{\alpha})-\mu(A_{ \beta})\] holds for any \(\alpha<\beta\), and hence we have \(\alpha\nu(A_{\alpha})\leq\mu(A_{\alpha})\) by letting \(\beta\to\infty\) as \(\lim\limits_{\beta\to\infty}\mu(A_{\beta})\vee\nu(A_{\beta})=0\). Thus we also have \(\alpha\nu(A_{\alpha})\) tends to \(0\) whenever \(\alpha\) tends to \(\infty\). \(\Box\) Now we show our main result -- a version of Radon-Nikodym theorem for finite monotone measures. **Theorem 3.4**.: _Let \(\mu,\nu\) be two finite monotone measures on \((U,\mathcal{U})\). Then the following two assertions are equivalent:_ (i) _The ordered pair \((\mu,\nu)\) has Radon-Nikodym property, i.e., there is a nonnegative measurable function \(f\colon U\to\overline{\mathbb{R}}_{+}\) such that_ \[\mu(A)=\int_{A}fd\nu,\ \ \forall\,A\in\mathcal{U}. \tag{3}\] (ii) _The ordered pair \((\mu,\nu)\) has decomposition property w.r.t. a sets system \(\{A_{\alpha}\}_{\alpha\in\mathbb{Q}^{+}}\) and \(\lim\limits_{\alpha\to\infty}\mu(A_{\alpha})\vee\nu(A_{\alpha})=0\)._ **Proof.** (i)\(\Longrightarrow\) (ii). Suppose there is a nonnegative measurable function \(f\) such that Eq. (3) holds, then \(f\) is Choquet integrable w.r.t. \(\nu\) on \(U\), i.e., \(\int fd\nu=\mu(U)<\infty\). First we show that \((\mu,\nu)\) has decomposition property. Put \(A_{\alpha}=\{f\geq\alpha\}\), then \(\{A_{\alpha}\}\) is decreasing and \(A_{0}=U\). For any \(A\in{\cal U}\) with finite measure (i.e., \(\mu(A)\vee\nu(A)<\infty\)), and any \(\alpha<\beta\) it holds \[\mu(A\cap A_{\alpha})-\mu(A\cap A_{\beta}) = \int_{A\cap A_{\alpha}}fd\nu-\int_{A\cap A_{\beta}}fd\nu\] \[= \int_{0}^{\infty}\Big{(}\nu(A\cap A_{\alpha}\cap A_{t})-\nu(A \cap A_{\beta}\cap A_{t})\Big{)}dt\] \[\geq \int_{0}^{\alpha}\Big{(}\nu(A\cap A_{\alpha})-\nu(A\cap A_{\beta })\Big{)}dt\] \[= \alpha\Big{(}\nu(A\cap A_{\alpha})-\nu(A\cap A_{\beta})\Big{)}.\] On the other hand, we have \[\mu(A\cap A_{\alpha})-\mu(A\cap A_{\beta}) = \int_{0}^{\beta}\Big{(}\nu(A\cap A_{\alpha}\cap A_{t})-\nu(A\cap A _{\beta})\Big{)}dt\] \[+\int_{\beta}^{\infty}\Big{(}\nu(A\cap A_{t})-\nu(A\cap A_{t}) \Big{)}dt\] \[\leq \int_{0}^{\beta}\Big{(}\nu(A\cap A_{\alpha})-\nu(A\cap A_{\beta}) \Big{)}dt\] \[= \beta\Big{(}\nu(A\cap A_{\alpha})-\nu(A\cap A_{\beta})\Big{)}.\] The assertion \(\lim\limits_{\alpha\to\infty}\nu(A_{\alpha})=0\) follows from \(\alpha\nu(A_{\alpha})\leq\int_{A_{\alpha}}fd\nu=\mu(A_{\alpha})<\infty\). Since \(f=((f-n)\lor 0)+(f\wedge n)\) and \(f=((f-n)\lor 0)\) and \((f\wedge n)\) are comonotone, then \[\int fd\nu=\int((f-n)\lor 0)d\nu+\int(f\wedge n)d\nu\] holds for each \(n\). Therefore, from \(\int fd\nu=\lim\limits_{n\to\infty}\int(f\wedge n)d\nu\) and noting that \(\int fd\nu<\infty\), we conclude that \[\lim\limits_{n\to\infty}\int((f-n)\lor 0)d\nu=0.\] Also, \[\mu(A)=\int_{A}fd\nu = \int_{A}((f-n)\lor 0)d\nu+\int_{A}(f\wedge n)d\nu\] \[\leq \int((f-n)\lor 0)d\nu+n\nu(A)\] for each \(A\in\mathcal{U}\). Specifically, \[\mu(A_{n})\leq\int((f-n)\lor 0)d\nu+n\nu(A_{n})\] for each \(n\), it follows that \(\lim\limits_{n\to\infty}\mu(A_{n})=0\). Therefore, \(\lim\limits_{\alpha\to\infty}\mu(A_{\alpha})=0\) as \(\{A_{\alpha}\}_{\alpha\geq 0}\) is a decreasing family of measurable sets. 
(ii)\(\Longrightarrow\)(i). Suppose that \((\mu,\nu)\) has decomposition property and \(\{A_{\alpha}\}_{\alpha\in\mathbb{Q}^{+}}\) is the corresponding sets system satisfying \(\lim\limits_{\alpha\to\infty}\mu(A_{\alpha})\lor\nu(A_{\alpha})=0\). We show that there is a nonnegative measurable function \(f\colon U\to\overline{\mathbb{R}}_{+}\) such that Eq. (3) holds. Define \(f\colon U\to[0,\infty]\) as \[f(x)=\sup\{\alpha\,|\,x\in A_{\alpha}\}. \tag{4}\] Then \(f\) is measurable and \(\mu(\{f=\infty\})=0\) as \(\{f=\infty\}=\bigcap\limits_{\alpha\in\mathbb{N}}A_{\alpha}\). For each positive integer \(n\), define \[f_{n}(x)=\left\{\begin{array}{ll}\frac{k-1}{2^{n}},&\mbox{if }x\in A_{\frac{k-1}{2^{n}}} \setminus A_{\frac{k}{2^{n}}},k=1,2,\cdots,n2^{n},\\ n,&\mbox{if }x\in A_{n}.\end{array}\right.\] Then \(\{f_{n}\}_{n\in\mathbb{N}}\) is an increasing sequence and \(f\wedge n-\frac{1}{2^{n}}\leq f_{n}\leq f\). Since \(f_{n}\) can be rewritten as \[f_{n}=\frac{1}{2^{n}}\sum_{k=1}^{n\cdot 2^{n}}\chi_{A_{\frac{k}{2^{n}}}},\] for any given \(A\in\mathcal{U}\), we have \[\int_{A}f_{n}d\nu=\frac{1}{2^{n}}\sum_{k=1}^{n\cdot 2^{n}}\nu(A \cap A_{\frac{k}{2^{n}}})\] \[= \sum_{k=1}^{n\cdot 2^{n}-1}\frac{k}{2^{n}}\Big{(}\nu(A\cap A_{ \frac{k}{2^{n}}})-\nu(A\cap A_{\frac{k+1}{2^{n}}})\Big{)}+n\nu(A\cap A_{n})\] \[\leq \sum_{k=1}^{n\cdot 2^{n}-1}\Big{(}\mu(A\cap A_{\frac{k}{2^{n}}})- \mu(A\cap A_{\frac{k+1}{2^{n}}})\Big{)}+n\nu(A\cap A_{n})\] \[= \mu(A\cap A_{\frac{1}{2^{n}}})-\mu(A\cap A_{n})+n\nu(A\cap A_{n}).\] It follows from the assumption and Lemma 3.3 that \[\lim_{n\to\infty}\int_{A}f_{n}d\nu\leq\lim_{n\to\infty}\mu(A\cap A_{\frac{1}{2^{n} }})\leq\mu(A).\] The inequality \(f\wedge n\leq f_{n}+\frac{1}{2^{n}}\) implies that \[\int_{A}(f\wedge n)d\nu\leq\int_{A}f_{n}d\nu+\int_{A}\frac{1}{2^{n}}d\nu=\int_ {A}f_{n}d\nu+\frac{1}{2^{n}}\nu(A).\] By virtue of Proposition 2(vi) we get \[\int_{A}fd\nu=\lim_{n\to\infty}\int_{A}(f\wedge n)d\nu\leq\lim_{n\to\infty}\int _{A}f_{n}d\nu\leq\mu(A).\] On the other hand, \[\int_{A}f_{n}d\nu=\frac{1}{2^{n}}\sum_{k=1}^{n\cdot 2^{n}}\nu(A \cap A_{\frac{k}{2^{n}}})\] \[= \sum_{k=1}^{n\cdot 2^{n}-1}\frac{k+1}{2^{n}}\Big{(}\nu(A\cap A_{ \frac{k}{2^{n}}})-\nu(A\cap A_{\frac{k+1}{2^{n}}})\Big{)}\] \[\qquad\quad+\ (n+\frac{1}{2^{n}})\nu(A\cap A_{n})-\frac{1}{2^{n}} \nu(A\cap A_{\frac{1}{2^{n}}})\] \[\geq \sum_{k=1}^{n\cdot 2^{n}-1}\Big{(}\mu(A\cap A_{\frac{k}{2^{n}}})- \mu(A\cap A_{\frac{k+1}{2^{n}}})\Big{)}\] \[\qquad\quad+\ (n+\frac{1}{2^{n}})\nu(A\cap A_{n})-\frac{1}{2^{n}} \nu(A\cap A_{\frac{1}{2^{n}}})\] \[\geq \mu(A\cap A_{\frac{1}{2^{n}}})-\mu(A\cap A_{n})+(n+\frac{1}{2^{n} })\nu(A\cap A_{n})-\frac{1}{2^{n}}\nu(A\cap A_{\frac{1}{2^{n}}}).\] By the decomposition property we have \[\mu(A)-\mu(A\cap A_{\frac{1}{2^{n}}}) = \mu(A\cap A_{0})-\mu(A\cap A_{\frac{1}{2^{n}}})\] \[\leq \frac{1}{2^{n}}(\nu(A\cap A_{0})-\nu(A\cap A_{\frac{1}{2^{n}}})) \to 0\,(n\to\infty),\] _i.e._, \(\mu(A\cap A_{\frac{1}{2^{n}}})\to\mu(A)\,(n\to\infty)\). Since both \(\mu(A_{n})\) and \(n\nu(A_{n})\) tend to \(0\) when \(n\to\infty\), it then holds that \[\int_{A}fd\nu\geq\lim_{n\to\infty}\int_{A}f_{n}d\nu\geq\mu(A)\] as \(f\geq f_{n}\) for each \(n\). Thus we reach Eq. (3), \[\mu(A)=\int_{A}fd\nu,\ \ \forall\,A\in\mathcal{U}.\] The proof is complete. \(\quad\Box\) The Radon-Nikodym theorem for classical measures concerns the absolute continuity of measures [9]. For monotone measures, there are various types of absolute continuity (see [12, 17, 23, 24]). 
Let \(\mu,\nu\) be two monotone measures on \((U,\mathcal{U})\). (1) If for any \(A\in\mathcal{U}\), \(\nu(A)=0\) implies \(\mu(A)=0\), then we say that \(\mu\) is absolutely continuous w.r.t. \(\nu\) and denoted by \(\mu\ll\nu\). (2) If for each \(\epsilon>0\) there is a \(\delta>0\) such that \(\mu(A)<\epsilon\) for all sets \(A\in\mathcal{U}\) satisfying \(\nu<\delta\), then we say that \(\mu\) is strongly absolutely continuous w.r.t. \(\nu\) and denoted by \(\mu\ll^{s}\nu\) ([12]). Obviously, \(\mu\ll^{s}\nu\) implies \(\mu\ll\nu\), but the converse is not true. Observe that Theorem 3.4(ii) implies \(\mu\ll\nu\) and \(\mu\ll^{s}\nu\). In fact, assume \(\nu(A)=0\). From the second inequality in Definition 3.1, we take \(\alpha=0,\beta>0\), then \[\mu(A)-\mu(A\cap A_{\beta})\leq\beta\Big{(}\nu(A)-\nu(A\cap A_{\beta})\Big{)},\] which implies \(\mu(A)-\mu(A\cap A_{\beta})=0\) for any \(\beta>0\). Therefore, \(\mu(A)=0\) as \(\lim_{\beta\to\infty}\mu(A_{\beta})=0\). Similarly, \(\mu\ll^{s}\nu\) is also true. Thus, we obtain necessary conditions that the Radon-Nikodym theorem in classical measure theory remains valid for the Choquet integral w.r.t. monotone measures (see also [24]). **Corollary 3.5**.: _Let \(\mu,\nu\) be two finite monotone measures on \((U,\mathcal{U})\). If there is a nonnegative measurable function \(f\colon U\to\overline{\mathbb{R}}_{+}\) such that Eq. (3) holds, i.e.,_ \[\mu(A)=\int_{A}fd\nu,\ \ \forall\,A\in\mathcal{U},\] _then \(\mu\ll\nu\) and \(\mu\ll^{s}\nu\)._ Note that the measurable function \(f\) in Theorem 3.4 is not unique in general. **Example 3.6**.: Let \(U\) be the set of all positive integers, \(\mathcal{U}\) the power set of \(U\) and \[A_{\alpha}=\left\{\begin{array}{ll}U,&\mbox{ if }\alpha\in[0,1]\cap \mathbb{Q},\\ \{2,4,6,\cdots\},&\mbox{ if }\alpha\in(1,2]\cap\mathbb{Q},\\ \emptyset,&\mbox{ if }\alpha\in(2,\infty)\cap\mathbb{Q}.\end{array}\right.\] Define \[\mu(A)=\nu(A)=\left\{\begin{array}{ll}1,&\quad\mbox{if}\,A=U,\\ 0,&\quad\mbox{otherwise.}\end{array}\right.\] It is routine to verify that \((\mu,\nu)\) has decomposition property w.r.t. \(\{A_{\alpha}\}_{\alpha\in\mathbb{Q}^{+}}\). The condition \(\lim\limits_{\alpha\rightarrow\infty}\mu(A_{\alpha})\vee\nu(A_{\alpha})=0\) is obviously satisfied. By Theorem 3.4 there exists a nonnegative function \(f\) such that \(\mu(A)=\int_{A}fd\nu\) for each \(A\). In fact \[f_{1}(x)=\sup\{\alpha\ |\ x\in A_{\alpha}\}=\left\{\begin{array}{ll}1,&\quad \mbox{if}\,\,x\mbox{ is odd}\\ 2,&\quad\mbox{if}\,\,x\mbox{ is even}\end{array}\right.\] is such a function. Note that \(f_{2}\) defined by \[f_{2}(x)=\left\{\begin{array}{ll}1,&\quad\mbox{if}\,\,x\mbox{ is even}\\ 2,&\quad\mbox{if}\,\,x\mbox{ is odd}\end{array}\right.\] also satisfies \(\mu(A)=\int_{A}f_{2}d\nu\) (there are in fact infinitely many such functions). Interestingly, \(f_{1}\) and \(f_{2}\) are different at every point and thus \(\nu(\{f_{1}\neq f_{2}\})=\nu(U)=1\). To ensure the uniqueness of \(f\), we have to impose some additional conditions. 
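Before turning to those conditions, it is worth spelling out the integral identity claimed in Example 3.6, which the example leaves to the reader; by Definition 2.2, \[\int_{U}f_{1}d\nu=\int_{0}^{\infty}\nu(\{f_{1}\geq\alpha\})d\alpha=\int_{0}^{1}\nu(U)d\alpha+\int_{1}^{2}\nu(\{2,4,6,\cdots\})d\alpha=1+0=1=\mu(U),\] while for any \(A\neq U\) monotonicity gives \(\nu(A\cap\{f_{1}\geq\alpha\})\leq\nu(A)=0\) for every \(\alpha>0\), so \(\int_{A}f_{1}d\nu=0=\mu(A)\); the same computation applies to \(f_{2}\).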
Recall that a monotone measure \(\mu\) is said to be (i) _weakly null-additive_[25], if \(\mu(A_{1}\cup A_{2})=0\) for any \(A_{1},A_{2}\in\mathcal{U}\) with \(\mu(A_{1})=\mu(A_{2})=0\); (ii) _null-continuous_[1], if \(\mu(\bigcup_{n=1}^{\infty}A_{n})=0\) for every increasing sequence \(\{A_{n}\}_{n\in N}\subset\mathcal{A}\) such that \(\mu(A_{n})=0,n=1,2,\cdots.\) The monotone measure \(\mu\) is both weakly null-additive and null-continuous if and only if \(\mu(\bigcup_{n=1}^{\infty}A_{n})=0\) whenever \(\{A_{n}\}_{n\in\mathbb{N}}\subset\mathcal{A}\) and \(\mu(A_{n})=0,n=1,2,\cdots\), see [11]. Such a monotone measure \(\mu\) is called to have _property_ (\(\sigma\)), i.e., the set of all \(\mu\)-null sets is a \(\sigma\)-ideal, see [4]. **Proposition 3.7**.: _Let \(\mu,\nu\) be two monotone measures and \(\nu\) is weakly null-additive and null-continuous. If measurable functions \(f,g\colon U\rightarrow[0,\infty]\) satisfy Eq. (3), i.e.,_ \[\mu(A)=\int_{A}fd\nu=\int_{A}gd\nu,\ \ \forall\,A\in\mathcal{U},\] _then \(f=g\ a.e.[\nu]\) (i.e., \(\nu(\{f\neq g\})=0\))._ **Proof.** Let \(A=\{f>g\}\) and \(A_{n}=\{f>g+\frac{1}{n}\}\), then \(A_{n}\nearrow A\). We conclude that \(\nu(A)=0\), otherwise there is some \(n\) such that \(\nu(A_{n})>0\) as \(\nu\) is null-continuous. Then \[\int_{A_{n}}fd\nu\geq\int_{A_{n}}(g+\frac{1}{n})d\nu = \int_{A_{n}}gd\nu+\int_{A_{n}}\frac{1}{n}d\nu\] \[= \int_{A_{n}}gd\nu+\frac{1}{n}\nu(A_{n})>\int_{A_{n}}gd\nu,\] a contradiction. It holds similarly that \(\nu(B)=0\), where \(B=\{g>f\}\). Since \(\nu\) is weakly null-additive, we have \(\nu(A\cup B)=0\). As a consequence \(\nu(\{f\neq g\})=\nu(A\cup B)=0\). \(\Box\) Note: We also obtain \(f=g\ a.e.[\mu]\) (i.e., \(\mu(\{f\neq g\})=0\)). Now by using Proposition 3.7 we can propose the concept of Radon-Nikodym derivative for monotone measures. In Theorem 3.4, we consider that \(\nu\) is weakly null-additive and null-continuous (i.e., \(\nu\) has _property_\((\sigma)\)), then the measurable function \(f\) on \(U\) for which Eq. (3) holds is called a Radon-Nikodym derivative (or Radon-Nikodym density) of \(\mu\) w.r.t. \(\nu\), and denoted by \(\frac{d\mu}{d\nu}\) and Eq. (3) will be written as \(f=\frac{d\mu}{d\nu}\) or \(d\mu=fd\nu\). Thus, the preceding Theorem 3.4 asserts that if the ordered pair \((\mu,\nu)\) satisfies the condition (ii) and \(\nu\) has _property_\((\sigma)\), then any two Radon-Nikodym derivatives of \(\mu\) w.r.t. \(\nu\) are equal \(a.e.\ [\nu]\), and so the notation \(\frac{d\mu}{d\nu}\) is only ambiguous up to a \(\nu\)-null set. We can discuss some properties of Radon-Nikodym derivative. For example, suppose that \(\mu,\lambda\) and \(\nu\) are finite monotone measures on \((U,\mathcal{U})\) and \(\nu\) has _property_\((\sigma)\), and \((\mu,\nu)\) and \((\lambda,\nu)\) satisfy the condition (ii) in Theorem 3.4, respectively, then \(\frac{d\mu}{d\nu}\) and \(\frac{d\lambda}{d\nu}\) exist. We write \(\frac{d\mu}{d\nu}=f\) and \(\frac{d\lambda}{d\nu}=g\), if \(f\) and \(g\) are comonotone, then we have \[\frac{d(\mu+\lambda)}{d\nu}=\frac{d\mu}{d\nu}+\frac{d\lambda}{d\nu}\ \ a.e.[\nu].\] **Remark 3.8**.: Let \(\mu,\nu\) be \(\sigma\)-additive finite measures. For each nonnegative rational number \(\tau\) the signed measure \(\mu-\tau\nu\) has a Hahn decomposition, _i.e._, there is a measurable set \(A_{\tau}\) such that \(A_{\tau}\) is a positive set of \(\mu-\tau\nu\) and \(A_{\tau}^{c}\) is a negative set of \(\mu-\tau\nu\). 
Since \(A_{\tau}\) is also a positive set of \(\mu-\gamma\nu\) for any \(\gamma\leq\tau\) without loss of generality we can suppose that \(\{A_{\tau}\}_{\tau\in\mathbb{Q}^{+}}\) is decreasing. If \(\tau=0\), then \(U\) itself is a positive set of \(\mu-\tau\nu=\mu\) and so \(A_{0}=U\). Let \(\alpha<\beta\) be given. Since \(A_{\alpha}\) is a positive set of \(\mu-\alpha\nu\), for any \(A\in\mathcal{U}\), \((\mu-\alpha\nu)(A\cap(A_{\alpha}\setminus A_{\beta}))\geq 0\), _i.e._, \[\alpha\Big{(}\nu(A\cap A_{\alpha})-\nu(A\cap A_{\beta})\Big{)}\leq\mu(A\cap A _{\alpha})-\mu(A\cap A_{\beta}).\] Similarly, \(A_{\beta}^{c}\) is a negative set of \(\mu-\beta\nu\) and hence is a positive set of \(\beta\nu-\mu\). For any \(A\in\mathcal{U}\), \((\beta\nu-\mu)(A\cap(A_{\alpha}\setminus A_{\beta}))\geq 0\), _i.e._, \[\mu(A\cap A_{\alpha})-\mu(A\cap A_{\beta})\leq\beta\Big{(}\nu(A\cap A_{\alpha })-\nu(A\cap A_{\beta})\Big{)}.\] Thus \((\mu,\nu)\) has decomposition property and \((A_{\alpha},A_{\alpha}^{c})\) is a Hahn decomposition of the signed measure \(\mu-\alpha\nu\). If \(\mu,\nu\) further satisfy \(\mu\ll\nu\), then we have \(\lim\limits_{\alpha\to\infty}\nu(A_{\alpha})=0\) and hence \(\lim\limits_{\alpha\to\infty}\mu(A_{\alpha})=0\). In fact, if \(\lim\limits_{\alpha\to\infty}\nu(A_{\alpha})>0\), then \((\mu-\alpha\nu)(A_{\alpha})\to-\infty\) when \(\alpha\to\infty\), contradicting with the fact that \(A_{\alpha}\) is a positive set of \(\mu-\alpha\nu\). On the other hand, if \(\lim\limits_{\alpha\to\infty}\mu(A_{\alpha})\vee\nu(A_{\alpha})=0\), then \(\mu\ll\nu\). To see this, let \(\nu(A)=0\) be given. For any \(\beta>0\), \[\mu(A\cap A_{0})-\mu(A\cap A_{\beta})\leq\beta(\nu(A\cap A_{0})-\nu(A\cap A_{ \beta}))=0,\] which implies \(\mu(A)=0\) as \(A_{0}=U\) and \(\mu(A_{\beta})\to 0\,(\beta\to\infty)\). In conclusion, for two \(\sigma\)-additive finite measures \(\mu,\nu\), the pair \((\mu,\nu)\) has decomposition property, and \(\mu\ll\nu\) if and only if \(\lim\limits_{\alpha\to\infty}\mu(A_{\alpha})\vee\nu(A_{\alpha})=0\). As a special case of Theorem 3.4, we thus obtain a classical Radon-Nikodym theorem. **Corollary 3.9**.: _Let \(\mu,\nu\) be \(\sigma\)-additive finite measures on \((U,\mathcal{U})\). There exists a nonnegative measurable function \(f\colon U\to\overline{\mathbb{R}}_{+}\) such that_ \[\mu(A)=\int_{A}fd\nu,\ \ \forall\,A\in\mathcal{U}\] _if and only if \(\mu\ll\nu\)._ Let \(\mu,\nu\) be bounded finitely additive measures such that for every \(\epsilon>0\) there exists a finite decomposition of \(U\), \(\{A_{1},\cdots,A_{n}\}\subset\mathcal{U}\), satisfying \(\epsilon\). For such measures, Candeloro and Martellotti proved in [3] that if \(\mu\ll^{s}\nu\) and the set \(\{(\mu(A),\nu(A))|A\in\mathcal{U}\}\) is closed, then \((\mu,\nu)\) satisfies all requirements in Theorem 3.4(ii) and thus \((\mu,\nu)\) has Radon-Nikodym property. **Remark 3.10**.: There are several papers dealing with Radon-Nikodym theorem for monotone measures (see, for example, Graf [7], Greco [8] and Nguyen et al. [15, 16]). Graf obtained his result under the assumption that the monotone measures are subadditive and lower continuous, while Nguyen et al. demanded the monotone measures being \(\sigma\)-subadditive. 
Greco [8] (see also Theorem 1.2 in Candeloro, Volcic [4]) posed an additional requirement that \[(*)\ \ \ \ \ \mu(S)=\nu(S)=0\Rightarrow\nu(A\cup S)=\nu(A),\ \ \forall A\in \mathcal{U}.\] Specifically, under the condition \((*)\), Greco proved that there is a nonnegative function \(f\) such that \[\mu(A)=\int_{A}fd\nu,\ \ \forall A\in\mathcal{U}\] if and only if \((\mu,\nu)\) satisfies a strong decomposition property (S.D.P. for short, see [7]) w.r.t. a sets system \(\{A_{\alpha}\}\) and \(\lim_{\alpha\to\infty}\mu(A_{\alpha})=0\). It is not difficult to see that S.D.P. together with \(\{A_{\alpha}\}\) and \(\lim_{\alpha\to\infty}\mu(A_{\alpha})=0\) implies that \(\mu\ll\nu\), hence the condition \((*)\) says in fact that \(\nu\) is null-additive. In contrast to these results, our result have no additional requirements for monotone measures other than a set of sufficient and necessary conditions. Interestingly, Example 3.6 shows that the Radon-Nikodym theorem can hold even for monotone measures without weakly null-additivity. ## 4 Radon-Nikodym theorem for \(\sigma\)-finite monotone measures Before presenting a Radon-Nikodym theorem for \(\sigma\)-finite monotone measures, we need some further properties of the Choquet integral. Note that if \((\mu,\nu)\) has decomposition property w.r.t. a sets system \(\{A_{\alpha}\}_{\alpha\in\mathbb{Q}^{+}}\) then for any nonempty set \(V\in{\cal U}\) the ordered pair \((\mu|_{V},\nu|_{V})\) also has decomposition property w.r.t. the system \(\{A_{\alpha}\cap V\}_{\alpha\in{\mathbb{Q}}^{+}}\). A monotone measure \(\mu\) is said to be (i) _lower continuous_ (or _continuous from below_) if for any \(\{A_{n}\}\subset{\cal U}\) with \(A_{n}\nearrow A\), it holds that \(\mu(A)=\lim\limits_{n\to\infty}\mu(A_{n})\); (ii) _null-additive_ if \(\mu(A\cup N)=\mu(A)\) for any \(A,N\in{\cal U}\) with \(\mu(N)=0\). Obviously, lower continuity implies null-continuity and null-additivity implies weak null-continuity, but not vice versa (see [11]). **Proposition 4.1**.: _[_6, 22_]_ _Let \((U,{\cal U},\nu)\) be a monotone measure space and \(f,g,f_{n}\)\((n=1,2,\cdots)\) be nonnegative measurable functions._ (i) _If \(f=g\)\(a.e.[\nu]\) and \(\nu\) is null-additive, then \(\int fd\nu=\int gd\nu\)._ (ii) _If \(\nu\) is lower continuous and \(f_{n}\nearrow f\), then \(\lim\limits_{n\to\infty}\int f_{n}d\nu=\int fd\nu\)._ In the following we suppose that the monotone measures \(\mu,\nu\) are \(\sigma\)-finite. Without loss of generality, we can assume that there is \(\{U_{n}\}_{n=1}^{\infty}\subset{\cal U}\) with \(U_{n}\nearrow U\) such that for every \(n\), \(\mu(U_{n})<\infty\) and \(\nu(U_{n})<\infty\) hold simultaneously. **Theorem 4.2**.: _Let \(\mu,\nu\) be \(\sigma\)-finite and lower continuous and \(\nu\) be null-additive. If \((\mu,\nu)\) has decomposition property w.r.t. the system \(\{A_{\alpha}\}_{\alpha\in{\mathbb{Q}}^{+}}\) and \(\lim\limits_{\alpha\to\infty}\mu(A_{\alpha}\cap U_{n})\vee\nu(A_{\alpha}\cap U_ {n})=0\) for each \(n\), then there is a nonnegative and finite a.e.[\(\nu\)] measurable function \(f\) such that_ \[\mu(A)=\int_{A}fd\nu,\ \ \forall\,A\in{\cal U}.\] _In this case, \(f\) is unique \(a.e.[\nu]\)._ **Proof.** For each \(n\), \((\mu|_{U_{n}},\nu|_{U_{n}})\) also has decomposition property w.r.t. the system \(\{A_{\alpha}\cap U_{n}\}_{\alpha\in{\mathbb{Q}}^{+}}\). 
Since \(\lim\limits_{\alpha\to\infty}\mu(A_{\alpha}\cap U_{n})\vee\nu(A_{\alpha}\cap U_ {n})=0\) also holds, according to Theorem 3.4, there is a nonnegative measurable function \(f_{n}\) on \(U_{n}\) such that \[\mu|_{U_{n}}(E)=\int_{E}f_{n}d\nu|_{U_{n}}\] for each measurable subset \(E\) of \(U_{n}\). Equivalently, for each \(A\in{\cal U}\) we have \[\mu|_{U_{n}}(A_{n})=\int_{A_{n}}f_{n}d\nu|_{U_{n}},\] where \(A_{n}=A\cap U_{n}\). By Proposition 3.7, for \(n>m\) we have \(f_{n}|_{U_{m}}=f_{m}\ a.e.[\nu]\). Without loss of generality, by Proposition 4.1(i) we can assume that \(f_{n}|_{U_{m}}=f_{m}\) as \(\nu\) is null-additive. Let \(\tilde{f}_{n}(u)=f_{n}(u)\) for \(u\in U_{n}\) and \(\tilde{f}_{n}(u)=0\) for \(u\in U\setminus U_{n}\). Then \[\mu(A_{n})=\int_{A_{n}}\tilde{f}_{n}d\nu\] holds for each \(n\). Note that the sequence \(\{\tilde{f}_{n}\}\) is nondecreasing and thus it is convergent everywhere. Denote \(f=\lim\limits_{n\to\infty}\tilde{f}_{n}\), then \(f|_{U_{n}}=\tilde{f}_{n}\) for each \(n\). Thus \[\{f=\infty\}=\bigcup\limits_{n=1}^{\infty}\Big{(}U_{n}\cap\{f=\infty\}\Big{)} =\bigcup\limits_{n=1}^{\infty}\{\tilde{f}_{n}=\infty\},\] which implies that \(f\) is finite a.e.\([\nu]\) as \(\nu\) is null-additive and lower continuous. Moreover, \(f|_{U_{n}}=\tilde{f}_{n}\) also implies that \[\mu(A_{n})=\int_{A_{n}}fd\nu.\] Since \(A_{n}\nearrow A\), again by the lower continuity of \(\mu,\nu\) we reach the final conclusion \[\mu(A)=\lim\limits_{n\to\infty}\mu(A_{n})=\lim\limits_{n\to\infty}\int_{A_{n}} fd\nu=\int_{A}fd\nu.\] The uniqueness of \(f\) follows from (i) of Proposition 4.1. This completes the proof. \(\Box\) **Example 4.3**.: Let \(U\) be the set of natural numbers and \(\mathcal{U}\) be the power set of \(U\). Let \(\nu(A)=1\) for \(A\neq\emptyset\), \(\mu(A)=\max A\) if \(A\) is finite and \(\mu(A)=\infty\) if \(A\) is infinite. Then (i) \(\nu\) is lower continuous and null-additive; (ii)\(\mu\) is lower continuous; (iii) \(\nu\) is finite and \(\mu\) is \(\sigma\)-finite. Let \(A_{\alpha}=[\alpha,\infty)\cap U\) for each \(\alpha\in\mathbb{Q}^{+}\). Then we can verify that \((\mu,\nu)\) has decomposition property w.r.t. the system \(\{A_{\alpha}\}_{\alpha\in\mathbb{Q}^{+}}\). It is easy to see that \(\mu(A)=\int_{A}fd\nu,\forall\,A\in\mathcal{U}\) for \(f(x)=\sup\{\alpha|x\in A_{\alpha}\}=x\). Note that \(\lim\limits_{\alpha\to\infty}\mu(A_{\alpha})=\infty\) and \(\lim\limits_{\alpha\to\infty}\nu(A_{\alpha})=1\). But \(\lim\limits_{\alpha\to\infty}\mu(A_{\alpha}\cap U_{n})\vee\nu(A_{\alpha}\cap U _{n})=0\) for each \(n\), where \(U_{n}=\{0,1,2,\cdots,n\}\). ## 5 Concluding remarks We have presented a version of the Radon-Nikodym theorem for finite monotone measures (Theorem 3.4). As we have seen, we introduced the decomposition property of the ordered pair \((\mu,\nu)\) of monotone measures (Definition 3.1) and showed a necessary and sufficient condition that the Radon-Nikodym theorem holds for the Choquet integral w.r.t. finite monotone measures. We point out that our version has no additional conditions for finite monotone measures (such as, subadditivity, or \(\sigma\)-subadditivity, or continuity from below, etc.) other than monotonicity. The proof of this result is dependent on a distinguished feature, namely, the comonotonic additivity, of the Choquet integral. The uniqueness of Radon-Nikodym derivative and the case of \(\sigma\)-finite monotone measures have also been considered. 
Apart from the Choquet integral, there are other nonlinear integrals (the concave integral [10] and pan-integral [25] for example) in the literature that extend the Lebesgue integral. So, it would be an interesting topic to explore the Radon-Nikodym theorem for these integrals. As these integrals lack the comonotonic additivity, we need to seek new decomposition properties and techniques. The known relationships among these integrals [10, 13, 18] may be useful.
2306.17444
Controlling photons by phonons via giant atom in a waveguide QED setup
We investigate the single photon scattering in a phonon-photon hybrid system in the waveguide QED scheme. In our consideration, an artificial giant atom, which is dressed by the phonons in a surface acoustic wave resonator, interacts with a coupled resonator waveguide (CRW) nonlocally via two connecting sites. Together with the interference effect by the nonlocal coupling, the phonon serves as a controller to the transport of the photon in the waveguide. On the one hand, the coupling strength between the giant atom and the surface acoustic wave resonator modulates the width of the transmission valley or window in the near resonant regime. On the other hand, the two reflective peaks induced by the Rabi splitting degrade into a single one when the giant atom is large detuned from the surface acoustic resonator, which implies an effective dispersive coupling. Our study paves the way for the potential application of giant atoms in the hybrid system.
Xinyu Li, Wei Zhao, Zhihai Wang
2023-06-30T07:36:57Z
http://arxiv.org/abs/2306.17444v1
# Controlling photons by phonons via giant atom in a waveguide QED setup ###### Abstract We investigate the single photon scattering in a phonon-photon hybrid system in the waveguide QED scheme. In our consideration, an artificial giant atom, which is dressed by the phonons in a surface acoustic wave resonator, interacts with a coupled resonator waveguide (CRW) nonlocally via two connecting sites. Together with the interference effect by the nonlocal coupling, the phonon serves as a controller to the transport of the photon in the waveguide. On the one hand, the coupling strength between the giant atom and the surface acoustic wave resonator modulates the width of the transmission valley or window in the near resonant regime. On the other hand, the two reflective peaks induced by the Rabi splitting degrade into a single one when the giant atom is large detuned from the surface acoustic resonator, which implies an effective dispersive coupling. Our study paves the way for the potential application of giant atoms in the hybrid system. ## I Introduction Waveguide quantum electrodynamics (QED) [1; 2] mainly studies the interaction between the limited light field in the waveguides and matter at the quantum level. Due to the achievable strong coupling between light and matter, the superconducting circuit provides an ideal platform for realizing and exploring the physical properties of waveguide QED [3], such as resonance fluorescence [4; 5; 6], collective Lamb shifts [7] and Dicke superradience and sub-radiance [8; 9; 10]. Meanwhile, as the carrier of the information, the propagation of the photon in the waveguide can be controlled by a two or three-level system. In such a manner, the single and few photon scattering has attracted lots of interest, which is aiming to design coherent quantum devices, for example, quantum transistors [11], routers [12] and frequency converters [13]. In the conventional quantum optics scenario, the size of the natural atom, whose radius is in the order of \(10^{-10}\,\mathrm{m}\), is much smaller than the wavelength of the photons (\(\lambda\approx 10^{-7}-10^{-6}\,\mathrm{m}\)) in the resonator or waveguide, therefore the dipole approximation is usually applied by considering that the electromagnetic field is uniform in the atomic regime [14]. However, a pioneering experimental work in 2014 suggests that the superconducting qubit can be coupled to the phonon field in the surface nonlocally [15]. Such a system is named as "giant atom" by Kockum. The giant atom exhibits some interesting quantum effects which do not exist in the small atom setup, such as frequency dependent Lamb shift [7], non-Markovian oscillation bound state [16; 17; 18; 19], chiral physics [20; 21; 22; 23; 24] and so on. Meanwhile, the giant atom model is also proposed in the cold atom system [25] and synthetic dimension [26; 27]. Recently, the giant atom with more than two coupling points or the coupling between more giant atoms and the waveguide has also been realized in the superconducting circuits [28; 29; 30]. The artificial superconducting qubit, which serves as a giant atom, can simultaneously couple to the phonon and photon in the microwave frequency. Therefore, via the data bus supplied by the giant atom, it is possible to design the photon-phonon hybrid system, to realize the mutual control between the photons and phonons. In this letter, we propose such a model in the context of the waveguide QED as shown in Fig. 1. 
Unlike Delsing's experiment [15], where the giant atom couples to the surface acoustic wave (SAW) nonlocally, we here confine the SAW in a resonator, which locally couples to the two-level system (referred to as the atom in what follows). To demonstrate the effect of the giant atom, we further couple the atom to a CRW, which supports the propagation of microwave photons. In this way, we show how to control the transport of the photon by tuning the phononic degree of freedom in the SAW resonator. When the atom and the SAW resonator resonantly couple to each other, we demonstrate the Rabi splitting in the single-photon reflection spectrum, and a stronger atom-SAW resonator coupling is beneficial for widening the reflection valley. Due to the photonic interference between the two atom-waveguide connecting points, we also observe a transmission window under certain conditions, and this window can also be widened by increasing the atom-SAW resonator coupling. Therefore, the giant atom supplies us with an unconventional way to manipulate the scattering of photons in the waveguide. ## II Model and Hamiltonian As schematically shown in Fig. 1, the system we consider is composed of a two-level system which is coupled to both a SAW resonator and a one-dimensional CRW of infinite length. The two-level system, which serves as a giant atom, couples to the waveguide nonlocally via two separate sites. The Hamiltonian \(\mathcal{H}\) of the system can be divided into three parts, i.e., \(\mathcal{H}=\mathcal{H}_{0}+\mathcal{H}_{c}+\mathcal{H}_{I}\). The first part is (hereafter we set \(\hbar=1\)) \[\mathcal{H}_{0}=\omega_{0}a^{\dagger}a+\Omega\ket{e}\bra{e}+\lambda\left(\sigma^{+}a+a^{\dagger}\sigma^{-}\right), \tag{1}\] which describes the coupling between the giant atom and the SAW resonator. This coupling can be realized via interdigital transducers (IDTs) [31; 32; 33; 34]. Here, \(a\) is the annihilation operator of the SAW resonator with frequency \(\omega_{0}\), and \(\Omega\) is the transition frequency of the giant atom between its ground state \(\ket{g}\) and the excited state \(\ket{e}\). As a reference, we have set the frequency of the ground state \(\ket{g}\) as \(\omega_{g}=0\). \(\sigma^{+}=(\sigma^{-})^{\dagger}=\ket{e}\bra{g}\) is the raising operator. The real number \(\lambda\) is the magnitude of the coupling constant between the giant atom and the SAW resonator. In the above Hamiltonian, we have used the rotating wave approximation by considering the parameter regime of \(\lambda\ll(\omega_{0},\Omega)\). The second part \(\mathcal{H}_{c}\) of the Hamiltonian \(\mathcal{H}\) represents the free Hamiltonian of the CRW, which can be written as \[\mathcal{H}_{c}=\omega_{c}\sum_{j}b_{j}^{\dagger}b_{j}-\xi\sum_{j=-\infty}^{+\infty}\left(b_{j+1}^{\dagger}b_{j}+b_{j}^{\dagger}b_{j+1}\right), \tag{2}\] where \(\omega_{c}\) is the frequency of the resonators, \(b_{j}\) is the boson annihilation operator on site \(j\), and \(\xi\) is the hopping strength between nearest-neighbour resonators. The third part \(\mathcal{H}_{I}\) of the Hamiltonian describes the coupling between the CRW and the giant atom via the \(0\)th and \(N\)th sites. Under the rotating wave approximation, the Hamiltonian \(\mathcal{H}_{I}\) can be written as \[\mathcal{H}_{I}=g\left(b_{0}^{\dagger}\sigma^{-}+b_{0}\sigma^{+}\right)+g\left(b_{N}^{\dagger}\sigma^{-}+b_{N}\sigma^{+}\right), \tag{3}\] where \(g\) is the coupling strength between the CRW and the giant atom, which has been assumed to be a real number.
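As a side illustration of the model just defined, the Hamiltonian of Eqs. (1)-(3) restricted to the single-excitation sector can be written down numerically. The sketch below is illustrative only: the CRW is truncated to a finite length, and the length, coupling-site placement, and parameter values (in units of \(\xi\)) are our own choices rather than anything prescribed in the paper.

```python
import numpy as np

# Illustrative parameters in units of the hopping strength xi (placeholder values,
# chosen to match the resonant case Omega = omega_0 = omega_c considered later).
xi = 1.0
omega_c = omega_0 = Omega = 20.0 * xi
g, lam = 0.5 * xi, 0.2 * xi
L, N = 61, 2               # truncated CRW length (illustrative) and coupling-point separation
j0 = L // 2                # index of the first coupling site; the second one is j0 + N

# Single-excitation basis ordering: |e>, one SAW phonon, then one photon on each CRW site.
dim = 2 + L
H = np.zeros((dim, dim))
H[0, 0] = Omega                          # giant-atom excited state, Eq. (1)
H[1, 1] = omega_0                        # SAW resonator, Eq. (1)
H[0, 1] = H[1, 0] = lam                  # atom-SAW coupling, Eq. (1)
for j in range(L):                       # CRW on-site energies and hopping, Eq. (2)
    H[2 + j, 2 + j] = omega_c
    if j + 1 < L:
        H[2 + j, 2 + j + 1] = H[2 + j + 1, 2 + j] = -xi
for site in (j0, j0 + N):                # nonlocal atom-waveguide coupling, Eq. (3)
    H[0, 2 + site] = H[2 + site, 0] = g

evals = np.linalg.eigvalsh(H)
print(evals[np.argsort(np.abs(evals - Omega))[:4]])  # levels closest to the atomic frequency
```

Diagonalizing such a truncated Hamiltonian is one simple way to explore how the atom-phonon dressed states sit inside the photonic band before turning to the scattering treatment below.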
## III Single-photon scattering In this section, we discuss the behavior of single-photon scattering. We consider a single photon with wave vector \(k\) incident from the left side of the CRW. Since the excitation number in the system is conserved, the eigenstate in the single-excitation subspace can be written as \[\ket{\psi}=\left(v_{a}a^{\dagger}+u_{e}\sigma^{+}+\sum_{j}c_{j}b_{j}^{\dagger}\right)\ket{G}, \tag{4}\] where \(\ket{G}\) is the ground state of the whole hybrid system. The parameters \(v_{a}\) and \(u_{e}\) are the excitation amplitudes of the SAW resonator and the giant atom, respectively. \(c_{j}\) is the probability amplitude for finding a photon excited in the \(j\)th resonator of the CRW. In the regimes of \(j<0\) and \(j>N\), the amplitude \(c_{j}\) can be written in the form \[c_{j}=\begin{cases}e^{ikj}+re^{-ikj},&j<0\\ te^{ikj},&j>N,\end{cases} \tag{5}\] where \(r\) and \(t\) are respectively the single-photon reflection and transmission amplitudes. Hereafter, the wave vector \(k\) is considered to be dimensionless by setting the distance between two nearest resonators to be unity. In the regime covered by the giant atom, the photon propagates back and forth, and the amplitude \(c_{j}\) for \(0\leqslant j\leqslant N\) can be expressed as \[c_{j}=Ae^{ikj}+Be^{-ikj}. \tag{6}\] Solving the Schrödinger equation \(H\ket{\psi}=E\ket{\psi}\) in the region of \(j\neq 0,N\) yields the dispersion relation \(E=\omega_{c}-2\xi\cos k\). Furthermore, the continuity conditions at \(j=0\) and \(j=N\) tell us \(1+r=A+B\) and \(Ae^{ikN}+Be^{-ikN}=te^{ikN}\), respectively. Combining the above formulas, the reflection rate \(R=\left|r\right|^{2}\) can be obtained as \[R=\frac{4g^{4}\Delta_{k}^{2}\cos^{4}\frac{kN}{2}}{4g^{4}\Delta_{k}^{2}\cos^{2}\frac{kN}{2}+\xi^{2}Q^{2}\sin^{2}k+2\xi g^{2}Q\Delta_{k}\sin k\sin kN}, \tag{7}\] where \(\Delta=E-\Omega\) is the detuning between the giant atom and the propagating photons in the CRW, \(\Delta_{k}=\omega_{0}-E\) is the detuning between the propagating photons in the CRW and the SAW resonator, and the function \(Q\) is defined as \(Q=\left[\Delta\left(\Delta+\Omega-\omega_{0}\right)-\lambda^{2}\right]\). In Fig. 2, we demonstrate the reflection rate \(R\) as a function of the photon-atom detuning \(\Delta\) by considering that the giant atom is resonant with both the SAW resonator and the bare resonator in the CRW, that is, \(\Omega=\omega_{0}=\omega_{c}\).

Figure 1: Schematic diagram of simultaneous coupling of a giant atom with a SAW resonator and a CRW.

Figure 2: The reflection rate \(R\) as functions of detuning \(\Delta\) for odd \(N\) in (a) and even \(N\) in (b). The parameters are set as \(g=0.5\xi\), \(\lambda=0.2\xi\), \(\omega_{0}=\omega_{c}=\Omega=20\xi\).

Specifically, we illustrate the reflection rate when \(N\) is odd and even in Fig. 2 (a) and (b), respectively. For odd \(N\), we find that the incident photon is completely transmitted when it is resonant with the giant atom, that is, \(R=0\) when \(\Delta=0\). This is dramatically different from the case without the SAW resonator, in which \(R=0.5\) for \(\Delta=0\) [35]. In this sense, the phonon in the SAW resonator can be used to modulate the photon scattering in the CRW. In the regime of \(\Delta\neq 0\), the single-photon reflection exhibits an asymmetric line shape. The modulation of the single-photon scattering by the phonon in the SAW resonator is even more interesting for the case of even \(N\). As shown in Fig.
2 (b), the peaks of the Rabi splitting experience a slight shift in the giant atom setup compared to the small atom case with \(N=0\). This shift is induced by the photon reflection via the two atom-CRW coupling sites. More interestingly, the narrow valley for \(N=4m\), \(m\in\mathbb{Z}\) (\(N=0,4,...\)) is replaced by a relatively wide transmission window for \(N=4m+2\) (\(N=2,6,...\)). It is then necessary to investigate the effect of the atom-CRW and atom-SAW resonator couplings on the valley and window for the case of even \(N\). Taking \(N=4\) and \(N=2\) as examples, we demonstrate the results in Fig. 3. Comparing Fig. 3 (a) with (b), we find that the atom-SAW resonator coupling strength \(\lambda\) is more effective at controlling the width of the valley. That is, the width is nearly independent of the value of \(g\) as shown in Fig. 3 (a), but a larger \(\lambda\) clearly widens the valley as shown in Fig. 3 (b). However, as shown in Fig. 3 (a), the atom-CRW coupling can be used to widen the peaks which are induced by the Rabi splitting. This can be explained by considering the waveguide as a structured environment: a stronger \(g\) induces a larger dissipation of the atom-SAW resonator dressed states, which is exhibited by the wider peaks. In Fig. 3 (c) and (d), we also find that the width of the photonic transmission window is more sensitive to the atom-SAW resonator coupling strength \(\lambda\) than to \(g\). ## IV Larger detuning In the above discussions, we have found that the atom-SAW resonator coupling strength can be used to modulate the behavior of the single photon scattering in the waveguide when the atom is resonant with the SAW resonator. Since the detuning between the atom and the SAW resonator changes the nature of the effective coupling between them, it is expected that the two peaks in the reflection spectrum will merge into a single one, as shown in Fig. 4(a). This can be explained by the dispersive coupling between the giant atom and the SAW resonator. In the case of large detuning \(\lambda\ll|\omega_{0}-\Omega|\), we introduce the widely used Schrieffer-Wolff transformation [36; 37; 38; 39; 40] to derive the effective Hamiltonian \(\mathcal{H}_{\text{eff}}=e^{-S}\mathcal{H}e^{S}\), where the generator is \[S=\frac{\lambda}{\Delta_{c}}(a\ket{e}\bra{g}-a^{\dagger}\ket{g}\bra{e}), \tag{8}\] with \(\Delta_{c}=\omega_{0}-\Omega\) being the detuning between the giant atom and the SAW resonator.
Up to the second order of \(\lambda/\Delta_{c}\), the effective Hamiltonian is obtained as \[\mathcal{H}_{\text{eff}} =\omega_{0}a^{\dagger}a+\Omega|e\rangle\langle e|-\frac{\lambda^{ 2}}{\Delta_{c}}\left(aa^{\dagger}|e\rangle\langle e|-a^{\dagger}a|g\rangle \langle g|\right)\] \[+\omega_{c}\sum_{j}b_{j}^{\dagger}b_{j}-\xi\sum_{j=-\infty}^{+ \infty}\left(b_{j+1}^{\dagger}b_{j}+b_{j}^{\dagger}b_{j+1}\right)\] \[+g\left[\left(b_{0}^{\dagger}+b_{N}^{\dagger}\right)\sigma^{-}+ \text{H.c.}\right]\] \[+\frac{\lambda^{2}g}{2\Delta_{c}^{2}}\left[\left(b_{0}^{\dagger}+b _{N}^{\dagger}\right)\sigma^{-}+\text{H.c.}\right]\] \[+\frac{\lambda}{\Delta_{c}}g\left[\left(b_{0}^{\dagger}+b_{N}^{ \dagger}\right)a+\text{H.c.}\right]\left(|g\rangle\langle g|-|e\rangle\langle e |\right)\] \[+\frac{\lambda^{2}g}{\Delta_{c}^{2}}\left[\left(b_{0}^{\dagger}+b _{N}^{\dagger}\right)a+\text{H.c.}\right]a\sigma^{+}\] \[+\frac{\lambda^{2}g}{\Delta_{c}^{2}}a^{\dagger}\left[\left(b_{0}^ {\dagger}+b_{N}^{\dagger}\right)a+\text{H.c.}\right]\sigma^{-} \tag{9}\] Here, the first line represents the effective Hamiltonian of the atom-SAW resonator system, and the second line is the CRW Hamiltonian, which is not changed by the unitary transformation (same with \(\mathcal{H}_{c}\)), and the rest parts are the effective coupling between the atom-SAW resonator and the CRW. Based on the effective Hamiltonian \(\mathcal{H}_{\text{eff}}\), we can still apply the analysis of the wave function in Eqs. (4,5,6) to obtain the single photon reflection rate \(R^{\prime}\). However, it is too cumbersome to give the analytical expressions here. Therefore, we resort to a numerical calculation and illustrate the result in Fig. 4(b). As a comparison, we also plot the result of \(R\). The good agreement between results shows the validity of the Schrieffer-Wolff transformation approach in the case of large detuning. The Schrieffer-Wolff transformation supplies a way to understand why the two peaks are replaced by a single one in Fig. 4 (a) in the large detuning. As shown by the first line of Eq. (9), which shows that the giant atom and the SAW resonator will not exchange excitation, that is, they form an effective dispersive coupling. As a result, we can not observe the Rabi splitting, which occurs in the resonantly coupling regime. One should note that the mechanism for the coalesce of the two peaks is completely different from that in Ref [31]. In the later literature, the authors state that the coalesce is induced by the fact that the high temperature destroys the quantum nature of the system. ## V Remarks and conclusions The giant atom setup has been experimentally realized recently [3; 15; 29; 30], where the giant atom is served by superconducting qubit or magnon spin ensemble. With the available technologies, the coupling between the SAW and the superconducting qubit has been realized with coupling strength \(\lambda/(2\pi)\approx 20\) MHz [31], the controllable CRW has also been realized by the high-impedance microwave resonators and the nearest hopping strength has been achieved by \(\xi\approx 200\) MHz. The qubit-resonator coupling strength is approximately \(300\) MHz [43] and a weaker coupling \(\lambda=0.5\xi\) is undoubtedly available. The loss of the qubit as well as the phonon and photon mode are in the regime of tens of kHz [31; 43], which is three orders weaker than the above coupling strength, and is therefore neglected here. 
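As a purely numerical cross-check of the scattering results above, the closed-form reflection rate in Eq. (7) can be evaluated directly. The sketch below is illustrative only: the function name, parameter defaults, and detuning grid are ours, with the parameter values following the resonant case of Fig. 2.

```python
import numpy as np

def reflection(delta, N, xi=1.0, g=0.5, lam=0.2, omega0=20.0, omegac=20.0, Omega=20.0):
    """Reflection rate R of Eq. (7); delta = E - Omega is the photon-atom detuning."""
    E = Omega + delta
    k = np.arccos(np.clip((omegac - E) / (2 * xi), -1.0, 1.0))  # from E = omega_c - 2 xi cos k
    delta_k = omega0 - E
    Q = delta * (delta + Omega - omega0) - lam ** 2
    num = 4 * g ** 4 * delta_k ** 2 * np.cos(k * N / 2) ** 4
    den = (4 * g ** 4 * delta_k ** 2 * np.cos(k * N / 2) ** 2
           + xi ** 2 * Q ** 2 * np.sin(k) ** 2
           + 2 * xi * g ** 2 * Q * delta_k * np.sin(k) * np.sin(k * N))
    return num / den

deltas = np.linspace(-0.6, 0.6, 1201)
for N in (1, 2, 4):                       # odd versus even coupling-point separations
    R = reflection(deltas, N)
    i0 = np.argmin(np.abs(deltas))        # index of Delta = 0
    print(N, R[i0], R.max())              # full transmission at resonance; peaks away from it
```

Sweeping \(\Delta\) over such a grid should reproduce the qualitative line shapes discussed in Secs. III and IV, and the large-detuning regime can be probed in the same way by shifting \(\omega_{0}\) away from \(\Omega\).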
In this paper, we have demonstrated how to control the scattering of photons by utilizing phonons with the assistance of the giant atom setup. The giant atom has been realized experimentally in superconducting circuits, where a superconducting qubit or a magnon spin ensemble couples to the transmission line via more than one connecting point. On the other hand, SAW resonators have been widely used to design microelectromechanical devices [33; 41; 42], and current technology has brought them into the quantum regime, which gives rise to the topic of circuit quantum acoustodynamics (cQAD). Superconducting qubits provide a medium to realize the conversion between phonons and photons, within or outside the same frequency regime. We hope that our work on controlling photons via phonons will stimulate new studies of hybrid systems and broaden the applications of the artificial giant atom. ###### Acknowledgements. This work is supported by the National Key R&D Program of China (Grant No. 2021YFE0193500) and the Science and Technology Development Project of Jilin Province (Grant No. 20230101357JC).
2302.01921
Transformers in Action Recognition: A Review on Temporal Modeling
In vision-based action recognition, spatio-temporal features from different modalities are used for recognizing activities. Temporal modeling is a long challenge of action recognition. However, there are limited methods such as pre-computed motion features, three-dimensional (3D) filters, and recurrent neural networks (RNN) for modeling motion information in deep-based approaches. Recently, transformers success in modeling long-range dependencies in natural language processing (NLP) tasks has gotten great attention from other domains; including speech, image, and video, to rely entirely on self-attention without using sequence-aligned RNNs or convolutions. Although the application of transformers to action recognition is relatively new, the amount of research proposed on this topic within the last few years is astounding. This paper especially reviews recent progress in deep learning methods for modeling temporal variations. It focuses on action recognition methods that use transformers for temporal modeling, discussing their main features, used modalities, and identifying opportunities and challenges for future research.
Elham Shabaninia, Hossein Nezamabadi-pour, Fatemeh Shafizadegan
2022-12-29T11:03:19Z
http://arxiv.org/abs/2302.01921v1
# Transformers in Action Recognition: A Review on Temporal Modeling ###### Abstract In vision-based action recognition, spatio-temporal features from different modalities are used for recognizing activities. Temporal modeling is a long challenge of action recognition. However, there are limited methods such as pre-computed motion features, three-dimensional (3D) filters, and recurrent neural networks (RNN) for modeling motion information in deep-based approaches. Recently, transformers' success in modeling long-range dependencies in natural language processing (NLP) tasks has gotten great attention from other domains; including speech, image, and video, to rely entirely on self-attention without using sequence-aligned RNNs or convolutions. Although the application of transformers to action recognition is relatively new, the amount of research proposed on this topic within the last few years is astounding. This paper especially reviews recent progress in deep learning methods for modeling temporal variations. It focuses on action recognition methods that use transformers for temporal modeling, discussing their main features, used modalities, and identifying opportunities and challenges for future research. keywords: transformer, action recognition, deep learning, temporal modeling + Footnote †: journal: arXiv ## 1 Introduction Video-based action recognition is the task of recognizing human activities (including gestures, simple actions, human-object/human-human interactions, group activities, behaviors, and events) from video sequences or still images [1; 2]. Compared with video-based methods, human activity recognition (HAR) from static images is still an open and challenging task [3; 4; 5] and includes a limited range of proposed methods. Due to many applications, vision-based human action recognition is known as an old field of computer vision and different data modalities are adopted for recognition in the literature, including RGB, depth, skeleton, infrared, point cloud, etc. while the three first modalities are used primarily for human action recognition. RGB data provides the details of a scene (including shape, color, and texture) and helps describe the semantics of actions, depth maps provide three-dimensional (3D) structural information about the scene. On the other hand, skeletal data is high-level information about the 3D location of joints. Multi-modal approaches use the knowledge of different modalities for the visual understanding of complex actions. Today, human action recognition methods are mainly established with the help of deep neural networks (DNNs) [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. That is mainly due to the success of convolutional neural networks (CNNs) in encoding spatial information of images for object detection and recognition. Various studies discovered the abilities of CNNs in automatically extracting useful and discriminative features from images that generalize very well [17; 18; 19; 20]. In addition, deep networks can scale up to tens of millions of parameters and huge labeled datasets [8]. Consequently, the computer vision community mainly focused on using the capacity of deep architectures in almost all fields of research, including human action recognition. However, besides encoding spatial information of frames, video analysis is involved with modeling temporal information. Encoding temporal information is of vital importance in recognizing different subtle sub-activities. Each activity is divided into different sub-activities. 
The sequence of these sub-activities differentiates among different activities. However, the temporal dimension typically causes action recognition to be challenging. On the other hand, existing deep architectures generally encode temporal information with limited solutions [21; 22; 23; 24] such as 3D filters, pre-computed motion features, and recurrent neural networks (RNNs). These models are typically restricted in simultaneously acquiring local and global variations of temporal features. On the other hand, the transformer is a new encoder-decoder architecture that uses the attention mechanism to differentially weigh each part of the input data [25]. Although transformers are designed to handle sequential input data, they do not necessarily process it in order. Rather, the attention mechanism provides context for any position in the input sequence. This feature allows for more parallelization than RNNs and therefore reduces training times. Transformers achieved great success in natural language processing (NLP) tasks [25] and are now applied to images [26; 27; 28]. Along with NLP, video recognition is a perfect candidate for transformers, where videos are represented as a sequence of images (similar to language processing, in which the input words or characters are represented as a sequence of tokens [29]). The application of transformers for action recognition is relatively new. However, the amount of research proposed on this topic within the last few years is increasing. This paper aims at capturing a snapshot of trends for temporal modeling in human action recognition. It focuses on supervised learning methods that often require a large amount of data with expensive labels for training models. Meanwhile, unsupervised and semi-supervised learning techniques [30; 31] (which enable to leverage of the availability of unlabeled data to train the models) are beyond the scope of this paper. The methods are categorized into five groups: motion-based feature approaches, three-dimensional convolutional neural networks, recurrent neural networks, transformers, and hybrid methods (see Figure 1). This categorization is advised by some research approaches. Especially, three first categories of pre-computed motion features, 3D filters, and RNN are mentioned in [21; 22; 23; 24]. However, transformers for video action recognition are a new approach. Finally, the hybrid category is designed to encompass combined methods. So main contributions of this paper are as follows: 1. This paper reviews the main approaches proposed for modeling temporal information for human action recognition in deep-based methods. 2. The deep learning-based approaches are categorized into motion-based, RNNs, 3D filters, transformers, and hybrid methods. 3. In each category, methods are grouped based on used visual data modalities (RGB, depth, skeleton) to better compare similar approaches. 4. We provide a comprehensive survey of transformer-based human action recognition methods. 5. Some suggestions are proposed for future research on human action recognition using transformers. The remainder of this paper is organized as follows. Section 2 reviews related survey papers on human action recognition. Section 3 provides a brief review of traditional approaches for temporal modeling and introduces five deep learning-based approaches for modeling the time dimension. These approaches are discussed in detail in different subsections. 
In each subsection, distinct modalities or combinations of multiple modalities used in the methods are explored. Discussions and prospects are provided in Section 4. The paper concludes in Section 5. ## 2 Related Survey Papers Human action recognition is one of the oldest and most interesting topics of computer vision, and there are many survey papers on this topic targeting different aspects of action recognition. Table 1 lists some recent survey papers on human action recognition. As this table shows, some existing papers provide a review of both traditional and deep-based approaches, while others only concentrate on deep-based methodologies. On the other hand, there are some surveys on applications of human action recognition [32; 33] or benchmark datasets of action recognition [34; 35; 15; 36]. In addition, a group of reviews focuses on specific data modalities such as visual or sensor-based methods [7; 12; 37; 38; 39; 40; 41; 42], while some others review approaches based on multiple data modalities [43; 44; 45; 46; 47; 9; 48].

Figure 1: The taxonomy of this paper: different methods are categorized into five approaches.

Compared with the existing survey papers: 1. This paper provides a novel taxonomy to review the main approaches of temporal modeling in human action recognition, with the main emphasis on transformer-based methods. 2. The survey includes both single-modal and multi-modal approaches of human action recognition. 3. Vision-based methods with RGB, depth, skeleton and hybrid features are considered. 4. A short review of conventional methods besides a long review of deep-based approaches is included. Note that some sections of this paper that consist of input modality + modeling approach are mentioned in some other survey papers. For example, in [49] some approaches of skeleton + 3D filters, in [27] some approaches of RGB/Skeleton + transformer, and in [15] some approaches of RGB/depth + motion features are reviewed. All these papers are comprehensive and informative. However, the main interests of these papers lie in other topics (see Table 1), and there is no paper with the main emphasis on transformers and temporal modeling that reviews all these categories and modalities altogether. ## 3 Temporal Modeling in Action Recognition Methods Video-based action recognition is a video content analysis (VCA) task responsible for automatically analyzing the captured video to detect or recognize specific actions performed. The critical issue in VCA is representing suitable spatio-temporal features and modeling dynamical patterns [57]. VCA approaches are categorized into frame-by-frame-based methods and volumetric approaches. The former typically extracts a set of features from each frame; the features are then usually treated as time-series data. On the other hand, volumetric approaches implicitly model temporal dynamics and consider the video as a 3D volume. They extend standard features used for images to the 3D case. In recent decades, the vision community has suggested numerous action recognition techniques using RGB or depth. Among them, there are some promising methods, including representations for local spatio-temporal features [58] such as SIFT3D [59], ESURF [60], HOG3D [61], and HOF [62]. These traditional action recognition methods use several detected salient points and local feature descriptors for each point. The local descriptors are then collected into a holistic descriptor for the entire video to be used for classification.
The pro of using these local features is that they do not require detecting the human body, and the local features are almost robust to illumination changes, cluttered background, and noise. The con is the lack of semantics and limitation in discriminative capacity [63]. Approaches such as Motionlets [64], Action Bank [65], Motion Atoms [66], Dynamic-Poselets [63], and Actons [67] are proposed to account for these limitations. For modeling the temporal variation of skeletons, different approaches are proposed in the literature for traditional methods. In some approaches, the features computed from the action sequences are clustered into posture visual words (representing the prototypical poses of actions), and then the temporal evolutions of those visual words are modeled by explicit methods such as hidden Markov models (HMM) [68; 69] or conditional random fields (CRF) [70; 71]. Some other approaches consider the manifold of the trajectories [72] or use hierarchical extended histogram (HEH) for modeling temporal variation of features acquired from individual frames of input sequence [73]. These hand-crafted features and descriptors are recently substituted with deep representations to automatically extract high-level information from training data without using hand-crafted rules. As mentioned above, in recent years there has been rapid development in deep learning-based methods for human action recognition. Numerous studies are proposed in the literature for solving different challenges of human action recognition using deep architectures. Reviewing all these methods is a relatively comprehensive task. Many surveys discuss the pros and cons of different methods in detail [6; 7; 8; 12]. Here the focus is on how different methods deal with the temporal dimension. The methods are categorized into five groups: motion-based feature approaches, three-dimensional convolutional neural networks, recurrent neural networks, transformers, and hybrid methods. These methods are discussed in detail in the following. In each group, methods are categorized based on used modalities (RGB, depth, skeleton, or combination of multiple modalities). ### Motion-based Feature Approaches The first category is based on pre-computed motion features like 2D dense optical flow maps as input to the neural networks. These networks generally use multiple streams to encode both appearance and motion of human actions using different modalities. Finally, different types of information (learned from the input) are fused to get the final result. There are different fusion approaches in the literature. In [74], fusion methods are categorized into early, late, and intermediate. Early fusion involves the integration of multiple raw or preprocessed data modalities into a vector ahead of feature extraction. In intermediate fusion, the features, respective to each stream, are concatenated before classification. Late fusion refers to collecting decisions from multiple classifiers and applying maximum or average scores to get the final decision. Similar taxonomies also exist in the literature; for example, in [75] fusion methods are grouped into feature-level, score-level, and decision-level. #### 3.1.1 Rgb In [76], flow coding images computed from consecutive video frames are fed to a deep CNN network to extract deep temporal features from flow coding images. Then, the output features of several frames are concatenated together to learn the temporal convolution. 
Finally, a fully connected feedforward neural network is used for classification. In [19], two separate streams (a spatial and a temporal convolution network) are simultaneously applied to learn both the appearance and motion of actions. In [77], a trajectory-pooled deep-convolutional descriptor is proposed to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. In [78], temporal linear encoding (TLE), embedded inside of ConvNet architectures is presented to aggregate information from an entire video. The pooling layer called ActionVLAD [79] is also used to combine appearance and motion streams. It aggregates convolutional feature descriptors in different image portions and temporal spans. This layer is used to combine appearance and motion streams. As mentioned before, the problem with two-stream networks is the lack of transferring knowledge between the two streams [19; 77]. Some approaches address this problem [80] to communicate between the two streams. However, this interaction between different streams is known difficult [8]. #### 3.1.2 Depth In [81], a CNN-based framework is proposed using dynamic images. Dynamic images (DIs) summarize the motion and temporal information of video sequences in a single image. Multi specifications. Different views share the same convolutional layers but there are distinct fully connected layers (Figure 2). The main goal is to reduce the gradient vanishing problem, particularly on the shallow convolutional layers. Further, spatial-temporal action proposal is used to decrease the sensitivity of CNNs to scene variations. In [82], three different depth representations are proposed: dynamic depth image (DDI), dynamic depth normal image (DDNI), and dynamic depth motion normal image (DDMNI) for segmented and continuous action recognition. Dynamic images are constructed using hierarchical bidirectional rank pooling to extract spatial-temporal information. DDIs extract dynamics of postures, while DDNIs and DDMNIs exploit 3D structural information of depth maps. These three representations of depth are fed to a pre-trained CNN model for fine-tuning without any need for training the network from scratch. In [83], weighted hierarchical depth motion maps (WHDMM) and three-channel deep convolutional neural networks (ConvNets) are suggested using depth maps of small human action recognition datasets. Depth maps from different viewpoints are used to make WHDMMs extract spatio-temporal features of actions into 2-D spatial structures. Then, these structures are converted to pseudo-color images. Finally, color-coded WHDMMs are trained via distinct pre-trained ConvNets. In [84], a method for human action recognition from depth sequences is presented. Firstly, to form the depth motion maps (DMMs), the raw frames are projected onto three orthogonal Cartesian planes and the results are stacked into three still images (corresponding to the front, side, and top views). Then, the local ternary pattern (LTP) is introduced as an image filter for DMMs to improve the distinguishability of similar actions. Finally, corresponding LTP-encoded images are classified using CNN. #### 3.1.3 Skeleton In [85] and [86] spatio-temporal information carried in 3D skeleton sequences is represented by three 2D images, referred to as joint trajectory maps (JTM), through encoding the joint trajectories and their dynamics into color distribution in the images. 
Then ConvNets are adopted Figure 2: Multi-view dynamic image adaptive CNN learning model [81]. to learn the discriminative features for human action recognition. Such an image-based representation enables to use of existing ConvNets models for the classification of skeleton sequences without training the networks afresh. In [87] and [88], a skeleton image representation named SkeleMotion is introduced to be used as input of CNNs. In this method, the temporal dynamics are encoded by explicitly computing the magnitude and orientation values of the skeleton joints. Different temporal scales are employed to compute motion values to aggregate temporal dynamics to the representation. In [89], the spatio-temporal information of a skeleton sequence is encoded into color texture images, called skeleton optical spectra. The encoding consists of four steps; mapping of joint distribution, spectrum coding of joint trajectories, spectrum coding of body parts, and joint velocity weighted saturation and brightness. Again, convolutional neural networks are used to learn the discriminative features for action recognition. In [90], color texture images referred to as joint distance maps (JDMs) along with ConvNets are employed to exploit the discriminative features from the JDMs for human action and interaction recognition. The pair-wise distances between joints over a sequence of single or multiple-person skeletons are encoded into color variations to capture temporal information. In [91], the 3D coordinates of the human body joints carried in skeleton sequences are transformed into image-based representations and stored as RGB images. Then a deep architecture based on ResNets is proposed to learn features from obtained color-based representations and classify them into action classes. #### 3.1.4 RGB & Depth In [92], Asadi-Aghbolaghi et al. investigated the combination of hand-crafted features and deep techniques in human action recognition via RGB-D videos. Multimodal dense trajectories (MMDT) are created from RGB, depth, scene flow, and optical flow modalities which are the inputs to 2DCNNs. Dynamic images such as depth motion maps and motion history images (MHI) are the other pre-computed motion features initially presented in [93]. These features grounded on rank pooling summarize the motion and action information of a video in a single image to represent the whole sequence. A two-stream CNN network pre-trained on VGG16 [94] or ResNet-101[95] is suggested in [96]. Dynamic images created independently from RGB videos and depth sequences are fed to the network. Finally, extracted features are concatenated and fed through a fully connected layer for action class prediction. Some studies suggest the difference of successive DMM frames projected on XY, YZ, and XZ planes corresponding to front, side, and top. Singh et al. [97] employed dynamic images created from RGB videos and three DMMs as inputs to the pre-trained VGG-F model [94]. A weighted product model is used to categorize the activity. Pre-trained networks with four streams are proposed in [98; 99] that accept an MHI created from RGB and DMMs in three distinct views (top, front, and side). The score of each stream is late fused at the end of the network to categorize the activity. For surgical recognition tasks, Twin et al. [100] suggest a four-stream CNN network pre-trained on AlexNet [101] using RGB, depth, and their motions as DNN entries. Then, features are concatenated and the classifier predicts the class label. 
In [102], four-channel data is used via combining RGB and depth, where extracting the scene flow from RGB-D videos is considered for action recognition to summarize the RGB-D videos. In this work, RGB and depth are considered as a single unit for extracting the features. A branch of studies tries to extract common-specific features of different modalities to increase the accuracy of action prediction. Combining the common and specific components in input features may be quite complicated and highly nonlinear. Shahroudy et al. [103] proposed a deep shared-specific network using nonlinear autoencoder-based component factorization layers. Also, while RGB and depth images are inherently distinct in appearance, there is considerable consistency between them at a high-level [104], which will affect the classification accuracy. Qin et al. [104] developed a unique two-stream framework to extract common-specific features through the constraint of similarity at a high level. In [105], a multi-stream deep neural network is suggested for egocentric action recognition. The work [105] uses the complementary features of RGB and depth by learning the nonlinear structure of heterogeneous information. It strives to keep the unique features for each modality and concurrently explore their sharable information in a unified framework. In addition, it uses a Cauchy estimator to maximize the correlations of the sharable components and enforce the orthogonality constraints on the individual components to ensure their high independencies. The cross-modality complementary features are learned from RGB and depth modalities via a cross-modality compensation block (CMCB) [106]. The CMCB initially extracts features from the two separate information flows, then sends and intensifies them to the RGB-D paths using the convolution layers. To increase action recognition performance, CMCB includes two general DNN architectures: ResNet and VGG. Wang et al. [107] employ two distinct cooperative convolutional networks (c-ConvNet) to extract information from dynamic images comprised of both visual RGB (VDIs) and depth (DDIs). The c-ConvNet comprises one feature extraction network and two branches, one for ranking loss and another for softmax loss. By utilizing bidirectional rank pooling, two dynamic images represent VDIs and DDIs: forward (f) and backward (b), VDIf &VDlb and DDIf & DDlb, respectively. In [108], segmented bidirectional rank pooling is used to gather spatio-temporal information. Moreover, the multimodality hierarchical fusion method gets the complementary information of multimodal data for categorization. The multimodality hierarchical system contains visual RGB and depth dynamic images, i.e., VDIs-f, VDIs-b, DDIs-f, and DDIs-b (f for forward and b for backward) and optical flow fields (X-stream and Y-stream) formed by ConvNets. Dynamic images are created from the RGB-D series as ConvNets entries to extract spatio-temporal information [109]. Then a segmented cooperative ConvNet is applied to learn the complementary information of RGB-D modalities. In [110], RGB and depth frames are used as training inputs, but only RGB is employed during test time. A hallucination network is utilized to simulate the depth stream for test time. A strategy based on inter-stream connection is used to improve the hallucination network's learning process. A loss function that combines distillation and privileged information is also developed. #### 3.1.5 RGB & Skeleton Verma et al. 
[111] proposed a two-stream framework to exploit spatio-temporal features using both CNNs and RNNs. Motion history image and motion energy image (MEI) are the RGB descriptors. Further, the skeleton modality is used after developing intensity images in three views: top, side, and front. Features of each stream are fused and the final prediction is performed based on scores of each stream using the weighted product rule. Tomas et al. [112] utilized appearance and motion information from RGB and skeleton joints to detect fine-grained motions. Motion representations are learned by CNN and motion history images that are generated from RGB images. In addition, stacked auto-encoders measure the distances of the joints from the mean joint in each frame to consider discriminative movements of human skeletal joints. #### 3.1.6 Depth & Skeleton Kamel et al. [113] employed depth motion image (DMI), moving joint descriptor (MJD), and fusion of DMI with MJD as inputs of the suggested CNN framework. DMI represents the body changes of depth maps in an image, while MJD indicates body joint position and orientation changes around a fixed point. Wang et al. [114] utilized the bidirectional rank pooling approach to three hierarchical spatial levels of depth maps driven by skeletons; body, part, and joint. Each level featured various components, which possessed joint positions. Spatio-temporal and structural information at all levels is learned via a spatially structured dynamic depth image (S2DDI) conserving the coordination and synchronization of body parts throughout the action. Besides, this framework contains three weights-shared ConvNets and scored fusion for classification. In [115], a CNN-based human action recognition framework is proposed by fusing depth and skeleton modalities. The proposed adaptive multiscale depth motion maps (AM-DMMs) computed from depth maps capture shape and motion cues. Moreover, adaptive temporal windows help the robustness of AM-DMMs in front of motion speed variations. In addition, a method is also proposed for encoding the spatio-temporal information of each skeleton sequence into three maps, called stable joint distance maps (SJDMs) which describe spatial relationships between the joints. A multi-channel CNN is adopted to exploit the discriminative features from texture color images encoded from AM-DMMs and SJDMs for recognition. #### 3.1.7 RGB & Depth & Skeleton Singh et al. [116] introduced a modality fusion technique called deep bottleneck multimodal feature fusion (D-BMFF) framework for three modalities of RGB, depth, and skeleton. 3D joints are transformed into a single RGB skeleton motion history image (RGB-SkIMHI). Every ten RGB and depth frames with a single Skel-MHI image are fed to the framework to extract spatial and temporal features respectively. Extracted features of three-modality streams are combined by multiset discriminant correlation analysis. Then action classification is performed using a linear multiclass SVM. Khaire et al. [117] aim to enhance activity recognition by using skeleton images, a motion history image, and three depth motion maps from the side, top, and front as inputs of a five-stream CNN network. Elmadany et al. [118] introduced two fusion approaches to exploit common subspace from two sets and more than two sets, i.e., biset globality locality preserving canonical correlation analysis (BGLPCCA) and multiset globality locality preserving canonical correlation analysis (MGLPCCA), respectively. 
These strategies represent global and local data features using low-dimensional shared subspace. Besides, two descriptors are suggested for skeleton data and depth. Finally, a framework composed of proposed fusion methods and descriptors is used for action recognition. In [119], various rank pooling and skeleton optical spectra approaches are examined to create dynamic images from RGB-D and skeleton. Dynamic images are divided into five categories: a dynamic color group (DC), a dynamic depth group (DD), and three dynamic skeleton groups (DXY, DYZ, DXZ). Several dynamic images featuring the major postures for each group are developed to represent different action postures. Then, a pre-trained flow-CNN extracting spatio-temporal features are used with a max-mean aggregation. Wu et al. [120] described a deep hierarchical dynamic neural network for gesture recognition. The suggested framework is composed of a Gaussian-Bernouilli deep belief network (DBN) to extract dynamic skeletal features and a 3DCNN to represent features from RGB and depth images. Furthermore, intermediate and late fusion techniques are used to fuse RGB and depth with the skeleton. Finally, HMM predicts the gesture class label by learning emission probabilities. Romaissa et al. [121] proposed a four-step framework for action recognition. First dynamic images are created from RGB-D videos, and features of dynamic images are extracted via a pre-trained model utilizing the transfer learning approach. Then, the Canonical correlation analysis method fuses extracted features. Finally, a bidirectional LSTM is trained to recognize action labels. #### 3.1.8 Discussions In brief, utilizing motion-based features is a common approach for modeling temporal variations using distinct or multiple data modalities that allows using pre-trained 2D ConvNets for modeling motion information. These networks generally exploit multiple streams of convolutional networks to encode both appearance and motion of human actions using different modalities. Descriptors such as motion history/energy images for RGB, dynamic images for depth, and joint trajectory maps for skeleton are popular pre-computed motion features used for human action recognition. The most important problem in multi-stream networks is the necessity of communications between different streams to transfer information in learning multimodal spatiotemporal features. The lack of effective interactions between the streams is one of the major problems in multi-stream networks. Such interactions are important for learning spatiotemporal features. Finally, the multi-stream CNN architectures learn different types of information from the input (through separate networks) and then perform fusion to get the result. This enables the traditional 2D CNNs to effectively handle the video data and achieve high accuracy. However, this type of architecture is not powerful enough for modeling long-term dependencies, i.e., it has limitations in effectively modeling the video-level temporal information. ### Three-Dimensional Filters In the second category, spatiotemporal filters (for example 3D convolution and 3D pooling) are used in the convolutional layer. Convolution as the essential operation in CNNs calculates pixel values according to a small neighborhood using a kernel (filter). Spatio-temporal filters extend 2D convolution networks by using 3D convolution. 3D convolution captures the temporal dynamics over some successive frames. 
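The difference between frame-wise 2D filtering and spatio-temporal 3D filtering is easiest to see on tensor shapes. The following PyTorch-style sketch is illustrative only; the channel counts, kernel sizes, and clip dimensions are arbitrary.

```python
import torch
import torch.nn as nn

# A batch of video clips: (batch, channels, frames, height, width); sizes are arbitrary.
clip = torch.randn(2, 3, 16, 112, 112)

# A 2D convolution sees one frame at a time, so it is applied frame by frame,
# whereas a 3D convolution also slides its kernel along the temporal axis.
conv2d = nn.Conv2d(3, 64, kernel_size=3, padding=1)
conv3d = nn.Conv3d(3, 64, kernel_size=(3, 3, 3), padding=1)

per_frame = torch.stack([conv2d(clip[:, :, t]) for t in range(clip.shape[2])], dim=2)
spatiotemporal = conv3d(clip)

print(per_frame.shape, spatiotemporal.shape)  # both (2, 64, 16, 112, 112); only the 3D
                                              # output mixes information across frames
```

Factorized variants such as the pseudo-3D convolutions mentioned below split the 3D kernel into a 2D spatial part followed by a 1D temporal part.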
However, some approaches convert the entire sequence into a 2D image and then use conventional 2D filters. #### 3.2.1 Rgb In [122], the effects of different spatiotemporal convolutions are studied for action recognition. In this work, the improvement of the accuracy of 3D CNNs over 2D CNNs is empirically demonstrated within the framework of residual learning. In [123], FAST 3D convolutions are introduced, a convolution block that combines a 2D spatial convolution with two orthogonal spatio-temporal convolutions. The block is motivated by the often characteristic horizontal and vertical motion of human actions. In [17; 18; 124; 125; 126], the 3D convolution over consecutive frames is applied for action recognition. In [18], the developed deep architecture model produces multiple channels of information from adjacent input frames and performs convolution and subsampling separately in each channel. Finally, feature representation is obtained by aggregating information from all channels. In [125], pseudo-3D residual net architecture is proposed which aims to learn spatio-temporal video representation in deep networks by simplifying 3D convolutions with 2D filters on spatial dimension plus 1D temporal connections. In [126], the learning of long-term video representations is considered by studying architectures with long-term temporal convolutions (LTC). To keep the complexity of networks tractable, the temporal extent of representations is increased at the cost of decreased spatial resolution. #### 3.2.2 Depth In [127], a 3D full CNN-based framework, called 3DFCNN, is developed for real-time human action recognition from depth videos captured from an RGB-D camera. The network exploits spatio-temporal information of depth sequences to use in the categorization of actions. The aim is the use of depth in privacy-aware systems because people's identities are not recognized from depth images. #### 3.2.3 Skeleton In [128], a multi-scale temporal modeling module is designed following [129]. This module contains four branches, each containing a \(1\times 1\) convolution to reduce channel dimension. The first three branches contain two temporal convolutions with different dilations and one Max-Pool respectively following \(1\times 1\) convolution. The results of the four branches are concatenated to obtain the output. In [130], a pre-trained 2D convolutional neural network is used as a pose module. A pre-trained 3DCNN is also used as an infrared module to respectively extract features from skeleton data and visual features from videos. Both feature vectors are then fused and jointly classified using a multilayer perceptron (MLP). #### 3.2.4 RGB & Depth Li et al. [131] employed RGB and depth as inputs to a pre-trained C3D network for gesture recognition. Extracted features are concatenated or averaged. Finally, the framework uses a linear SVM as a classifier. Zhu et al. [132] employed pyramid input and fusion with multiscale contextual information via 3D CNNs to learn gestures from the whole video. Zhang et al. [133] presented 3D lightweight structures for action recognition based on RGB-D data. The suggested lightweight 3D CNNs have considerably fewer parameters with reduced computing costs, and it results in desired recognition performance compared to common 3D CNNs. Qin et al. [134] employed 3D CNNs to extract common-specific features from RGB-D data. A novel end-to-end trainable framework called TSN-3DCSF is proposed for this purpose. 
In [135], a fusion approach is proposed using the adaptive cross-modal weighting (ACmW) approach to extract complementarity features from RGB-D data. ACmW block explores the relationship between the complementary information from multiple streams and fuses them in the spatial and temporal dimensions. In [136], a regional attention with architecture-rebuilt 3D network (RAAR3DNet) is suggested for gesture recognition. Fixed Inception modules are replaced with the automatically rebuilt structure through neural architecture search (NAS) to acquire the varied representations of features in the early, middle, and late levels of the network. In addition, a stacking regional attention module called dynamic-static attention (DSA) is used to highlight the hand/arm regions and the motion information. #### 3.2.5 Depth & Skeleton Liu et al. [137] suggest a 3D-based deep convolutional neural network (3D2CNN) to learn depth features along with the joint vector containing skeletal features. Finally, the decision fusion of SVM classifiers demonstrates the action class. #### 3.2.6 Discussions Shortly, 3D filters as the extension of 2D filters capture the temporal dynamics at the cost of requiring more parameters than 2D convolution networks. The advantage is capturing discriminative features along both spatial and temporal dimensions while the disadvantage is the limitation to a certain temporal structure (by considering very short temporal intervals) and complicated encoding of long-term temporal data. The 3D CNN-based methods generally perform spatio-temporal processing over limited intervals (using the window-based 3D convolutional operations), where each convolutional operation is only applied to a relatively short-term context in videos. For multi-modal approaches, the 3D filters can be applied to distinct modalities to simultaneously capture spatial and temporal intra-modal features. For this purpose, multi-stream networks and different strategies for fusion may be applied. However, the fusion of features is again a concern. Score fusion and feature fusion are two widely used multi-modality fusion schemes in human action recognition [45]. The score fusion integrates the separately made decisions based on different modalities to produce the final results. Meanwhile, feature fusion generally combines the features from different modalities to yield aggregated and powerful features for recognizing different actions. However, existing multi-modality methods are not as effective as expected owing to a series of challenges, such as over-fitting [45; 138]. ### Temporal Sequence Models The third group usually aggregates CNN features applied at individual frames with temporal sequence models such as recurrent neural networks [139]. RNNs that are designed to work with sequential data, use the previous information in the sequence to produce the current output. The main problem with RNNs is the short-term memory problem, caused by the vanishing gradient problem. As RNN processes more steps, it suffers from vanishing gradient more than other neural network architectures. To overcome this problem, two specialized versions of RNNs are created; GRU (gated recurrent unit)[140] and LSTM (long short-term memory) [141]. LSTM and GRU use memory cells to store the information of previous data in long sequences using gates. Gates that control the flow of information in the network, are capable of learning the importance of inputs in the sequence and storing or passing their information in long sequences. 
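The generic pipeline in this category, per-frame CNN features aggregated by a recurrent model, can be sketched as follows; the tiny backbone, feature size, and classification head are placeholders rather than any specific cited architecture.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Per-frame CNN features are fed to an LSTM; the last hidden state is classified."""
    def __init__(self, feat_dim=128, hidden=256, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(               # stand-in for a pretrained 2D CNN
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clip):                          # clip: (batch, frames, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1)).view(b, t, -1)  # (batch, frames, feat_dim)
        _, (h_n, _) = self.lstm(feats)                # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])                     # action-class logits

logits = CNNLSTMClassifier()(torch.randn(2, 16, 3, 64, 64))
print(logits.shape)                                   # (2, 10)
```

Swapping the `nn.LSTM` for an `nn.GRU` only changes the gating mechanism, which is the trade-off discussed next.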
The GRU structure is less complex than the LSTM because it has fewer gates (two gates, reset and update, for the GRU compared with three gates, input, output, and forget, for the LSTM). However, other temporal sequence models such as the hidden Markov model (HMM) are also applied in the literature together with CNN features [120].

#### 3.3.1 RGB

In [123; 142], both CNNs and LSTMs are utilized for capturing spatial motion patterns along with temporal dependencies. In [143], a conflux long short-term memory network is proposed to recognize actions from multi-view cameras. The proposed framework first extracts deep features from a sequence of frames using a pre-trained VGG19 CNN model for each view. Second, the extracted features are forwarded to the conflux LSTM network to learn view self-reliant patterns. In the next step, inter-view correlations are computed from the outputs of the LSTM network corresponding to different views, using the pairwise dot product, to learn view inter-reliant patterns. Finally, flattened layers followed by a softmax classifier are used for action recognition.

#### 3.3.2 Depth

In [144], two networks based on ConvLSTM are suggested with different learning strategies and architectures. One network uses a video-length adaptive input data generator (stateless), while the other exploits the stateful capability of general recurrent neural networks, applied to the specific case of human action recognition. This property allows the model to gather discriminative patterns from previous frames without compromising computer memory. In [145], the ConvLSTM network is used with depth videos for home care of elderly adults. In [146], a bidirectional recurrent neural network (BRNN) is developed for depth-based human action recognition. First, the 3D depth image is projected onto three 2D planes and fed to three distinct BRNNs. In the following layers, the features extracted by each BRNN are fused and fed to the next BRNNs. The network ends with fully connected and softmax layers.

#### 3.3.3 Skeleton

In [147], an end-to-end trainable hierarchical RNN model is developed using skeleton data for recognizing activities. The human skeleton is divided into five body parts instead of using the whole skeleton in the training phase. Each part is then separately fed to a subnet. Next, the features extracted by each subnet are hierarchically fused and fed to the higher layer. In the end, a high-level representation of the skeleton is used for the final classification. In [148; 149], a universal spatial RNN-based model uses geometric features. The multi-stream LSTM network is trained with different geometric features, and a new smoothed score fusion method is used. The potential of learning complex time-series representations via high-order derivatives of states is investigated in [150]. In this work, a differential gating scheme is proposed for the LSTM to highlight the change in information gain due to salient motions between consecutive frames. The proposed differential recurrent neural network (dRNN) quantifies the change in information gained through the derivative of states. In [151], an end-to-end fully connected deep LSTM framework is proposed for action recognition. The co-occurrences of skeleton joints are learned via a regularization mechanism. Further, a dropout algorithm is suggested for the gates, cells, and output responses of neurons. In [152], RNNs are also used to model the temporal dependencies of the features of body parts in actions.
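Most of the recurrent pipelines above share the same backbone pattern: per-frame features (from a 2D CNN or from skeleton joints) are fed to an LSTM, and a temporally pooled representation is classified. The following is a minimal PyTorch sketch of that pattern with placeholder dimensions, not a reproduction of any specific method cited here:

```python
import torch
import torch.nn as nn

class RecurrentActionClassifier(nn.Module):
    """Frame-level features -> (bi)LSTM -> action logits."""
    def __init__(self, feat_dim: int = 512, hidden: int = 256, num_classes: int = 60):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, frame_feats):               # frame_feats: (N, T, feat_dim)
        out, _ = self.lstm(frame_feats)            # out: (N, T, 2 * hidden)
        video_feat = out.mean(dim=1)               # temporal average pooling over all steps
        return self.head(video_feat)

# e.g. features of 30 frames extracted beforehand by a pre-trained 2D backbone
logits = RecurrentActionClassifier()(torch.randn(4, 30, 512))
```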
In [153], a tree-structure-based traversal method is proposed. Besides, a new gating scheme in the LSTM is proposed to handle noise and occlusion of skeleton data, which learns the reliability of the sequential input data and adjusts its effect on updating the long-term context information stored in the memory cell. A two-stream RNN architecture is suggested to model spatial and temporal features of actions with skeleton data as input [154]. Two different structures are designed for the temporal stream, including a stacked RNN and a hierarchical RNN, and the spatial structure is modeled by two methods. In addition, 3D-based data augmentation techniques such as rotation and scaling transformations are suggested. In [155], an ensemble temporal sliding LSTM (TS-LSTM) network is introduced for skeleton-based action recognition, which consists of several parts including short-term, medium-term, and long-term TS-LSTM networks. Various temporal dependencies are then captured with an average ensemble over the different parts. In addition, the features of multiple parts are visualized to demonstrate the relation between a recognized action and its corresponding multi-term TS-LSTM features. In [156], it is suggested to use an independently recurrent neural network (IndRNN). Besides, the network weights are regularized to resolve the gradient vanishing problem. Moreover, IndRNN is over ten times faster than the commonly used LSTM. In [157], an attentional recurrent relational network-LSTM (ARRN-LSTM) is proposed that models spatial and temporal dynamics in skeletons for action recognition. The recurrent relational part of the network learns the spatial features of a single skeleton, followed by a multi-layer LSTM that learns the temporal features in the skeleton sequences. An adaptive attentional module is used between the two modules to focus on the most discriminative parts of the single skeleton. In addition, a two-stream architecture is used to learn the structural features among joints and lines, exploiting the complementarity of different geometries in the skeleton.

#### 3.3.4 RGB & Depth

Pigou et al. [158] developed an end-to-end trainable network employing temporal convolutions and bidirectional recurrence. RGB and depth are considered as four-channel data, or a 4D entity. In this method, RNNs represent high-level spatial information. In addition, RNNs predict the beginning and ending frames of gestures. In [159], two-stream RNNs are used for gesture recognition, utilizing RGB-D data to represent the contextual information of temporal sequences.

#### 3.3.5 Depth & Skeleton

Mahmud et al. [160] employed quantized depth images and skeleton joints for dynamic hand gesture recognition. Both CNN and LSTM structures are used in the network to extract depth features, while skeleton features are extracted via an LSTM following distinct MLPs. The scores obtained from the quantized depth images and the skeleton joints are then fused for the final prediction. Lai et al. [161] proposed a framework composed of CNNs and RNNs using depth and skeleton data for hand gesture recognition. Further, several fusion strategies were investigated for enhancing performance, including feature-level fusion and score-level fusion. Shi et al. [162] proposed a privileged information-based recurrent neural network (PRNN). The privileged information (PI) is only provided during training and not during testing.
This model considers skeletal joints as PI in a three-phase training process: pre-training, learning, and refining. The recommended network is end-to-end trainable, and the CNN and RNN parameters are jointly learned. The final network enhances the latent PI iteratively in an EM procedure.

#### 3.3.6 RGB & Depth & Skeleton

Hu et al. [163] proposed a framework, called the deep bilinear framework, to learn modality-temporal mutual information from tensors. The bilinear block, which consists of modality pooling and temporal layers, learns the time-varying dynamics and multimodal information. The deep bilinear model is created by accumulating bilinear blocks and other layers to extract video modality-temporal information. Further, a modality-temporal cube descriptor is presented as the input for deep bilinear learning.

#### 3.3.7 Discussions

Finally, another common approach to modeling temporal variations is to use features computed at individual frames with temporal sequence models such as RNNs. Especially for RGB and depth, it is straightforward to use CNNs to extract spatial information and then use RNNs to extract the temporal information of a sequence. Although this third group can deal with longer-range temporal relations, temporal sequence models such as the RNN or LSTM can only exploit partial temporal information, because regular RNNs cannot access all input elements at each given time step. Having access to all elements of a sequence at each time step can be overwhelming; to help RNNs focus on the most relevant elements, an attention mechanism can be used that assigns different attention weights to each input element. Since the skeleton encodes high-level information about important details of a scene, it may be used to guide RGB/depth features, so that the information strongly related to the action is enhanced.

### Transformers and Attention

Before the arrival of transformers, most state-of-the-art methods were based on gated RNNs (such as LSTMs and GRUs) combined with attention mechanisms. Recently, transformer models such as BERT (bidirectional encoder representations from transformers) [164], GPT (generative pre-trained transformer) [165], RoBERTa (robustly optimized BERT pre-training) [166], and T5 (text-to-text transfer transformer) [167] have shown promising results in the field of NLP for tasks such as text classification and translation [25; 168]. Following these results, transformers are starting to be used in the field of computer vision (which depended on deep ConvNets and RNNs in the last decade), with models such as ViT [26] and DeiT [169] for image classification, DETR for object detection [170], and VisTR for video instance segmentation [171]. For action recognition, the sequential nature of video makes it a perfect match for transformers for modeling temporal variations. Although the application of transformers to action recognition is relatively new, the amount of research proposed on this topic within the last few years is remarkable. Transformers are built on attention mechanisms without a recurrent neural network backbone, demonstrating the ability of attention mechanisms alone compared with RNNs combined with attention. Some approaches depend strictly on the transformer and self-attention mechanisms [172] to extract spatio-temporal features, while others use CNN features besides transformers to benefit from both architectures [173; 174; 175].
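To make the contrast with the recurrent models concrete, the sketch below shows the basic transformer recipe shared by many of the methods reviewed next: frame (or clip) embeddings plus positional encodings are processed in parallel by self-attention layers, and a classification token summarizes the sequence. The layer sizes and the use of a learnable class token are illustrative assumptions, not a specific published architecture.

```python
import torch
import torch.nn as nn

class TemporalTransformerClassifier(nn.Module):
    def __init__(self, feat_dim=512, num_frames=32, num_classes=60, depth=4, heads=8):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, feat_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_frames + 1, feat_dim))
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=heads,
                                           dim_feedforward=4 * feat_dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, frame_feats):                      # frame_feats: (N, T, feat_dim)
        cls = self.cls_token.expand(frame_feats.size(0), -1, -1)
        x = torch.cat([cls, frame_feats], dim=1) + self.pos_embed
        x = self.encoder(x)                               # every frame attends to every frame in parallel
        return self.head(x[:, 0])                         # classify from the class token

logits = TemporalTransformerClassifier()(torch.randn(2, 32, 512))
```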
#### 3.4.1 RGB

In [176], an attention-based model for action recognition is proposed. The model can selectively concentrate on important elements in video frames and dynamically pool convolutional feature maps to produce discriminative features using long short-term memory units. In [177], hierarchical RNN and attention mechanisms are applied to capture both short-term and long-term motion information. In [178], Girdhar and Ramanan proposed a new method for approximating bilinear pooling with a low-rank decomposition. This yields an attentional pooling that replaces the computation of second-order features with the product of two attention maps, a top-down and a bottom-up one. Li et al. [179] introduced motion-based attention along with an LSTM for end-to-end sequence learning of actions in video. In [180], Du et al. proposed a spatio-temporal attention mechanism to selectively focus on spatial visual elements as well as keyframes. In [181], the spatial transformer network [182] is introduced with an attention mechanism to explicitly model the spatial structures of human poses. In [183], a convolutional LSTM algorithm based on the attention mechanism is proposed to improve the accuracy of action recognition by mining the salient regions of actions in videos. First, GoogleNet [184] is used to extract the features of video frames. Then, those feature maps are processed by the spatial transformer network for attention. Finally, to classify the action, the sequential information of the features is handled by the convolutional LSTM network. In [185], a video action recognition network called the action transformer is proposed that uses a modified transformer architecture as a 'head' to classify the action of a person of interest. It combines two other ideas: a spatiotemporal I3D model [124] as the base backbone to extract features, and a region proposal network (RPN) [186] to localize people performing actions. The I3D features and the RPN produce the query that is the input to the transformer head, which combines contextual information from other people and objects in the surrounding video. In this way, the network can implicitly learn both to track distinct persons and to consider the actions of other people in the video. In addition, the transformer attends to the hands and face as the most informative parts when discriminating an action. In [187], 3D convolution is combined with late temporal modeling for action recognition. For this purpose, the temporal global average pooling (TGAP) layer at the end of the 3D convolutional architecture is replaced with a BERT layer to model the temporal information with BERT's attention mechanism (see Figure 3). It was shown that this replacement improves the performance of popular 3D convolution architectures such as ResNet, I3D, SlowFast, and R(2+1)D for action recognition. In [174], a sparse transformer-based Siamese network (called TBSN) is proposed for few-shot action recognition, which aims to recognize new categories with only a few labeled samples. TBSN applies the sparse transformer to learn the correlation and importance of video clips, and a new measure to calculate the distance between samples. In this paper, an embedding module is designed based on the sparse transformer, whose main components are the attention mechanism and a feed-forward network. This method also substitutes the softmax function with the sparsemax function, which can output zero probabilities.
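For intuition, here is a minimal sketch of the standard sparsemax projection (the Euclidean projection of logits onto the probability simplex), written in PyTorch; it is not the TBSN implementation, only an illustration of why the output can contain exact zeros.

```python
import torch

def sparsemax(z: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Project logits onto the probability simplex; unlike softmax, entries can be exactly zero."""
    z_sorted, _ = torch.sort(z, dim=dim, descending=True)
    z_cumsum = z_sorted.cumsum(dim) - 1.0
    k = torch.arange(1, z.size(dim) + 1, device=z.device, dtype=z.dtype)
    shape = [1] * z.dim()
    shape[dim] = -1
    k = k.view(shape)
    support = k * z_sorted > z_cumsum                 # prefix of entries kept in the support
    k_z = support.sum(dim=dim, keepdim=True)
    tau = z_cumsum.gather(dim, k_z - 1) / k_z.to(z.dtype)  # threshold so the output sums to one
    return torch.clamp(z - tau, min=0.0)

# sparsemax([2.0, 0.5, 0.1]) -> [1.0, 0.0, 0.0]: low-scoring (e.g. noisy) clips get zero weight
print(sparsemax(torch.tensor([2.0, 0.5, 0.1])))
```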
By introducing sparsemax, zero attention values are assigned to clips containing noise.

Figure 3: BERT-based temporal modeling with 3D CNNs for action recognition [187].

In [188], the video transformer network (VTN) architecture is proposed for real-time action recognition. The VTN is made up of an encoder that processes each frame of the input sequence independently with a 2D CNN (ResNet-34 [189]), and a decoder that integrates inter-frame temporal information in a fully attentional feed-forward approach. In [175], two modules for spatio-temporal feature extraction (GSF) and aggregation (XViT) are developed for action recognition: GSF is a spatio-temporal feature extraction module that can be plugged into 2D CNNs, and XViT is a transformer-based video feature extractor. The proposed method uses an ensemble of the GSF and XViT models to generate the final scores. In [190], a simple fully self-attentional architecture called the action transformer (AcT) is introduced that exploits 2D pose representations over small temporal windows. In [191], a pure-transformer architecture adapted from the Swin transformer for image recognition [192] is proposed for video recognition, based on a spatiotemporal locality inductive bias. This model can therefore leverage the power of pre-trained image models. The Swin transformer [28] introduced the inductive biases of locality, hierarchy, and translation invariance and can serve as a general-purpose backbone for various image recognition tasks. In [193], a two-pathway transformer network (TTN) is proposed that uses memory-based attention to explicitly model the relationship between appearance and motion. Specifically, each pathway is designed to produce either spatial appearance information or temporal motion information, and the features generated by the two pathways are combined at the end of the framework. Here, a transformer-based decoder is used to capture the underlying relationship between the appearance and motion information to improve action recognition. The decoder takes different features as its query, key, and value inputs so that the transformer heads can aggregate contextual information from one modality's features in the value input to update the other modality's features in the query input. In [194], TimeSformer is proposed, which extends ViTs to videos. In this method, the video is considered as a sequence of patches extracted from individual frames. In addition, to capture spatio-temporal relationships, divided attention is proposed, which separately applies spatial and temporal attention within each block. In [195], multiscale vision transformers (MViT) are proposed for video and image recognition, relating multiscale feature hierarchies to the transformer model. MViT hierarchically expands the feature complexity while reducing the visual resolution. In [196], ViViT is proposed as a video vision transformer that extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. To handle the long token sequences encountered in video, several variants of the model that factorize the spatial and temporal dimensions of the input are proposed. In [172], a pure transformer-based approach called the multi-modal video transformer (MM-ViT) is proposed for video action recognition in the compressed video domain. MM-ViT exploits different modalities such as appearance (I-frames), motion (motion vectors and residuals), and the audio waveform.
To handle the large number of spatiotemporal tokens extracted from multiple modalities, four multi-modal video transformer architectures are introduced (see Figure 4). The simplest architecture adopts the standard self-attention mechanism to measure all pairwise token relations. Three efficient model variants are also presented with different approaches that factorize the self-attention calculation over the space, time, and modality dimensions. In addition, to explore inter-modal interactions, three distinct cross-modal attention mechanisms are developed that can be integrated into the transformer architecture. Experiments on public datasets demonstrate that MM-ViT performs better than or on par with state-of-the-art CNN counterparts with much less computational cost (compared with the cost of optical flow).

Figure 4: Self-attention blocks in MM-ViT and cross-modal attention mechanisms [172].

#### 3.4.2 Skeleton

For skeleton-based action recognition, some approaches use attention mechanisms for modeling dependencies. In [197], a global context-aware attention LSTM (GCA-LSTM) framework is suggested for action recognition to selectively focus on the informative joints of each frame. Further, a recurrent attention mechanism is proposed to enhance attention efficiency. Thereby, the proposed two-stream framework consists of coarse-grained and fine-grained attention. In [198], an LSTM-based approach called global context-aware attention LSTM (GCA-LSTM) is proposed to selectively focus on the informative joints in the action sequence using global contextual information. Besides, a recurrent attention mechanism for the GCA-LSTM network is introduced to achieve a reliable attention scheme for the action. An end-to-end spatial and temporal attention network is suggested for human action recognition from skeleton data in [199]. The network is based on an LSTM and learns to selectively focus on discriminative joints in each frame, giving each frame a different degree of attention. In [200], an architecture named graph convolutional skeleton transformer (GCsT) is proposed to capture long-term temporal context and enhance the flexibility of feature extraction for skeleton-based action recognition. The overall architecture is divided into three stages, and each stage consists of two blocks: a spatial-temporal graph convolutional block (STGC) to extract local-neighborhood relations, and a spatial-temporal transformer block (STT) to capture global space-time dependencies. GCsT combines the benefits of the transformer and the graph convolutional network (GCN): hierarchy and local topology structure are captured through the GCN, while temporal attention and global context are provided by the transformer. In [201], a hierarchical transformer-based framework is proposed for modeling the spatio-temporal structure of a sequence of 3D human skeletons. Specifically, the 3D human skeletons are split into five body parts, which are then fused hierarchically with self-attention layers based on the articulation of the human body parts. Besides, to predict the motion of the 3D skeleton, the method models the body parts' interactions and the motion directions. In [202], a transformer-based model called the Motion-Transformer is proposed to capture temporal dependencies via self-supervised pre-training on the sequence of human actions.
Besides, a flow prediction task is introduced to pre-train the Motion-Transformer to capture the intrinsic temporal dependencies. The pre-trained model is then fine-tuned on the task of action recognition. In [203], a multi-stream spatial-temporal relative transformer architecture is also used, instead of graph convolutions or recurrence (LSTM), to capture long-range dependencies. The proposed relative transformer architecture is based on the standard transformer. The relative transformer module evolves into a spatial relative transformer and a temporal relative transformer to extract spatio-temporal features (the ST-RT module). In addition, a dynamic representation module combines multi-scale motion information to handle actions with different durations. Lastly, four ST-RT streams operating on four dynamic data streams are combined to improve the performance (see Figure 5), where each stream extracts features from a corresponding skeleton sequence so that the streams complement each other. In [204; 205], a transformer self-attention approach is introduced for skeleton-based activity recognition as an alternative to graph convolution. This approach models interactions between joints using a spatial self-attention module (SSA) to understand intra-frame interactions between different body parts and a temporal self-attention module (TSA) to model inter-frame correlations. The two modules are combined in a two-stream network to produce the final score for action recognition. In [206], a synchronous local & non-local model with frequency attention (SLnL-rFA) is proposed to extract synchronous detailed and semantic information from multiple domains. SLnL-rFA includes SLnL blocks for spatio-temporal learning and a residual rFA block for extracting frequency patterns. In [207], a spatial transformer block and a directional temporal transformer block are designed for modeling skeleton sequences in the spatial and temporal dimensions, respectively. To adapt to imperfect information conditions (due to occlusion, noise, etc.), a multi-task self-supervised learning method is also introduced that provides confusing samples in different situations to improve the robustness of the model. In [208], a transformer-based model is proposed with sparse attention and segmented linear attention mechanisms applied to the spatial and temporal dimensions of the skeleton sequence, replacing graph convolution operations with self-attention operations while requiring significantly less computational and memory resources.

#### 3.4.3 RGB & Skeleton

For multimodal action recognition, cross-modality features are also a concern besides spatio-temporal features. In [209], a spatio-temporal attention-based mechanism is proposed for human action recognition to automatically attend to the most important hands and detect the most discriminative moments in an action. Attention is handled using a recurrent neural network and is fully differentiable. In contrast to standard soft-attention-based mechanisms, this approach does not use the hidden RNN state as input to the attention model. Instead, attention distributions are drawn using the articulated human pose as external information. In [210], a closely related approach is proposed; however, the attention mechanism on the RGB space is conditioned on end-to-end learned deep features from the pose modality and not only on hand-crafted pose features. In [211], an end-to-end network for human activity recognition is proposed leveraging spatial attention on human body parts.
This paper proposes an RNN attention mechanism to obtain an attention vector for softly assigning different importance to human body parts using the spatio-temporal evolution of the human skeleton joints. It also designs a joint training strategy to efficiently combine the spatial attention model with the spatio-temporal video representation by formulating a regularized cross-entropy loss to achieve fast convergence. In [212], an attention-based body pose encoding is proposed for human activity recognition. To achieve this encoding, the approach exploits a spatial stream to encode the spatial relationships between various body joints at each time point, learning the spatial structure of different body joints. In addition, it uses a temporal stream to learn the temporal variation of individual body joints over the entire sequence. Later, these two pose streams are fused with a multi-head attention mechanism. It also captures contextual information from the RGB video stream using an Inception-ResNet-V2 model combined with multi-head attention and a bidirectional long short-term memory network. Finally, the RGB video stream is combined with the fused body pose stream to give an end-to-end deep model for human activity recognition.

Figure 5: The overall architecture of MSST-RT in [203].

#### 3.4.4 RGB & Depth

In [213], a transformer-based framework is proposed for egocentric RGB-D action recognition. It consists of two inter-frame transformer encoders and mutual-attentional cross-modality modules (see Figure 6). The temporal information of the distinct modalities is encoded through the self-attention mechanism, and the features from the different modalities are then fused via the mutual-attention layer. The inputs of this network are aligned RGB frames and depth maps (two streams). The frames are passed through a CNN and, after average pooling, are converted into two sequences of feature embeddings. Both feature sequences are then fed to the transformer encoders to model the temporal structure of each modality. The features obtained from the encoders interact through the cross-modality block and are fused to produce the cross-modality representation. The features are processed through a linear layer to obtain per-frame classifications, and the final classification is performed by averaging the decisions over the frames of the video.

Figure 6: The framework proposed in [213]. Features from each modality interact and are incorporated through the mutual-attentional block.

#### 3.4.5 Discussions

Across the studied papers, interest in using attention-based and transformer networks for human action recognition is growing. These networks rely on the self-attention mechanism to model dependencies across features over time, so the network can selectively extract the most relevant information and relationships. In addition, the great advantage of purely transformer-based networks is their fast learning speed and the absence of sequential operations, in contrast to recurrent neural networks. Although video transformers have achieved promising results, they suffer from severe memory and computational overhead [45]. In addition, to obtain the input to the transformer, a video is mapped to a sequence of tokens and a positional embedding is then added. A straightforward method of tokenizing the input video [196] is to uniformly sample frames from the input video clip, embed each 2D frame independently using the same method as ViT [26], and concatenate all these tokens together.
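A minimal sketch of that straightforward tokenization is given below: each sampled frame is split into ViT-style patches by a strided convolution and the resulting tokens of all frames are concatenated (positional embeddings would then be added). The 224×224 frame size, 16×16 patch size, and embedding width are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VideoTokenizer(nn.Module):
    """Embed every frame independently into ViT-style patch tokens and concatenate them."""
    def __init__(self, patch=16, dim=768):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # ViT patchify
        self.dim = dim

    def forward(self, video):                        # video: (N, T, 3, H, W)
        n, t, c, h, w = video.shape
        x = self.patch_embed(video.flatten(0, 1))     # (N*T, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)              # (N*T, patches, dim)
        x = x.reshape(n, t * x.size(1), self.dim)     # concatenate the tokens of all frames
        return x                                      # positional embeddings are added afterwards

tokens = VideoTokenizer()(torch.randn(1, 32, 3, 224, 224))
print(tokens.shape)   # (1, 32 * 196, 768): the token count grows linearly with the frame count
```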
However, this method results in a large number of tokens, which increases the computation. Attention can also be guided through different modalities. In the case of multimodality, transformers are used for intra-modality spatial and temporal modeling and for cross-modality feature fusion. Handling a large number of spatiotemporal tokens extracted from multiple modalities is a concern.

### Hybrid Methods

In a group of studies, combinations of two or more techniques are used to exploit spatiotemporal dynamics. Considering the four aforementioned approaches, 11 different arrangements are possible for the hybrid techniques (see Figure 7).

Figure 7: Different combinations of the aforementioned approaches.

#### 3.5.1 RGB

Conv3D + motion + RNN: In [214], Ma et al. make use of both LSTMs and Temporal-ConvNets. In this method, spatial and temporal features are extracted from a two-stream ConvNet using ResNet-101 pre-trained on ImageNet and fine-tuned for single-frame activity prediction. The spatial-stream network takes RGB images as input, while the temporal-stream network takes stacked optical flow images as input. Spatial and temporal features are concatenated and temporally assembled into feature matrices. The constructed feature matrices are then used as input to both proposed methods: temporal segment LSTM (TS-LSTM) and Temporal-Inception (see Figure 8).

Figure 8: The framework proposed in [214], which makes use of motion-based features, LSTMs, and Temporal-ConvNets to exploit spatiotemporal dynamics.

In [215], information from a video is integrated into a map called a motion map using a deep 3D convolutional network. A motion map and the next video frame can be integrated into a new motion map, and this technique can be trained by iteratively increasing the training video length. The resulting network can be used for generating the motion map of the whole video. Next, a linear weighted fusion scheme is used to fuse the network feature maps into spatio-temporal features. Finally, a long short-term memory encoder-decoder is used for the final predictions.

#### 3.5.2 Depth

Conv3D + motion: In [216], the 3D dynamic voxel (3DV) is proposed to facilitate depth-based 3D action recognition. The key idea of 3DV is to encode the 3D motion information within a depth video into a regular voxel set (i.e., 3DV) via temporal rank pooling. Each available 3DV voxel intrinsically involves both 3D spatial and motion features. The 3DV is then represented as a point set and input into PointNet++ [217] for 3D action recognition. In addition, a multi-stream 3D action recognition approach is also proposed to learn motion and appearance features jointly. To extract richer temporal order information of actions, the depth video is divided into temporal splits, and these are encoded integrally in the 3DV.

#### 3.5.3 Skeleton

Motion + RNN: In [218], a feature selection network (FSN) is proposed with actor-critic reinforcement learning. Given the extracted feature sequence, the FSN learns to adaptively select the most representative features and discard ambiguous features for action recognition. In addition, a generalized graph generation module is proposed to capture latent dependencies, and a generalized graph convolution network (GGCN) is further proposed. The GGCN and FSN are combined in a three-stream recognition framework in which different types of information from skeleton data are fused to improve recognition accuracy. The proposed method in [219] consists of three major steps: feature extraction from a skeleton sequence (used as the input of ten neural networks: three LSTM models and seven CNN models), neural network training, and late score fusion.
Applying various methods to a skeleton sequence yields features that are prominent in either the spatial or the temporal domain; these are defined as SPF (spatial-domain features) and TPF (temporal-domain features). SPF is selected as the input of the LSTM networks and TPF as the input of the CNN networks. To obtain the SPF, three types of spatial-domain features are extracted, including R (relative position), J (distances between joints), and L (distances between joints and lines). To obtain the TPF, the methods used in [19] and [13] are followed to generate the joint distances map and the joint trajectories map (JTM), respectively.

Conv3D + motion: In [220], a dynamic GCN is proposed in which a convolutional neural network named the context-encoding network (CeN) is introduced to learn the skeleton topology. In particular, when learning the dependency between two joints, contextual features from the remaining joints are incorporated in a global manner. Multiple CeN-enabled graph convolutional layers are stacked to build the dynamic GCN. In addition, static physical body connections and motion modalities are combined to improve the results. In [221; 222], a two-stream model using 3D CNNs is proposed for skeleton-based action recognition. In this method, skeleton joints are mapped into a 3D coordinate space, and the spatial and temporal information is encoded. Second, 3D CNN models are separately applied to extract deep features from the two streams. Third, to enhance the ability of the deep features to capture global relationships, every stream is extended into a multi-temporal version. In [223], a data reorganizing strategy is proposed to represent the global and local structure information of human skeleton joints. It employs data mirroring to strengthen the relationships between skeleton joints. Based on this design, an end-to-end multi-dimensional CNN is proposed that considers the spatial and temporal information to learn the feature extraction transform function. In particular, in this CNN, different convolution kernels are used on different dimensions to learn the skeleton representation and generate robust features.

#### 3.5.4 RGB & Depth

Conv3D + motion + RNN: In [224], weighted dynamic images created from the depth and RGB videos are the inputs of the framework. The framework is composed of bidirectional rank pooling, CNNs, and a 3D ConvLSTM to extract complementary information from the weighted dynamic images. Canonical correlation analysis is employed for feature-level fusion, and a linear SVM is used for predicting the action class. In [225], a three-stream framework consisting of a 3D CNN, a ConvLSTM, a 2D CNN, temporal pooling, and a fully connected layer with softmax is employed to extract spatio-temporal features from the RGB, depth, and optical flow modalities. In [226], Molchanov et al. employed 3D CNNs and RNNs in the proposed framework for hand gesture recognition using RGB, optical flow, depth, IR, and IR disparity modalities. Each modality's class-conditional probability vectors are averaged and fused to detect and classify hand gestures.

Motion + RNN: Dhiman et al. [227] proposed motion and shape temporal dynamics (STD) as action cues. They employed a framework with RGB dynamic images in the motion stream and depth silhouettes in the STD stream for recognizing actions from an unknown view.
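A recurring pattern across these hybrid designs is to let 3D convolutions model short clips while a recurrent layer aggregates the clip-level features over the whole video (the Conv3D + RNN combination). The following is a minimal sketch of that generic pattern under assumed dimensions, not a reproduction of any specific method above:

```python
import torch
import torch.nn as nn

class Conv3DLSTMHybrid(nn.Module):
    """3D-CNN stem on short clips + LSTM over the resulting clip features."""
    def __init__(self, clip_feat=128, hidden=128, num_classes=60):
        super().__init__()
        self.stem = nn.Sequential(                       # short-range spatio-temporal filtering
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, clip_feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.lstm = nn.LSTM(clip_feat, hidden, batch_first=True)   # long-range aggregation
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                            # clips: (N, num_clips, 3, T, H, W)
        n, k = clips.shape[:2]
        feats = self.stem(clips.flatten(0, 1)).flatten(1).reshape(n, k, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                         # classify from the last hidden state

logits = Conv3DLSTMHybrid()(torch.randn(2, 8, 3, 8, 64, 64))
```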
Conv3D + motion: In [228] and [229], a spatio-temporal attention framework is presented to identify the most representative regions and frames in a video. Following this, RGB, flow, and depth features are retrieved by the ResC3D network and are fused using canonical correlation analysis. A linear SVM classifier estimates the class label. Duan et al. [230] propose a four-stream network for gesture recognition. This method uses a two-stream convolutional consensus voting network (2SCVN) on the RGB stream and optical flow fields to model short- and long-term video sequences. Besides, a two-stream 3D depth-saliency ConvNet (3DDSN) is employed to learn fine-grained motion and eliminate background clutter. Some approaches try to predict pose from RGB data and use it in action prediction. In [231], a framework is provided for a hierarchical region-adaptive multi-time-resolution depth motion map (RAMDMM) and multi-time-resolution RGB action recognition system. The suggested approach presents a feature representation method for RGB-D data that allows multi-view and multi-temporal action recognition. Original and synthesized viewpoints are employed for multi-view human action recognition. In addition, to be invariant to changes in an action's speed, it also employs temporal motion information by incorporating it into the depth sequences. Appearance information in terms of multi-temporal RGB data is utilized to emphasize the underlying appearance information that would otherwise be lost with depth data alone, which helps to enhance sensitivity to interactions with tiny objects. Wu et al. applied 3D CNNs with multimodal inputs to improve spatio-temporal features [232]. This method suggests two distinct video representations: the depth residual dynamic image sequence (DRDIS) and the pose estimation map sequence (PEMS). DRDIS displays the spatial motion changes of an action over time and is robust to variations in lighting conditions, texture, and color. PEMS is created by pose (skeletal) estimation from an RGB video and removes the background clutter. In [233], a gesture recognition framework called MultiD-CNN is suggested to learn spatio-temporal features from RGB-D videos. This method incorporates spatial and temporal information using two recognition models: 3D-CDCN and 2D-MRCN. 3D-CDCN adds the temporal dimension and makes use of 3D ResNets and a ConvLSTM to concurrently learn spatio-temporal features. On the other hand, 2D-MRCN collects the motion throughout the video sequences into a motion representation and employs 2D ResNets to learn from it.

Conv3D + RNN: In [234], a two-stream network is suggested to extract spatio-temporal features from RGB-D videos that are more robust to background clutter. The framework contains a 3D CNN, a convolutional LSTM, spatial pyramid pooling, and a fully connected layer to provide better long-term spatio-temporal learning.

#### 3.5.5 RGB & Skeleton

Motion + RNN: Song et al. [235] offered an end-to-end trainable three-stream framework based on RGB and optical flow videos. The framework employs skeleton data as a guide for the RGB stream, and the skeleton data is additionally trained in a separate stream. The framework is built from a ConvNet with an LSTM. Visual features around critical joints are extracted automatically using a skeleton-indexed transform layer and part-aggregated pooling, and the visual features of different body parts and actors are uniformly regulated.

Conv3D + motion: In [236], a fusion-based action recognition framework is proposed.
The suggested framework is composed of three parts: a 3D CNN, a human skeleton manifold representation, and classifier fusion. In [237], a multi-stream attention-enhanced adaptive graph convolutional neural network is proposed for skeleton-based action recognition. The graph topology of the skeleton data is learned adaptively in this model. Besides, a spatial-temporal-channel (STC) attention module is embedded in every graph convolutional layer, which helps focus on the more important joints, frames, and features. The joints, bones, and the corresponding motion information are modeled in a unified multi-stream framework. In addition, the skeleton data is fused with skeleton-guided cropped RGB data, which brings additional improvement.

Conv3D + attention: Das et al. [238; 239] propose video-pose networks (VPN and VPN++) for the recognition of activities of daily living. VPN requires both RGB and 3D poses to classify actions. The RGB images are processed by a visual backbone that generates a spatio-temporal feature map. The VPN, which takes the feature map and the 3D poses as input, consists of two components: an attention network and a spatial embedding. The attention network further consists of a pose backbone and a spatio-temporal coupler. VPN computes a modulated feature map that is used for classification. VPN++ is an extension of VPN that transfers the pose knowledge into RGB through feature-level distillation and mimics pose-driven attention through attention-level distillation. Features are extracted from the videos and their 3D poses via two distinct video and pose backbones. Two separate branches are dedicated to spatial and temporal attention individually, and both branches are finally combined to classify the activities. Figure 9 shows a detailed picture of the pose-driven RNN attention model, which takes the 3D pose as input and computes \(m \times n\) spatial and \(t\) temporal attention weights for the \(t \times m \times n \times c\) spatio-temporal features from I3D. In this work, skeleton data is employed as a guide for the RGB stream; besides, the skeleton data participates in the learning itself in a separate stream.

Figure 9: A detailed picture of the pose-driven RNN attention model in [240].

#### 3.5.6 Discussions

Different arrangements are possible for the hybrid techniques (11 combinations). However, some approaches are more established, such as the combination of motion-based features with 3D filters, or LSTMs with Conv3D. In brief, using Conv3D is common in hybrid methods, while the combination with transformers is a new approach. Applying various temporal modeling methods to a sequence can obtain the advantages of different approaches. For example, for motion-based + Conv3D, using motion-based features is a simple approach to obtain a global sense of motion and transitions among consecutive frames; these transitions may then be learned through time using 3D filters. In this way, the motion is learned hierarchically through different approaches. However, there are few methods for these hybrid approaches, and how the approaches should be combined is not well explored.

## 4 Discussions and Future Prospects

In brief, methods can learn temporal features with 3D filters in their 3D convolutional and pooling layers. 3D ConvNets are straightforward extensions of 2D ConvNets, as they capture temporal information using 3D convolutions.
One limitation of 3D ConvNets is that they typically consider very short temporal intervals, thereby failing to capture long-term temporal information. It has also been shown that training networks on pre-computed motion features is a way to implicitly learn motion features. These networks generally utilize multiple convolutional networks to model both appearance and motion information in action videos. In this way, pre-trained 2D ConvNets can be utilized, allowing networks that are fine-tuned on stacked optical flow frames to achieve good performance despite limited training data. However, the major problem with multi-stream networks is that they do not allow interactions among the streams, while such interaction is important for learning spatiotemporal features. Temporal models like the RNN and LSTM cope with longer-range temporal relations, but the problem with RNNs and LSTMs is that it is hard to parallelize their processing of sequences. The essential advantage of approaches in the transformer group is that they do not suffer from long-dependency issues. Transformers process a sequence as a whole in parallel, instead of using past hidden states to capture dependencies as RNNs and LSTMs do. This training parallelization allows training on larger datasets than was once possible. In addition, there is no risk of losing (or "forgetting") past information. Moreover, multi-head attention and positional embeddings both provide information about the relationship between different elements. Note that although transformers have the potential to learn longer-term dependencies, they are limited by a fixed-length context. Some studies, such as the Transformer-XL architecture [241], try to learn dependencies beyond a fixed length without disrupting temporal coherence. Finally, fusing various temporal modeling methods can obtain the advantages of different approaches. However, there are few methods for these combinations, and how the approaches should be combined is not well explored. Table 2 lists the pros and cons of deep-based temporal modeling approaches.

| | Motion-based | 3D-CNN | RNN/LSTM | Transformer | Hybrid |
|---|---|---|---|---|---|
| Pros | Pre-trained 2D ConvNets can be utilized | A natural extension of 2D ConvNets; quite fast to train and effective with short sequences; can capture inductive biases such as translation equivariance and locality | Modeling longer-range temporal relations; quite fast to train and effective with short sequences | Do not suffer from long-term dependency issues; past information is retained; parallel processing; requiring significantly less computational and memory resources | Benefiting from the advantages of different approaches; hierarchical learning of features |
| Cons | High computational cost of computing accurate optical flow | Short temporal intervals; demanding many computational resources in the training stage | Sequential processing; past information hidden behind states; vanishing gradient problems; high number of learnable parameters | Training transformer-based architectures can be expensive, especially for long sequences; requiring large-scale training to surpass inductive bias | Limited studies |

Table 2: Pros and cons of deep-based temporal modeling approaches.

### Which approaches and modalities are more common?

In total, more than 170 papers on human action recognition are reviewed in this study from 2015 to 2022 (see Table 3). In addition, Figure 10 shows the number of studied papers in each year, and Figure 11 shows the number of studied papers in each category. As the latter figure shows, the greatest number of methods use motion-based features for temporal modeling. In this category, RGB & depth is the most popular modality. For the categories of Conv3D and transformer, RGB is the common modality among the studied papers, while temporal sequence models are used mostly with the skeleton.

Table 3: The studied papers grouped by time modeling approach and modality.

Figure 10: The number of studied papers in each year from 2015 to 2022.

Figure 11: The number of studied papers in each category.

This distribution is represented from another view in Figure 12, where the numbers of studied papers for each modality and their combinations are shown. As this figure shows, RGB is the most popular modality among the studied papers, since it contains rich information about the appearance and is easy to collect. However, methods based on the RGB modality are often sensitive to viewpoint variations, background clutter, etc. Hence, action recognition with other modalities, such as 3D skeleton data, has also received great attention. As Figure 12 shows, the skeleton is in second place. The skeleton provides the body structure information (as a simple, efficient, and informative representation of human behaviors). Nevertheless, action recognition using only skeleton data still faces challenges, due to its sparse representation, the noisy skeleton information, and the lack of shape information, especially for handling human-object interactions. Therefore, the combination of multiple modalities is used frequently in the literature. In Figure 12, the combination of RGB & depth is in third place; it is the most popular combination among all multi-modal approaches. The RGB and depth videos respectively capture the rich appearance and 3D shape information, which are complementary and can be used for action recognition.

Figure 12: The number of studied papers in each modality and their combinations.

For hybrid methods, the aim is to learn temporal features by combining different approaches. While different combinations are possible, some are more promising. Figure 13 shows the number of studied papers in each combination. Note that only five combinations are shown in this figure because the remaining combinations did not include any paper among the studied papers. Note that Conv3D is present in all five combinations, since the 3D CNN-based methods are very powerful in modeling discriminative features from both the spatial and temporal dimensions. In addition, as this figure shows, Conv3D + motion is used a lot in the literature.
Since utilizing motion-based features is also a common approach for modeling temporal variations, the combination of these two approaches (Conv3D and motion) may improve accuracy.

Figure 13: The number of studied papers for hybrid methods.

### Which approaches and modalities are the winner?

To compare the different approaches with each other, six benchmark datasets of human action recognition are selected: NTU RGB+D [152], NTU RGB+D 120 [266], Toyota-Smarthome [240], Kinetics 400 [267], Kinetics 600 [268], and Skeleton-Kinetics. As action recognition is one of the most fundamental tasks in computer vision, there are numerous benchmark datasets for unimodal or multimodal vision-based human action recognition [269; 270; 271; 272]. These benchmark datasets are chosen due to their popularity and large number of actions. While the first three datasets are multimodal, Kinetics 400 and 600 provide only RGB data and Skeleton-Kinetics includes only skeletal data.

NTU RGB+D [152]: a large-scale multimodal human action recognition dataset containing 56,880 action sequences of 60 action classes. The action samples are performed by 40 persons in a lab environment and are captured by three Microsoft Kinect v2 cameras from three different views. Each sample contains one action and at most two subjects. We report the two standard evaluation protocols recommended by the authors of this dataset, namely cross-subject (CS) and cross-view (CV). In the CS setting, training data comes from 20 subjects and test data comes from the other 20 subjects. In the CV setting, training data comes from camera views 2 and 3, and test data comes from camera view 1. This dataset contains RGB videos, depth map sequences, 3D skeletal data, and infrared (IR) videos for each sample.

NTU RGB+D 120 [266]: extends NTU RGB+D with an additional 57,600 sequences over 60 extra action classes. In total, 114,480 samples over 120 classes are performed by 106 individuals, captured with three camera views. There are two recommended evaluation protocols, namely cross-subject (CS) and cross-setup (CSet). In the CS setting, 63,026 clips from 53 subjects are used for training, and the remaining subjects are reserved for testing. In the CSet setting, 54,471 clips with even setup IDs are used for training, and the remaining clips with odd setup IDs are used for testing.

Toyota-Smarthome [240]: a dataset of activities of daily living recorded in an apartment where 18 older subjects carry out tasks of daily living during a day. The dataset contains 16.1k video clips, 7 different camera views, and 31 complex activities performed in a natural way without strong prior instructions. This dataset provides RGB data and 3D skeletons, which are extracted with LCRNet [273]. For the evaluation of this dataset, the cross-subject (CS) and two cross-view protocols (CV1 and CV2) [240] are reported.

Kinetics [267; 268]: a collection of large-scale, high-quality datasets of URL links to 10-second videos sampled at 25 fps from YouTube. Here, both Kinetics 400 [267] and 600 [268] are considered, containing 400 and 600 classes, respectively. Note that there is a newer version of this dataset called Kinetics-700 [274] that is not considered here, because existing results on it are still limited. As these are dynamic datasets and videos may be removed from YouTube, the sizes of these datasets are approximately 267k and 446k videos, respectively. Kinetics-400 consists of \(\sim\)240k training videos and 20k validation videos.
Kinetics-600 has \(\sim\)392k training videos and 30k validation videos. In addition, Skeleton-Kinetics is derived from the Kinetics-400 dataset. The skeletons are estimated by [275] from the RGB videos using the OpenPose toolbox [276]. Each joint consists of 2D coordinates in the pixel coordinate system and a confidence score, with 18 joints per person. In each frame, at most two subjects are considered. Skeleton-based approaches reported on Kinetics usually use Skeleton-Kinetics for evaluation. Here, the same train-validation split as in [275] is considered; that is, the training and validation sets contain 240k and 20k video clips, respectively. Top-1 accuracies are reported.

Table 4 shows state-of-the-art approaches on these benchmark datasets. As this table shows, Conv3D, or Conv3D combined with attention, is the top time modeling approach for NTU RGB+D, NTU RGB+D 120, and Toyota-Smarthome, all using RGB & skeleton as the input modalities. For Kinetics-400 and Kinetics-600, attention is the winner, whereas for Kinetics-Skeleton, Conv3D is still in first place. Note that multimodal methods achieve superior results compared with unimodal methods on the three multimodal datasets. Finally, Conv3D and attention are the most popular approaches among state-of-the-art methods for modeling temporal variations in human action recognition. Note that for RGB-based methods, using pure-transformer architectures is becoming a common approach, showing the capability of transformers and the increasing interest of the community in using them for temporal modeling. In addition, according to the table, the combination of RGB and pose is more frequent among the top multi-modal human action recognition algorithms, as these modalities provide complementary information about the appearance and the 3D coordinates of the joints.

| Dataset | Method | Accuracy | Modality | Time modeling |
|---|---|---|---|---|
| NTU RGB+D [152] | Duan et al. [250] | 97.0 (cs) 99.6 (cv) | RGB & skeleton | Conv3D |
| | Das et al. [239] | 96.6 (cs) 99.1 (cv) | RGB & skeleton | Conv3D+attention |
| | Shi et al. [237] | 96.1 (cs) 99.0 (cv) | RGB & skeleton | Conv3D+motion |
| | Davoodikakhki et al. [251] | 95.66 (cs) 98.79 (cv) | RGB & skeleton | Conv3D |
| | Das et al. [238] | 95.5 (cs) 98.0 (cv) | RGB & skeleton | Conv3D+attention |
| | Zhu et al. [243] | 94.3 (cs) 97.2 (cv) | RGB | Conv3D |
| | Das et al. [211] | 93.0 (cs) 95.4 (cv) | RGB & skeleton | attention |
| | Chen et al. [128] | 92.4 (cs) 96.8 (cv) | skeleton | Conv3D |
| | Das et al. [240] | 92.2 (cs) 94.6 (cv) | RGB & skeleton | Conv3D+RNN+attention |
| | Piergiovanni et al. [244] | 93.7 (cv) | RGB | Conv3D |
| | Liu et al. [129] | 91.5 (cs) 96.2 (cv) | skeleton | Conv3D |
| | Ye et al. [220] | 91.5 (cs) 96.0 (cv) | skeleton | Conv3D+motion |
| NTU RGB+D 120 [266] | Duan et al. [250] | 95.3 (cs) 96.4 (cset) | RGB & skeleton | Conv3D |
| | Das et al. [239] | 90.7 (cs) 92.5 (cset) | RGB & skeleton | Conv3D+attention |
| | Chen et al. [128] | 88.9 (cs) 90.6 (cset) | skeleton | Conv3D |
| | Ye et al. [220] | 87.3 (cs) 88.6 (cset) | skeleton | Conv3D+motion |
| | Das et al. [238] | 86.3 (cs) 87.8 (cset) | RGB & skeleton | Conv3D+attention |
| | Das et al. [240] | 83.8 (cs) 82.5 (cset) | RGB & skeleton | Conv3D+RNN+attention |
| | Papadopoulos et al. [247] | 78.3 (cs) 79.2 (cset) | skeleton | Conv3D |
| | Caetano et al. [88] | 67.9 (cs) 62.8 (cset) | skeleton | motion |
| | Caetano et al. [87] | 66.9 (cs) 67.7 (cset) | skeleton | motion |
| | Liu et al. [277] | 64.6 (cs) 66.9 (cset) | skeleton | Conv2D |
| Toyota-Smarthome [240] | Das et al. [239] | 71.0 (cs) 58.1 (cv2) | RGB & skeleton | Conv3D+attention |
| | Yang et al. [262] | 64.3 (cs) 36.1 (cv1) 65.0 (cv2) | skeleton | attention |
| | Ryoo et al. [263] | 63.6 (cs) | RGB | motion+attention |
| | Kangaspunta et al. [264] | 62.11 (cs) | RGB | Conv3D+motion |
| | Das et al. [238] | 60.8 (cs) 53.5 (cv2) | RGB & skeleton | Conv3D+attention |
| | Das et al. [240] | 54.2 (cs) 35.2 (cv1) 50.3 (cv2) | RGB & skeleton | Conv3D+RNN+attention |
| | Wang et al. [261] | 53.6 (cs) 34.3 (cv1) 43.9 (cv2) | RGB | attention |
| | Carreira et al. [124] | 53.4 (cs) 34.9 (cv1) 45.1 (cv2) | RGB | Conv3D |
| | Mahasseni et al. [252] | 42.5 (cs) 13.4 (cv1) 17.2 (cv2) | skeleton | RNN |
| | Ohnishi et al. [242] | 41.9 (cs) 20.9 (cv1) 23.7 (cv2) | RGB | motion |
| Kinetics-400 [267] | Yan et al. [253] | 89.1 (top-1) | RGB | attention |
| | Zhang et al. [254] | 87.2 (top-1) | RGB | attention |
| | Wei et al. [255] | 87.0 (top-1) | RGB | attention |
| | Yuan et al. [256] | 86.8 (top-1) | RGB | attention |
| | Liu et al. [257] | 86.8 (top-1) | RGB | attention |
| | Li et al. [258] | 86.1 (top-1) | RGB | attention |
| | Tong et al. [259] | 85.8 (top-1) | RGB | attention |
| | Ryoo et al. [260] | 85.4 (top-1) | RGB | attention |
| | Arnab et al. [196] | 84.9 (top-1) | RGB | attention |
| | Duan et al. [250] | 83.9 (top-1) | RGB & skeleton | Conv3D |
| | Fan et al. [195] | 81.2 (top-1) | RGB | attention |
| | Bertasius et al. [194] | 80.7 (top-1) | RGB | attention |
| | Feichtenhofer et al. [245] | 80.4 (top-1) | RGB | Conv3D |
| | Feichtenhofer et al. [246] | 79.8 (top-1) | RGB | Conv3D |
| Kinetics-600 [268] | Yan et al. [253] | 89.6 (top-1) | RGB | attention |
| | Wei et al. [255] | 88.3 (top-1) | RGB | attention |
| | Yuan et al. [256] | 88.0 (top-1) | RGB | attention |
| | Li et al. [258] | 87.9 (top-1) | RGB | attention |
| | Zhang et al. [254] | 87.9 (top-1) | RGB | attention |
| | Ryoo et al. [260] | 86.3 (top-1) | RGB | attention |
| | Liu et al. [191] | 86.1 (top-1) | RGB | attention |
| | Arnab et al. [196] | 85.8 (top-1) | RGB | attention |
| | Fan et al. [195] | 84.1 (top-1) | RGB | attention |
| | Bertasius et al. [194] | 82.2 (top-1) | RGB | attention |
| | Feichtenhofer et al. [245] | 81.9 (top-1) | RGB | Conv3D |
| | Feichtenhofer et al. [246] | 81.8 (top-1) | RGB | Conv3D |
| Kinetics-Skeleton [275] | Duan et al. [250] | 47.7 (top-1) | skeleton | Conv3D |
| | Obinata et al. [248] | 38.6 (top-1) | skeleton | Conv3D |
| | Chen et al. [249] | 38.4 (top-1) | skeleton | Conv3D |
| | Liu et al. [192] | 38.0 (top-1) | skeleton | Conv3D |
| | Ye et al. [220] | 37.9 (top-1) | skeleton | Conv3D+motion |
| | Shi et al. [237] | 37.8 (top-1) | RGB & skeleton | Conv3D+motion |
| | Yang et al. [265] | 37.5 (top-1) | skeleton | Conv3D+motion |
| | Plizzari et al. [204] | 37.4 (top-1) | skeleton | attention |

Table 4: Top state-of-the-art methods on six benchmark datasets.

### Future directions and challenges

We envision that transformers will continue to advance human action recognition. However, there are several main challenges in using transformers for activity recognition that should be addressed by the community.
**Transformers with high performance and low resource cost for action recognition:** Compared with CNN models, transformers are usually huge and computationally expensive, and efficient transformers are needed, especially for devices with limited resources. Compressing and accelerating transformer models for efficient deployment, in particular obtaining transformers with high performance and low resource cost, is therefore an open problem [28]. Some works attempt to compress pre-defined transformer models into smaller ones, while others attempt to design compact models directly. The research carried out for efficient implementation includes pruning networks and decomposition [278; 279], knowledge distillation [280], network quantization [281], and compact architecture design [282]. However, models originally designed for NLP may not be suitable for action recognition.

**Transformer models capable of handling a large number of spatiotemporal tokens and extracting conjoint inter- and intra-modal features:** In the case of multimodality, transformers are used for intra-modality spatial and temporal modeling and cross-modality feature fusion. Handling a large number of spatiotemporal tokens extracted from multiple modalities is a concern. Some methods develop several scalable model variants which factorize self-attention across the space, time, and modality dimensions. For example, in [205], a spatial self-attention module is used to understand intra-frame interactions between different body parts, and a temporal self-attention module to model inter-frame correlations. The two are combined in a two-stream network. To further explore the rich inter-modal interactions and their effects, cross-modal attention mechanisms that can be seamlessly integrated into the transformer building block are also needed to effectively exploit the complementary nature of all modalities.

**Transformer models capable of processing multiple tasks of human action recognition in a single model:** In addition, following the success of some new trends in NLP [283] and CV [284; 285; 286] to develop transformer models capable of processing multiple tasks in a single model, we believe that domains including images, audio, multimodal data, etc. can be unified in only one model. Advances in hybrid models combining different approaches with transformers are also expected [238; 239].

**Practical action recognition:** Existing approaches, such as 3D convolutional neural networks and transformer-based methods, usually process videos in a clip-wise manner, requiring huge GPU memory and fixed-length video clips [287]. Models capable of working on variable-length video clips without requiring large GPU memory are therefore needed. Finally, we think that deep learning solutions for large-scale, real-time, multi-view, and realistic action recognition, along with newer problems such as early recognition, multi-task learning, few-shot learning, unsupervised and semi-supervised learning, and recognition from low-resolution videos, will receive attention in the coming years.

## 5 Conclusions

In this paper, a comprehensive overview of a long-standing concern in human action recognition, i.e., temporal modeling, is presented. It is especially important in recognizing similar actions with subtle time differences. The taxonomy is defined to cover most of the basic and crucial approaches for modeling temporal information, and then some recent methods are reviewed.
Key branches introduced include motion-based feature approaches, three-dimensional convolutional neural networks, recurrent neural networks, transformers, and hybrid methods. In brief, more than 170 recent papers on human action recognition are reviewed. In each category, methods are grouped based on the modalities used: RGB, depth, skeleton, or combinations of multiple modalities. Finally, different approaches are compared with each other using benchmark datasets for human action recognition. This way, popular approaches for temporal modeling along with popular modalities are identified. Across the studied papers, transformers show promising results due to properties such as not suffering from long-range dependency issues and allowing parallel processing. However, transformers are still at an early stage, and the question of whether they will prevail in human action recognition remains open (due to difficulties such as the high cost of training transformer-based architectures, the lack of the inductive biases of CNNs in transformers, and the need for large-scale training to surpass these biases).

## Acknowledgment

This research is partially supported by the Iran National Science Foundation (INSF) [https://insf.org/en](https://insf.org/en) and the Shahid Bahonar University of Kerman [https://uk.ac.ir/en/home](https://uk.ac.ir/en/home) under grant number 98006291.

## Appendix A From self-attention to transformers; formulation overview

As mentioned before, the transformer, primarily used in NLP [25], is a deep learning model based on the self-attention mechanism, which weights the significance of different parts of the input data. Self-attention is the key idea behind transformers; it facilitates capturing 'long-term' dependencies between sequence elements and can be viewed as a kind of non-local filtering operation [288; 261]. Note that encoding such dependencies is a challenge for CNNs and RNNs. The main difference between self-attention and convolution is that the attention filters are calculated dynamically for each input, whereas the filters of the convolution operation are static.

### Building block: self-attention and multi-head attention layer

A self-attention layer projects the input sequence \(X\in R^{n\times d}\) onto three learnable weight matrices, namely Queries, Keys, and Values, denoted as \(W^{Q}\in R^{d\times d_{q}}\), \(W^{K}\in R^{d\times d_{k}}\), and \(W^{V}\in R^{d\times d_{v}}\), where \(n\) is the number of entities in the input sequence, \(d\) is the embedding dimension for representing each entity, and \(d_{q}=d_{k}=d_{v}=d_{\text{model}}\). The output of the self-attention layer \(Z\in R^{n\times d_{v}}\) is obtained as:

\[Q=XW^{Q},\quad K=XW^{K},\quad V=XW^{V} \tag{1}\]

\[Z=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{2}\]

The goal of self-attention is to capture the interaction among all \(n\) entities by computing the scores between each pair of vectors. These scores determine the amount of attention given to other entities when encoding the entity at the current position. Normalizing the scores enhances gradient stability for improved training, and the softmax function is used to convert the scores into probabilities. Vectors with larger probabilities receive additional focus from the following layers. In this way, each entity is encoded in terms of global contextual information.
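For concreteness, Eqs. (1)-(2) can be condensed into a few lines of Python/NumPy. The following is only a minimal sketch: the sequence length, the dimensions, and the random weight initialization are illustrative choices and are not taken from any particular paper.

```python
import numpy as np

def softmax(scores, axis=-1):
    # Subtract the row-wise maximum before exponentiating for numerical stability.
    scores = scores - scores.max(axis=axis, keepdims=True)
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention for one input sequence.

    X   : (n, d)   sequence of n entities, each embedded in d dimensions.
    W_q : (d, d_q), W_k : (d, d_k), W_v : (d, d_v) learnable projections (Eq. 1).
    Returns Z : (n, d_v), the re-encoded sequence (Eq. 2).
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v           # Eq. (1)
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise attention scores
    A = softmax(scores, axis=-1)                  # each row sums to one
    return A @ V                                  # Eq. (2)

# Toy usage: 6 entities (e.g., video tokens), embedding dimension 16, d_model = 8.
rng = np.random.default_rng(0)
n, d, d_model = 6, 16, 8
X = rng.normal(size=(n, d))
W_q, W_k, W_v = (rng.normal(scale=d ** -0.5, size=(d, d_model)) for _ in range(3))
Z = self_attention(X, W_q, W_k, W_v)
print(Z.shape)  # (6, 8)
```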
In addition, to improve the performance of the simple self-attention layer, multi-head attention is used to compute multiple dependencies among different entities of the sequence. Multi-head attention contains multiple self-attention blocks, with each block having its own set of weight matrices \([W^{Q_{i}},W^{K_{i}},W^{V_{i}}]\), where \(i=0\dots(h-1)\) and \(h\) is the number of self-attention blocks. Finally, the outputs of all blocks are concatenated into a single matrix and projected onto a weight matrix. It has been shown in the literature that self-attention (with positional encodings) is theoretically a more flexible operation [289]. In [290], the relationship between self-attention and convolution operations is studied. The empirical results showed that multi-head self-attention (with sufficient parameters) is a more generic operation that can model the expressiveness of convolution as a special case. Self-attention can learn global as well as local features, and provides the capability of learning both the kernel weights and the receptive field adaptively.

### Transformer model

The architecture of the transformer model is an encoder-decoder structure, as shown in Figure 14. The left side of this image shows a simple schematic of the transformer. The encoder module contains \(N\) stacked identical blocks, with each block having two sub-layers: a multi-head self-attention network and a point-wise fully connected feed-forward network. The decoder in the transformer model also comprises \(N\) identical blocks. Each decoder block has three sub-layers: multi-head self-attention, encoder-decoder attention, and feed-forward. The multi-head self-attention and feed-forward sub-layers are similar to the encoder, while the encoder-decoder attention sub-layer performs multi-head attention on the outputs of the corresponding encoder block. The right side of Figure 14 shows more details of the transformer proposed in [25]. Note that after each block in the encoder and decoder, residual connections [189] and layer normalization [291] are also applied. Positional encodings are added to the input sequence to capture the relative position of each entity in the sequence. Since there is no recurrence or convolution in the transformer model, some information about the position of the entities must be embedded in the sequence for the model to use its order. Positional encodings have the same dimension \(d\) as the input embeddings and can be learned or pre-defined, e.g., by sine and cosine functions. In addition, the decoder of the transformer uses previous outputs to predict the following entity in the sequence: the decoder takes inputs from the encoder and the preceding outputs to calculate the next entity of the sequence.
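The two remaining ingredients described above, multi-head attention and a pre-defined sinusoidal positional encoding, can be sketched in the same NumPy style. Again, this is only an illustrative sketch: the number of heads, the head width, and the identity output projection are placeholder choices rather than a reference implementation.

```python
import numpy as np

def sinusoidal_positional_encoding(n, d):
    """Pre-defined sine/cosine positional encodings of shape (n, d), as in [25]."""
    pos = np.arange(n)[:, None]                         # entity positions 0..n-1
    i = np.arange(d)[None, :]                           # embedding dimensions
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def multi_head_attention(X, heads):
    """Multi-head self-attention: one scaled dot-product attention per head,
    concatenation of the head outputs, and a final output projection.

    X     : (n, d) input sequence.
    heads : list of (W_q, W_k, W_v) tuples, one per head.
    """
    outputs = []
    for W_q, W_k, W_v in heads:
        Q, K, V = X @ W_q, X @ W_k, X @ W_v
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        A = np.exp(scores - scores.max(-1, keepdims=True))
        A = A / A.sum(-1, keepdims=True)                # softmax over the keys
        outputs.append(A @ V)
    concat = np.concatenate(outputs, axis=-1)           # (n, h * d_v)
    W_o = np.eye(concat.shape[-1])                      # placeholder output projection
    return concat @ W_o

# Toy usage: 10 tokens, d = 32, h = 4 heads of width d_v = 8.
rng = np.random.default_rng(1)
n, d, h, d_v = 10, 32, 4, 8
X = rng.normal(size=(n, d)) + sinusoidal_positional_encoding(n, d)  # add positions
heads = [tuple(rng.normal(scale=d ** -0.5, size=(d, d_v)) for _ in range(3)) for _ in range(h)]
print(multi_head_attention(X, heads).shape)  # (10, 32)
```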
2309.12391
Entanglement phases, localization and multifractality of monitored free fermions in two dimensions
We investigate the entanglement structure and wave function characteristics of continuously monitored free fermions with U$(1)$-symmetry in two spatial dimensions (2D). By deriving the exact fermion replica-quantum master equation, we line out two approaches: (i) a nonlinear sigma model analogous to disordered free fermions, resulting in an SU$(R)$-symmetric field theory of symmetry class AIII in (2+1) space-time dimensions, or (ii) for bipartite lattices, third quantization leading to a non-Hermitian SU$(2R)$-symmetric Hubbard model. Using exact numerical simulations, we explore the phenomenology of the entanglement transition in 2D monitored fermions, examining entanglement entropy and wave function inverse participation ratio. At weak monitoring, we observe characteristic $L\log L$ entanglement growth and multifractal dimension $D_q=2$, resembling a metallic Fermi liquid. Under strong monitoring, wave functions localize and the entanglement saturates towards an area law. Between these regimes, we identify a high-symmetry point exhibiting both entanglement growth indicative of emergent conformal invariance and maximal multifractal behavior. While this multifractal behavior aligns with the nonlinear sigma model of the Anderson transition, the emergent conformal invariance is an unexpected feature not typically associated with Anderson localization. These discoveries add a new dimension to the study of 2D monitored fermions and underscore the need to further explore the connection between non-unitary quantum dynamics in $D$ dimensions and quantum statistical mechanics in $D+1$ dimensions.
K. Chahine, M. Buchhold
2023-09-21T18:00:01Z
http://arxiv.org/abs/2309.12391v4
# Entanglement phases, localization and multifractality of monitored free fermions in two dimensions ###### Abstract We explore the entanglement structure and wave function properties of continuously monitored free fermions with \(U(1)\)-symmetry in two spatial dimensions (2D). Deriving the fermion replica-Keldysh field theory, and a bosonic effective long-wavelength action, we discuss similarities and differences between entanglement phase transitions of monitored fermions in two dimensions and Anderson-type localization transitions in three dimensions. We then establish the phenomenology of entanglement transitions of monitored fermions in 2D by extracting the entanglement entropy, mutual information, and wave function inverse participation ratio from exact numerical simulations. At weak monitoring, a characteristic \(L\log L\) entanglement growth and a multifractal dimension \(D_{q}=2\) are reminiscent of a metallic Fermi liquid. For strong monitoring, exponentially localized wave functions yield a saturation towards area law entanglement. In between, the critical point displays both an entanglement scaling consistent with an emergent conformal invariance and strong multifractality. The numerical results are in good agreement with a mean-field analysis and a one-loop renormalization group treatment of the field theory. This shapes the picture of a monitoring induced metal-to-insulator transition in the entanglement content and establishes 2D monitored fermions as a novel arena to explore the link between non-unitary quantum dynamics in \(D\) dimensions and quantum statistical mechanics in \(D+1\) dimensions. _Introduction._ - The advances in realizing quantum devices with high fidelity unitary evolution and mid-circuit measurements have put focus on a novel type of quantum dynamics: the competition between scrambling and localization of quantum information. Here the competition roots in the non-commutativity of unitary, i.e., Hamiltonian or gates, and non-unitary, i.e., measurements, dynamics and it is of genuine quantum mechanical origin. In this new setting, two types of evolution protocols have crystallized: _quantum circuits_ composed either of discrete unitary gates and projective measurements [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22] or measurements-only [23; 24; 25; 26; 27; 28; 29; 30] and _monitored Hamiltonians_, featuring a continuous unitary evolution subject to measurements [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52]. The unitary evolution leads to scrambling of quantum information, e.g., by delocalizing particles or qubits, while measurements yield localization, causing a quantum phase transition in the entanglement entropy - the measure for the locality of quantum information. Scrambling versus localization of quantum information in a \(D\)-dimensional monitored system has been linked to aspects of localization-delocalization transitions in \(D+1\)-dimensional statistical mechanics of disordered quantum systems, both for circuits [53; 54; 28; 55] and Hamiltonian setups [56; 57; 58; 59; 33; 34]. In particular, for a monitored \(D\)-dimensional Hamiltonian evolution, this viewpoint interprets random, non-unitary measurements similarly to a static disorder potential in \(D+1\) dimensions. 
However, the majority of works so far has focused on dynamics in one spatial dimension and, for the particular case of fermions with \(U(1)\)-symmetry, has reported results that are partially at variance [31; 32; 33; 36; 37]. This is a challenging limit: the quantum dynamics in \(D=1\) spatial dimension displays peculiar properties [60; 61], just like the statistical mechanics of disordered systems at the lower critical dimension \(D+1=2\) [62; 63; 64]. Here we explore the link between monitored quantum systems and quantum disordered ground states in a new arena, which displays an even richer phenomenology and is less ambiguous. We study the dynamics of weakly monitored, \(U(1)\)-symmetric free fermions in \(D=2\) spatial dimensions in a two-pronged approach, combining analytical tools and large-scale numerical simulations. A strong link is provided by fermion Keldysh replica field theory - its structure is reminiscent of the field theory for free disordered fermions, i.e., it contains a dissipative four-fermion vertex [65].

Figure 1: (a) Free fermions hopping on an \(L\times L\) square lattice subject to continuous monitoring with rate \(\gamma\). (b) Sketch of the time evolution. Blue sheets represent snapshots of trajectories in which observables are computed. The half-system entanglement entropy is obtained from strips \(A=L\times L/2\) shown in yellow. (c) Schematic phase diagram showing observables and their behavior in each regime. (d) The entanglement entropy \(S(A)\) shows the onset of an area law for strong monitoring and logarithmic violations for weak monitoring. (e) Multifractal exponent \(D_{q}\) of the inverse participation ratio \(P_{q}\): At weak monitoring (blue) the system is metallic with \(D_{q}=2\) for all \(q\). At the critical point (pink), the wave functions exhibit multifractality with \(D_{2}=1.80(9)\), \(D_{3}=1.1(5)\) and \(D_{4}=0.7(4)\).

However, we also point out clear differences: the vertex is local in space and time and it has a reduced rotation symmetry in Keldysh coordinates. This yields a modified long-wavelength theory - an SU(\(R\))-symmetric nonlinear sigma model with a space-time rotation symmetry and dynamical critical exponent \(z=1\), corresponding to the chiral unitary class AIII in three dimensions. We reveal the phenomenology of a monitoring-induced entanglement phase transition (MIPT), which is reminiscent of disorder-induced localization transitions: at weak monitoring, the half-system entanglement entropy displays a characteristic \(L\log L\)-growth, which saturates towards an area law at strong monitoring. Determining the inverse participation ratio [66; 53; 54; 67] of the monitored wave functions - a key observable in the context of Anderson-type localization [68; 69] - we demonstrate multifractal behavior at the transition point. For weak monitoring, instead, the wave functions obey a behavior familiar from metallic ground states. A further link to localization transitions is established through the mutual information \(\mathcal{I}(A,B)\) between two subregions \(A,B\), separated by a distance \(d_{AB}\). It shows a scale-invariant decay at weak monitoring, consistent with metallic wave functions, and the onset of exponential localization at strong monitoring. Resolving exact localization for strong monitoring turns out to be challenging due to an apparently large localization length. We thus map out the localization transition by two different scaling approaches and by revealing an emergent conformal invariance at the critical point.
A peculiar role is played by purification of an initial mixed state. We numerically show that the purification time scale directly reveals the multifractal exponent. This is particularly relevant: for free fermions the subsystem entanglement is exclusively determined by particle number fluctuations, equating the purification transition with an MIPT _and_ a charge sharpening transition [70; 71; 72] with multifractal behavior.

_Microscopic model._ - We consider fermions on a half-filled 2D \(L\times L\) square lattice, described by creation and annihilation operators \(\{\hat{c}_{\ell},\,\hat{c}_{\ell^{\prime}}^{\dagger}\}=\delta_{\ell,\ell^{\prime}}\) on lattice sites \(\ell,\ell^{\prime}\). The fermions undergo coherent nearest-neighbour hopping while the particle number \(\hat{n}_{\ell}\) at each site is continuously monitored. In the quantum state diffusion protocol they follow the stochastic Schrödinger equation (SSE) [31; 32] with monitoring rate \(\gamma\),

\[d\ket{\psi_{t}}=\Big{[}-i(\hat{H}-i\hat{D})dt+\sum_{\ell}\xi_{\ell,t}(\hat{n}_{\ell}-\langle\hat{n}_{\ell}\rangle_{t})\Big{]}\ket{\psi_{t}}, \tag{1}\]
\[\hat{H}=-\sum_{\langle\ell,m\rangle}\big{(}\hat{c}_{m}^{\dagger}\hat{c}_{\ell}+\hat{c}_{\ell}^{\dagger}\hat{c}_{m}\big{)},\quad\hat{D}=\frac{\gamma}{2}\sum_{\ell}(\hat{n}_{\ell}-\langle\hat{n}_{\ell}\rangle_{t})^{2}.\]

Here \(\xi_{\ell,t}\) is a Gaussian white noise with zero mean \(\overline{\xi_{\ell,t}}=0\) and short-ranged correlations \(\overline{\xi_{\ell,t}\xi_{\ell^{\prime},t^{\prime}}}=\gamma dt\,\delta_{\ell,\ell^{\prime}}\delta(t-t^{\prime})\). The overbar denotes the trajectory average.

_Replica master equation._ - An analytical treatment of the SSE (1) demands a proper average over the random measurement outcomes, which is done in a replica framework for the unnormalized wave function \(\ket{\vec{\psi}_{t}}\). It is convenient to express the measurement update via the generalized projector

\[\hat{M}(\{J_{\ell,t}\})=\prod_{\ell}[\tfrac{2\gamma dt}{\pi}]^{\frac{1}{4}}\exp\Bigl{[}-\gamma dt(J_{\ell,t}-\hat{n}_{\ell})^{2}\Bigr{]}. \tag{2}\]

After an outcome \(J_{\ell,t}\in\mathds{R}\) was recorded in a weak measurement of \(\hat{n}_{\ell}\) at time \(t\), it evolves the wave function \(\ket{\vec{\psi}_{t}}\rightarrow\ket{\vec{\psi}_{t+dt}}=\exp(-i\hat{H}dt)\hat{M}(\{J_{\ell,t}\})\ket{\vec{\psi}_{t}}\). The Born probability to detect outcome \(\{J_{\ell,t}\}\) is \(p(\{J_{\ell,t}\})=\langle\vec{\psi}_{t}|\hat{M}(\{J_{\ell,t}\})^{2}|\vec{\psi}_{t}\rangle\langle\vec{\psi}_{t}|\vec{\psi}_{t}\rangle^{-1}=\langle\vec{\psi}_{t+dt}|\vec{\psi}_{t+dt}\rangle\langle\vec{\psi}_{t}|\vec{\psi}_{t}\rangle^{-1}\). This formulation is equivalent to the SSE (1) in the limit of infinitesimal time steps \(dt\to 0^{+}\). Then the Born probabilities are implicitly implemented by replacing \(J_{\ell,t}\rightarrow\langle\hat{n}_{\ell}\rangle_{t}+\frac{\xi_{\ell,t}}{2\gamma dt}\) with the Gaussian white noise \(\xi_{\ell,t}\) defined above [73; 46]. Now we introduce \(R\) replicas of the fermion Hilbert space, labeled by \(r=1,...,R\), and fermion operators acting on Hilbert space \(r\): \(\hat{c}_{\ell}^{(r)},\hat{c}_{\ell}^{(r)\dagger}\) and \(\hat{n}_{\ell}^{(r)}=\hat{c}_{\ell}^{(r)\dagger}\hat{c}_{\ell}^{(r)}\).
For fixed \(R,M>0\), we define the measurement-averaged density matrix \[\rho_{R,M} =\overline{p(\{J_{\ell,t}\})^{R}\otimes_{r=1}^{M}\ket{\psi_{t}} \!\bra{\psi_{t}}}=\overline{\mathrm{Tr}\bigl{[}\ket{\vec{\psi}_{t}}\!\bra{\vec{ \psi}_{t}}\!\bigr{]}^{R-\frac{M}{2}}\ket{\vec{\psi}_{t}}\!\bra{\vec{\psi}_{t}}}\] \[=\mathrm{Tr}_{r>M}\hat{p},\ \mathrm{with}\ \ \tilde{p}=\overline{\phi_{r=1}^{R}\ket{\vec{ \psi}_{t}}\!\bra{\vec{\psi}_{t}}}. \tag{3}\] Here, the trajectory average \(\overline{\cdot\cdot\cdot}\) corresponds to integrating over all possible outcomes \(J_{\ell,t}\in\mathds{R}\). The equation relates the _nonlinear average_ over an \(M\)-replicated, normalized wave function, weighted with Born probability \(p(\{J_{\ell,t}\})^{R}\) to a _linear average_ over \(R\)-replicated, unnormalized wave functions. It is well-defined for \(R\geq M\). For \(M>1\), the physically meaningful case \(R=1\) has to be obtained via analytic continuation. We consider the evolution of \(\tilde{p}\). Taking \(\partial_{t}\tilde{p}\), performing the measurement average and expanding the result up to \(\mathcal{O}(dt)\) yields the replica quantum master equation (rQME) [73] \[\partial_{t}\tilde{p}= [\tilde{p},\hat{H}_{R}]-\frac{\gamma}{R}\sum_{\ell}\left([\hat{N }_{\ell}(R-\hat{N}_{\ell}),\tilde{p}]+\tfrac{1}{2}[\hat{N}_{\ell},[\hat{N}_{ \ell},\tilde{p}]]\right). \tag{4}\] Here, \(\hat{H}_{R}=\sum_{r}\hat{H}^{(r)}\) and \(\hat{N}_{\ell}=\sum_{r}\hat{n}_{\ell}^{(r)}\) are the sum over each \(r\)-replicated Hamiltonian and number operator. Both are quadratic in fermions and thus invariant under replications. A symmetry, which is inherited by \(\tilde{p}\). The evolution features a competition between \(\hat{H}_{R}\) and the second term \(\sim\hat{N}_{\ell}\), which induces the entanglement phase transition: the latter implements a non-Hermitian evolution towards a replica-aligned local particle density \(\hat{N}_{\ell}=0,R\). The Hamiltonian instead pushes \(\hat{N}_{\ell}\rightarrow\frac{R}{2}\), aiming to maximize the kinetic energy. _Replica field theory._ - The fermion rQME is equivalent to a Keldysh path integral [74; 65], where the fermion operators acting at time \(t\) and site \(\ell\) are replaced by Grassmann variables \((\hat{c}_{\ell}^{(r)},\hat{c}_{\ell}^{(r)})\rightarrow(\tilde{\psi}_{\alpha, \mathbf{x}}^{(r)},\psi_{\alpha,\mathbf{x}}^{(r)})\). Each Grassmann variable acquires an additional Keldysh index \(\alpha=1,2\), distinguishing retarded/advanced and Keldysh sectors, and we draw the continuum limit of the space-time lattice \((t,\ell)\rightarrow(t,\mathbf{x})\equiv X\in\mathds{R}^{3}\). Following standard procedure [73; 65; 74], the rQME yields a partition function \(Z=\int\mathcal{D}[\{\psi,\tilde{\psi}\}]\exp(iS_{\psi})\) with the action \[S_{\psi}=\int_{X}\left\{\vec{\psi}_{X}G_{0}^{-1}\psi_{X}-\frac{\gamma}{2R} \,\mathrm{Tr}\bigl{[}(\sigma_{X}^{K}\psi_{X}\vec{\psi}_{X})^{2}\bigr{]} \right\}. \tag{5}\] Here \(\psi_{X}=\{\psi_{a,x}^{(r)}\}\) is a \(2R\)-Grassmann vector and the trace runs over replica and Keldysh index; the Pauli matrix \(\sigma_{x}^{K}\) acts on the Keldysh index \(\alpha\). At half-filling, the bare Keldysh Green's function in momentum space \(P\equiv(\omega,\mathbf{p})\) is \(G_{0}(P)=\delta_{r,r^{\prime}}[(\omega-\epsilon_{\mathbf{p}})\mathds{1}-i \Omega^{+}\sigma_{x}^{K}]^{-1}\) with dispersion \(\epsilon_{\mathbf{q}}=2\cos(p_{x})+2\cos(p_{y})\). The action \(S_{\psi}\) is reminiscent of the Keldysh action for disordered fermions with \(U(1)\)-symmetry [65]. 
The quartic fermion vertex, however, displays two crucial differences compared to static disorder, which are intrinsic to a monitored, i.e., dynamic, theory. Firstly, the structure in Keldysh space is modified by \(\sigma_{x}^{K}\) (disorder: \(\sigma_{x}^{K}\rightarrow\mathds{1}\)). This reduces the symmetry of rotations in Keldysh-replica space: it gaps out \(R^{2}-1\) rotation modes, which can be treated perturbatively. The result is an _emergent space-time invariance_ at long wavelengths and a dynamical critical exponent \(z=1\)[73]. This enables an emergent conformal invariance at the critical point. Secondly, the vertex is space-time local: measurements, unlike disorder, vary in _time and space_. This eliminates the possibility of fermions to interfere with their time-reversed partners and gives rise to _two local_ long-wavelength modes. In 2D, fermions do not separate into left- and right-movers [33] and bosonization amounts to a Hubbard-Stratonovich decoupling \(\psi_{X}\bar{\psi}_{X}\rightarrow\frac{1}{2}Q_{X}\in\mathbb{C}^{2R\times 2R}\). This yields \[iS_{Q}=iS_{0}+\frac{\gamma}{8R}\int_{X}\left[\left[\mathrm{Tr}\left(\sigma_{x} ^{K}Q_{X}\right)\right]^{2}-\mathrm{Tr}\left[(\sigma_{x}^{K}Q_{X})^{2}\right] \right]. \tag{6}\] with \(iS_{0}=\int_{X}\mathrm{Tr}\ln\left(G_{0}^{-1}+i\frac{\gamma}{2R}Q_{X}\right)\). Each \(Q_{X}\) is a local \(2R\times 2R\) matrix in Keldysh and replica index. The replica-diagonals \(Q_{\alpha R,X}^{(r)}=\psi_{\alpha,X}^{(r)}\bar{\psi}_{\beta,X}^{(r)}\) represent the physical fermion bilinears, such as, e.g., the fermion density \(n_{X}^{(r)}=\frac{1}{2}(Q_{12,X}^{(r)}+Q_{21,X}^{(r)})\). This provides access to the connected correlation function \(C(\mathbf{x}-\mathbf{x}^{\prime},t)=\langle n_{x,t}^{(r)}|n_{x,t}^{(r)}-n_{x,t }^{(r^{\prime})}\rangle=\langle\overline{n}_{x}\overline{n}_{x}\rangle- \langle\overline{n}_{x}\rangle\langle\overline{n}_{x}\rangle\) in the field theory framework [34]. For free fermions this further enables to compute the entanglement entropy \(S_{A}=2\zeta(2)C_{A}^{(2)}+O(C_{A}^{(2n_{2}+4)})\), where \(C_{A}^{(2n)}\) is the \(2n\)-th order cumulant of the particle number \(\hat{N}_{A}=\sum_{\ell\in A}\hat{n}_{\ell}\) in subregion \(A\)[75, 76]. In particular, \(C_{A}^{(2)}=\int_{X,\mathbf{x}^{\prime}\in A}C(\mathbf{x}-\mathbf{x}^{\prime},t)\), which has to be computed with 'absorbing' temporal boundary conditions [73, 34]. We analytically compute the entanglement in a Gaussian approximation: we determine the saddle-point of \(S_{Q}\) - equivalent to the self-consistent Born approximation with \(Q_{X}=\delta_{r,r^{\prime}}\sigma_{z}\) and the fermion retarded Green's function \(g^{R}(P)=(\omega-\epsilon_{\mathbf{p}}+i\frac{\gamma}{2R})^{-1}\) - then we expand around the saddle up to quadratic order in \(Q_{X}\). Replica diagonal and off-diagonal terms decouple and for \(|\mathbf{p}|\ll\pi\) and \(A=L/2\times L\) we find \[C(P)=\frac{\mathbf{p}^{2}}{\gamma(\omega^{\pm}+\mathbf{p}^{2})}\text{ and }S_{A}=\frac{\pi}{3}\ln(\pi L/2)L. \tag{7}\] _Renormalization group analysis._ - In order to incorporate long-wavelength fluctuations at the phase transition, we parameterize each matrix \(Q_{X}=\mathcal{R}_{X}\Lambda\mathcal{R}_{-1}^{-1}\) in terms of rotations \(\mathcal{R}_{X}\) around the saddle point \(\Lambda=\delta_{r,r^{\prime}}\sigma_{x}^{K}\). 
For the \(2R\times 2R\) matrix \(Q_{X}\), this yields \(4R^{2}\) generators of rotations, \(\sigma_{a}^{K}\otimes\Theta_{\alpha}\), \(\alpha\in\{0,x,y,z\}\), which are compatible with the \(U(2R)\)-symmetry. Each \(\Theta_{\alpha}\) is an \(R\times R\) hermitian matrix and \(\sigma_{a}^{K}\) are Pauli matrices in Keldysh space with \(\sigma_{0}^{K}=\mathds{1}_{2\times 2}\). The saddle point \(\Lambda\) is not rotated by the \(2R^{2}\) generators \(\sim\sigma_{0},\sigma_{z}\), leaving \(2R^{2}\) possible generators and a \(U(2R)/U(R)\times U(R)\) manifold. The quartic part of the action \(S_{Q}\) additionally commutes with \(\sigma_{x}^{K}\) but not with \(\sigma_{y}^{K}\). Thus rotations generated by the former (latter) are gapless (gapped), except for the trace of \(\Theta_{y}\). Performing the common derivation of the nonlinear sigma model [73, 65], and eliminating the gapped \(\Theta_{y}\)-modes in a Gaussian approximation yields the action

\[iS_{U}=-\frac{g}{2}\int_{X}\left[\partial_{t}U\partial_{t}U^{-1}+\nabla U\nabla U^{-1}\right] \tag{8}\]

with \(U\in\mathrm{SU}(R)\) and \(g=(32\gamma^{2})^{-\frac{1}{2}}\). This sigma model also emerges from a disordered Hamiltonian in the chiral unitary class AIII [69]. In the replica limit \(R\to 1\) and in \(2+1\) dimensions, the one-loop renormalization group (RG) flow for the dimensionless coupling \(\tilde{g}=gl\) is (RG scale \(l\)) [62, 63, 77]

\[\partial\tilde{g}(l)/\partial\ln(l)=\tilde{g}(l)-\frac{1}{4\pi}\ \Rightarrow\ \tilde{g}(l)=\frac{1}{4\pi}+\Big{(}\tilde{g}(l_{0})-\frac{1}{4\pi}\Big{)}\frac{l}{l_{0}}. \tag{9}\]

Here, we have introduced the UV-scale \(l_{0}=1\) (dimensionless lattice spacing) and \(\tilde{g}(l_{0})=g\). The flow equation predicts a monitoring-induced phase transition at a _critical monitoring rate_ \(\gamma_{c,\mathrm{th}}=\frac{\pi}{\sqrt{2}}\approx 2.22\). For \(\gamma<\gamma_{c,\mathrm{th}}\) (\(\gamma>\gamma_{c,\mathrm{th}}\)), the theory flows to weak (strong) coupling, i.e., \(\gamma\to 0\) (\(\gamma\to\infty\)).

Figure 2: Entanglement phase transition. (a) A sharp crossing of \(\gamma c(\gamma)\) locates the critical point at \(\gamma_{c}=2.15\). (b) Finite size scaling collapse of the entanglement entropy for \(\gamma_{c}=2.15\). A different exponent is found for the metallic (\(\nu=1\)) and the localized phase (\(\nu=1.12\)). (c) Entropy density scaling with strip size \(A=L\times\ell_{A}\) compared with Eq. (12). Deviations are visible for all curves except for \(\gamma=\gamma_{c}\) (see [73] for original data). (d) A scaling collapse of the mutual information as a function of the chord distance \(\tilde{d}\) is only possible in the metallic phase. The inset shows the unrescaled data on a semi-log scale, highlighting the exponential decay of the mutual information for \(\gamma\gg\gamma_{c}\).

_Numerical simulations._ - The SSE (1) is quadratic in \(\hat{c}_{\ell},\hat{c}_{\ell}^{\dagger}\) and number conserving. It is exactly and efficiently simulated with Gaussian wave functions [31, 32] of the form:

\[\ket{\psi_{t}}=\prod_{1\leq s\leq L^{2}/2}c_{s}^{\dagger}\ket{0},\quad c_{s}^{\dagger}=\sum_{1\leq\ell\leq L^{2}}\psi_{\ell,t}^{s}c_{\ell}^{\dagger}. \tag{10}\]

\(\psi_{\ell,t}^{s}\in\mathbb{C}\) is the single-particle wave function of fermion \(s\) at site \(\ell\). Both the state \(\ket{\psi_{t}}\) and the wave functions \(\psi_{\ell,t}^{s}\) at time \(t\) implicitly depend on the history of noise events \(\{\xi_{\ell,t^{\prime}}\}\) with \(t^{\prime}<t\).
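To make the structure of such a simulation concrete, the following Python sketch propagates the matrix of single-particle wave functions over one quantum-state-diffusion step, using a commonly employed scheme for monitored free fermions: apply the single-particle hopping propagator, multiply by the site-diagonal non-unitary factor that follows from Eq. (2) with \(J_{\ell,t}=\langle\hat{n}_{\ell}\rangle_{t}+\xi_{\ell,t}/(2\gamma dt)\), and restore orthonormal orbitals by a QR decomposition. The lattice geometry, the first-order operator splitting, and all variable names are illustrative assumptions, not the authors' actual (Julia) implementation.

```python
import numpy as np
from scipy.linalg import expm, qr

def hopping_matrix(L):
    """Single-particle matrix of -sum_<l,m> (c_m^dag c_l + h.c.) on an L x L
    square lattice with periodic boundaries (boundary conditions assumed here)."""
    H = np.zeros((L * L, L * L))
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for j in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
                H[i, j] = H[j, i] = -1.0
    return H

def qsd_step(Psi, U_dt, gamma, dt, rng):
    """One quantum-state-diffusion step for a Slater determinant.

    Psi  : (L^2, N) complex matrix of single-particle wave functions psi^s_{l,t}.
    U_dt : exp(-i H dt) acting on the single-particle index.
    """
    n_exp = np.sum(np.abs(Psi) ** 2, axis=1)                  # <n_l>_t per site
    xi = rng.normal(scale=np.sqrt(gamma * dt), size=len(n_exp))
    Psi = U_dt @ Psi                                          # coherent hopping
    # Non-unitary measurement factor exp[(xi_l + gamma*dt*(2<n_l>-1)) n_l],
    # acting site-diagonally on the orbitals (O(dt) sketch of the update).
    Psi = np.exp(xi + gamma * dt * (2 * n_exp - 1))[:, None] * Psi
    Psi, _ = qr(Psi, mode='economic')                         # restore orthonormal orbitals
    return Psi

# Toy usage: L = 6 lattice at half filling, N = L^2 / 2 fermions.
rng = np.random.default_rng(2)
L, gamma, dt = 6, 2.0, 0.05
N = L * L // 2
U_dt = expm(-1j * hopping_matrix(L) * dt)
Psi, _ = qr(rng.normal(size=(L * L, N)) + 1j * rng.normal(size=(L * L, N)), mode='economic')
for _ in range(200):
    Psi = qsd_step(Psi, U_dt, gamma, dt, rng)
print(np.sum(np.abs(Psi) ** 2))  # total particle number stays N
```

The site-resolved densities \(\langle\hat{n}_{\ell}\rangle_{t}\) used inside the update are exactly the quantities entering the feedback term of Eq. (1).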
We initialize each \(\ket{\psi_{t=0}}\) in a random state and evolve it until observables reach stationary values. The observables here are the von Neumann entanglement entropy \(S(A)\), the mutual information \(\mathcal{I}(A,B)\) and the instantaneous single-particle wave functions \(\psi_{\ell}^{s}\). The trajectory average of \(S(A)\) and \(\mathcal{I}(A,B)\) for 2D subsystems \(A,B\) are \[S(A)=-\operatorname{Tr}\!\left(\overline{\rho_{A}\ln\rho_{A}}\right),\;\mathcal{ I}(A,B)=S(A)+S(B)-S(A\cup B).\] The reduced density matrix \(\rho_{A}=\operatorname{Tr}_{A}\ket{\psi_{t}}\!\bra{\psi_{t}}\) is obtained by tracing over the complement of \(A\)[73]. Here, we take each subsystem as a strip of size \(A=L\times l_{A}\), see Fig. 1(b). The wave functions \(\psi_{\ell}^{s}\) are characterized by their inverse participation ratio (IPR) \(P_{q}\), its variance \(\sigma_{q}\) and anomalous dimension \(D_{q}\), \[P_{q}=2L^{-2}\sum_{s,\ell}\ket{\psi_{\ell,\ell}^{s}}^{2q}\sim L^{-D_{q}(q-1)}, \;\sigma_{q}^{2}=\operatorname{var}(\ln P_{q}). \tag{11}\] _Entanglement phase transition._ - Numerical simulations underpin two different entanglement regimes: for weak measurements the entanglement entropy for \(A=L\times L/2\) grows as \(S(A)=c(\gamma)L\ln(L)+s(\gamma)L\), which is reminiscent of a 2D metallic state and consistent with the field theory result in Eq. (7). The prefactor of the logarithmic growth coincides well with the predicted value \(c(\gamma)\sim\frac{\pi}{\gamma\gamma}\), shown in Fig. 2(a). For strong measurements one observes the onset of a saturation towards an area law \(S(A)\to s(\gamma)L\), visible in Fig. 1(d). However, even up to linear sizes of \(L=80\) and up to \(\gamma\sim 4.5\) a faster than linear growth of \(S(A)\) is observed. In order to detect the measurement-induced entanglement phase transition, we extract the prefactor \(c(\gamma)\). It is shown in Fig. 2(a) and it displays a sharp crossing at \(\gamma=2.15\). This indicates a phase transition close to the theory prediction \(\gamma_{c,\mathrm{th}}=2.22\). We further confirm the entanglement transition and its scaling in two steps. (i) We perform a scaling analysis of the entanglement entropy according to the one-loop RG result for the renormalized coupling \(\tilde{g}(l)\) in Eq. (9). In lowest order approximation \(\tilde{g}(l)\) replaces the prefactor \(\sim\gamma^{-1}\) in the entanglement scaling in Eq. (7). This predicts a deviation \(|S(\gamma)-S(\gamma_{c})|\sim|1-\frac{\pi}{\gamma}|^{r}L\ln L\) with one-loop critical exponent \(\nu=1\). For \(\gamma_{c}=2.15\), we show the scaling collapse in Fig. 2(b). It confirms the functional form \(\sim L\ln(L)\) but predicts different scaling exponents for the metallic (\(\nu=1\)) and the localized (\(\nu=1.12\)) side of the transition. This may be attributed to a strong renormalization of the Gaussian prediction in Eq. (7) in the localized phase. (ii) We consider the scaling of the entropy line density \(s(A)=S(A)/L\) for fixed system size \(L=64\) and variable strip size \(A=L\times l_{A}\) in Fig. 2(c). We compare it with the formula \[s(A)=c(\gamma)\ln\left[\frac{L}{\pi}\sin\left(\frac{\pi l_{A}}{L}\right)\right] +s_{0}(\gamma), \tag{12}\] which we expect to hold for a quasi-one-dimensional system with conformal symmetry [78, 79]. At \(\gamma=\gamma_{c}\), we observe a perfect match with Eq. 
(12), while the entanglement curve is flatter (sharper) for \(\gamma>\gamma_{c}\) (\(\gamma<\gamma_{c}\)), indicating a higher symmetry at the critical value, i.e., confirming a critical point.

_Mutual Information._ - The mutual information \(\mathcal{I}(A,B)\) represents an upper bound for the correlations between two disjoint subsystems \(A,B\) [80]. For free fermions with \(U(1)\) symmetry, it is related to particle number fluctuations between \(A\) and \(B\) via the entanglement entropy [75, 76]. To leading order, one finds \(\mathcal{I}(A,B)=4\zeta(2)\sum_{s\in A,s^{\prime}\in B}C(\mathbf{x}-\mathbf{x}^{\prime},t)\). For two strips of size \(L\times 1\), separated by a distance \(d_{AB}\), see Fig. 2(d), Eq. 7 predicts \(\mathcal{I}(A,B)\sim d_{AB}^{-2}/\gamma\). This is confirmed by a scaling collapse of \(\mathcal{I}(A,B)\) for \(\gamma\leq\gamma_{c}\) in Fig. 2(d). For \(\gamma\gg\gamma_{c}\), one instead observes an exponential decay \(\log(\mathcal{I}(A,B))\sim-d_{AB}\), consistent with an area-law, localized state. We note that an apparent algebraic decay is still found for relatively large \(\gamma>\gamma_{c}\). The scaling collapse, however, is absent. We interpret this as a signature of a large correlation length in the area law phase.

Figure 3: Multifractality and purification. (a) Multifractal exponent \(D_{q}\) as a function of \(\gamma\) for different values of \(q\) and obtained by two different methods. (b) The variance \(\sigma_{q}\) of the distribution \(\mathcal{P}(\ln P_{q})\), both for the monitored fermions (blue crosses) and the analytical prediction from the three-dimensional Anderson model [68] (orange and green lines). (c) Finite size scaling for the ancilla purification. The purification rate \(\tau_{\mathrm{pur}}\sim L^{\alpha}\) distinguishes strong (red, \(\alpha=0\)) and weak (blue, \(\alpha=2\)) monitoring. (d) Comparison of the purification rate exponent \(\alpha\) and the multifractal exponent \(D_{2}\) for different \(\gamma\) (dashed lines are guides to the eye). Inset: sketch of the purification setup with ancilla (yellow).

_Multifractality._ - A striking feature of Anderson-type localization transitions is multifractality at the critical point [69]. It describes strong fluctuations of single-particle wave functions in terms of the so-called multifractal exponent \(D_{q}\) of the IPR, defined in Eq. (11). It distinguishes a metallic phase, with \(D_{q}=D\) the spatial dimension, from a localized, area law phase where \(D_{q}\to 0\), both independent of \(q\). At a multifractal critical point \(0<D_{q}<2\) becomes a non-trivial function of \(q\). We observe clear signatures of multifractality, demonstrated by the scaling of \(P_{q}\) at the critical point in Fig. 1(e). In Fig. 3(a) we show \(D_{q}\) for a range of \(q\) and \(\gamma\) values, which witnesses an extended region of multifractal behaviour. We compute \(D_{q}\) in two different ways, from a fit of \(\ln(P_{q})/(1-q)\) versus \(\ln(L)\) for \(L=10-64\) and directly through \((1-q)D_{q}=\ln(P_{q})/\ln(L)\) at \(L=64\). \(D_{q}\) shows a sigmoid behaviour: it saturates to \(D_{q}\to 2\) for \(\gamma\to 0\) for all \(q\), as expected, and slowly decays with increasing monitoring strength. We note that \(D_{q}\) approaches the value \(D_{q}\to 0\) only very slowly, i.e., for both \(L\) and \(\gamma\) large, consistent with a large localization length.
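As a concrete illustration of Eq. (11) and of the two \(D_{q}\) estimates quoted above, a short Python sketch is given below; the array layout (orbitals stored as the columns of an \(L^{2}\times N\) matrix) and the plane-wave sanity check are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def inverse_participation_ratio(Psi, q):
    """P_q of Eq. (11) from the matrix of single-particle wave functions.

    Psi : (L^2, N) complex matrix, column s = wave function psi^s_{l,t}.
    Each column is normalized before taking the moment.
    """
    L2 = Psi.shape[0]
    prob = np.abs(Psi) ** 2
    prob = prob / prob.sum(axis=0, keepdims=True)       # |psi^s_l|^2, normalized per orbital
    return (2.0 / L2) * np.sum(prob ** q)

def D_q_single_size(P_q, q, L):
    """Direct estimate (1-q) D_q = ln P_q / ln L at a single linear size L."""
    return np.log(P_q) / ((1 - q) * np.log(L))

def D_q_from_fit(P_q_values, q, L_values):
    """Estimate D_q from the slope of ln P_q / (1-q) versus ln L over several sizes."""
    slope, _ = np.polyfit(np.log(L_values), np.log(P_q_values) / (1 - q), deg=1)
    return slope

# Sanity check on perfectly delocalized orbitals (|psi^s_l|^2 = 1/L^2): D_q -> 2.
L = 32
N = L * L // 2
Psi = np.exp(2j * np.pi * np.random.default_rng(3).random((L * L, N))) / L
for q in (2, 3, 4):
    Pq = inverse_participation_ratio(Psi, q)
    print(q, D_q_single_size(Pq, q, L))   # close to 2 for extended states
```

For perfectly extended (metallic) states the estimate returns \(D_{q}=2\), matching the weak-monitoring limit quoted above.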
A second key observable characterizing multifractality is the distribution function of the IPR, \(\mathcal{P}(\ln P_{q})\), in particular its variance \(\sigma_{q}\). We show \(\sigma_{q}\) at the critical point \(\gamma_{c}\) in Fig. 3(b) and compare it to the analytical prediction for the orthogonal class A in \(D=3\) spatial dimensions. It predicts [68]

\[\sigma_{q}=\left\{\begin{array}{ll}2\pi b|q(q-1)|&\text{for }|q|<q_{+}\\ q/q_{+}&\text{for }|q|>q_{+}\end{array}\right., \tag{13}\]

with \(b\simeq 0.088\) and \(q_{+}\simeq\sqrt{2}\). Despite the different symmetry class and replica limit (\(R\to 1\)), we find good agreement between our data and Eq. (13). This further underpins the link between monitored fermions and Anderson-type localization.

_Purification._ - MIPTs can be revealed from the purification of an initial mixed state [81, 82, 83, 4]. Here we extract the purification time scale \(\tau_{\text{pur}}\sim L^{\alpha}\) in a modified setup. In an initial stage (\(t<0\)), the \(L\times L\) lattice is coupled to an \(L\times 1\) ancilla lattice via coherent hopping (\(\gamma=0\)). It entangles the system with the ancilla by bringing fermions into a superposition between both, see Fig. 3(d). At time \(t=0\), the ancilla is decoupled and only the \(L\times L\) lattice undergoes monitoring described by the SSE (1). The purity is provided by the entropy density of the ancilla \(s_{\text{anc}}\equiv S(L\times 1)/L\sim\exp(-t/\tau_{\text{pur}})\). For \(\gamma<\gamma_{c}\), we observe that the purification exponent \(\alpha\) closely resembles the multifractal exponent \(D_{2}\). For \(\gamma>\gamma_{c}\), it rapidly approaches \(\alpha\to 0\). The relation between purification and multifractality appears intuitive: the system purifies when particles (or holes) that were initially shared by system and ancilla are localized in the system by measurement. The probability of finding such a particle in one measurement step will depend on how its wave function is scrambled, i.e., on its multifractal structure. For free fermions, purification can thus be used to probe the multifractal behavior at the MIPT. Due to the equivalence of particle number fluctuations and entanglement entropy, purification, including multifractal behavior, is identical to charge sharpening.

_Outlook._ - Continuously monitored, free fermions in 2D undergo an MIPT, whose effective field theory and phenomenology are strongly reminiscent of an Anderson-type localization transition in 3D. However, Anderson transitions are quantum phase transitions - they vitally depend on wave function interference. This raises the question of how strong the link between monitored fermions in \(D\) dimensions and Anderson localization in \(D+1\) dimensions is - and whether monitored fermions represent a new universality class, for which the \(D\)-dimensional quantum dynamics maps to a \((D+1)\)-dimensional _quantum_ statistical mechanics. The fact that both the entanglement transition and the peculiar phenomenon of multifractality are observable via purification makes 2D monitored fermions an ideal test bed to explore these questions. An interesting future direction would then be to devise experimental schemes to explore the transition [84], for instance through appropriate feedback protocols [85, 86, 87, 88, 89].

## Acknowledgements

We thank S. Diehl, C. M. Jiang, K. Klocke, I. Poboiko, M. Szyniszewski, X. Turkeshi and J. H. Wilson for fruitful discussions.
We acknowledge support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 390534769, and by the DFG Collaborative Research Center (CRC) 183 Project No. 277101999 - project B02. The code for our numerical computations was implemented in Julia [90]. _Note added -_ During the completion of the manuscript, we learned about related works on monitored fermions in \(D\geq 2\) that appeared simultaneously [91, 92].
2309.10586
Adversarial Attacks Against Uncertainty Quantification
Machine-learning models can be fooled by adversarial examples, i.e., carefully-crafted input perturbations that force models to output wrong predictions. While uncertainty quantification has been recently proposed to detect adversarial inputs, under the assumption that such attacks exhibit a higher prediction uncertainty than pristine data, it has been shown that adaptive attacks specifically aimed at reducing also the uncertainty estimate can easily bypass this defense mechanism. In this work, we focus on a different adversarial scenario in which the attacker is still interested in manipulating the uncertainty estimate, but regardless of the correctness of the prediction; in particular, the goal is to undermine the use of machine-learning models when their outputs are consumed by a downstream module or by a human operator. Following such direction, we: \textit{(i)} design a threat model for attacks targeting uncertainty quantification; \textit{(ii)} devise different attack strategies on conceptually different UQ techniques spanning for both classification and semantic segmentation problems; \textit{(iii)} conduct a first complete and extensive analysis to compare the differences between some of the most employed UQ approaches under attack. Our extensive experimental analysis shows that our attacks are more effective in manipulating uncertainty quantification measures than attacks aimed to also induce misclassifications.
Emanuele Ledda, Daniele Angioni, Giorgio Piras, Giorgio Fumera, Battista Biggio, Fabio Roli
2023-09-19T12:54:09Z
http://arxiv.org/abs/2309.10586v1
# Adversarial Attacks Against Uncertainty Quantification ###### Abstract Machine-learning models can be fooled by adversarial examples, i.e., carefully-crafted input perturbations that force models to output wrong predictions. While uncertainty quantification has been recently proposed to detect adversarial inputs, under the assumption that such attacks exhibit a higher prediction uncertainty than pristine data, it has been shown that adaptive attacks specifically aimed at reducing also the uncertainty estimate can easily bypass this defense mechanism. In this work, we focus on a different adversarial scenario in which the attacker is still interested in manipulating the uncertainty estimate, but regardless of the correctness of the prediction; in particular, the goal is to undermine the use of machine-learning models when their outputs are consumed by a downstream module or by a human operator. Following such direction, we: (i) design a threat model for attacks targeting uncertainty quantification; (ii) devise different attack strategies on conceptually different UQ techniques spanning for both classification and semantic segmentation problems; (iii) conduct a first complete and extensive analysis to compare the differences between some of the most employed UQ approaches under attack. Our extensive experimental analysis shows that our attacks are more effective in manipulating uncertainty quantification measures than attacks aimed to also induce misclassifications. ## 1 Introduction Machine Learning (ML) covers nowadays multiple applications, including safety-critical domains such as medical diagnosis, self-driving cars, and video surveillance. Leaning towards ML-based systems tailored to cope with such scenarios, the research community also focused on enhancing the _trustworthiness_ of such systems. In this regard, Uncertainty Quantification (UQ) methods have been fostered throughout the years, establishing themselves as methods capable of assessing the degree of _uncertainty_ of the predictions made by an ML-based system [12]. Unfortunately, ML models have been found to be susceptible to carefully-crafted input samples aimed at causing wrong predictions, known as _adversarial examples_[1, 24]. Several defensive countermeasures have been developed, aiming to build robust models, including _adversarial training_[21] and also uncertainty quantification. In particular, UQ has been proposed as a _defense_ technique for adversarial attack _detection_ at test time, based on the rationale that attack samples aimed at causing wrong predictions are characterised by high uncertainty. However, analogously to other defense techniques, some works have shown that it is indeed possible to generate _adaptive_ attacks capable of causing wrong predictions and at the same time of evading detection, in this case by reducing the corresponding uncertainty measure [4, 11]. In this work, we focus on a different adversarial scenario in which the attacker is still interested in manipulating the uncertainty estimate, but regardless of the correctness of the prediction; in particular, the goal is to undermine the use of UQ techniques for ML models when their outputs are consumed by a downstream module or by a human operator. For instance, in the medical domain, a doctor may avail of uncertainty for distinguishing if an ML prediction (_i.e._, a tumor segmentation) is reliable enough or requires more attention from the doctor. 
Having an estimate about the reliability of the system's predictions would allow a healthcare operator to accurately weigh its time, giving an additional effort when interpreting more uncertain cases. Another example is a crowd counting tool that processes in real-time video streams coming from a video surveillance network to support law enforcement agency officers in crowd monitoring. Such a system may provide an estimate of the uncertainty of the predicted crowd count (_e.g._, in terms of a 95% confidence interval) to make its users aware of the reliability of its predictions. This may allow detecting out-of-distribution (OOD) frame sequences (_e.g_., due to extreme lighting conditions) that are likely to be characterized by high uncertainty, whose corresponding predicted count would be disregarded by the users. We argue that, in application scenarios like the ones described above, an attacker may be interested in undermining _only_ the UQ component, regardless of the predictions. For the medical domain case, _e.g_., an attacker may target an ML system to increase the uncertainty associated with its predictions, resulting in an unnecessary additional workload for the operator in charge (_e.g_., evaluating a tumor diagnosis). On the crowd counting side, lowering the level of uncertainty may lead the LEA operator to think the count is always correct, even in the presence of under-estimated predictions caused, _e.g_., by inadequate illumination or extreme weather conditions (and, hence, increasing the odds of casualties). We thus believe that focusing on attacks targeting the sole uncertainty can be highly relevant for safety applications and that a proper understanding of such attacks, to the best of our knowledge, is still missing. Consequently, the state of the art lacks a practical implementation and empirical evaluation of UQ techniques under attack. In this work, we move the first steps towards this direction by providing the following contributions: * We design a threat model for attacks targeting UQ and yielding _wrong uncertainty estimates_; * We develop and implement different attack strategies on conceptually different UQ techniques spanning over both classification and semantic segmentation tasks; * We conduct a first complete and extensive analysis to compare the differences between some of the most employed UQ approaches under attack. ## 2 Background and Related Work We summarize here the essential concepts of UQ techniques, adversarial machine learning, and overview existing work on attacks against UQ. ### Uncertainty Quantification In classification problems characterized by a \(d\)-dimensional feature space \(\mathcal{X}\subseteq\mathbb{R}^{d}\) and a \(L\)-dimensional output space \(\mathcal{Y}\subseteq\mathbb{R}^{L}\) being \(L\) the number of classes, an ML-based predictor implements a decision function \(f^{\theta}:\mathcal{X}\mapsto\mathcal{Y}\) mapping an input vector to an output categorical distribution, where the parameters \(\theta\) are obtained by minimizing a given loss function on a training set \(\mathcal{D}\) of \((\mathbf{x},\mathbf{y})\) pairs. Predictions are subject to two kinds of uncertainty: **aleatoric** uncertainty (a.k.a. _data_ uncertainty), due to the inherent randomness of the class label (i.e., overlapping class-conditional distributions), and **epistemic** uncertainty (a.k.a. 
_model_ uncertainty), due to a lack of knowledge on the "correct" prediction model (such as the DNN's weights), which can be caused, e.g., by a training set that is not entirely representative for a given task. UQ techniques aim to associate with each prediction a numerical estimate of its uncertainty [12].

**Probabilistic approaches** - Bayesian Neural Networks (BNNs) are a well-known probabilistic model, which naturally allow assessing the uncertainty of their predictions [20]. They assume a prior \(p(\theta)\) over the model's parameters and marginalize over it to compute a predictive distribution on a given training set \(\mathcal{D}\) by _Bayesian Model Averaging_ (BMA):

\[f_{BMA}=p(y|x,\mathcal{D})=\int_{\theta}p(y|x,\theta)\cdot p(\theta|\mathcal{D})\text{d}\theta \tag{1}\]

Since Eq. 1 is intractable in practice, an approximating distribution \(q(\theta)\) is commonly used, minimizing its divergence from the actual distribution. In this work we focus on two common approximations: Monte-Carlo Dropout and Deep Ensemble.

**Monte-Carlo (MC) dropout** approximation [8] consists of activating dropout at test time, either in an ad hoc way [8] (namely embedded dropout), using the dropout rate found during training (where dropout is also used for regularization), or in a post hoc way [19, 16] (namely dropout injection), i.e., on already trained networks. An alternative solution, which has been shown capable of outperforming MC dropout, is based on **Deep Ensembles** [15], which train multiple DNNs starting from random weights and approximate BMA by combining the corresponding predictions obtained from the different instances of \(\theta\). In both cases, for a given sample \(\mathbf{x}\) one can compute its corresponding uncertainty \(\mathcal{U}(\mathbf{x})\) by computing a statistic (e.g., the variance) over the Monte-Carlo predictions. In addition, we recall many other state-of-the-art Bayesian methods. Among them, we can find Concrete Dropout [9] (an improvement of MC-dropout for finding the dropout rate during training), BayesByBackprop [3], and the whole class of Laplace Approximations [20, 14] (which are one of the most prominent post hoc UQ techniques).

**Deterministic approaches** - A drawback of Bayesian models is their computational cost, due to the multiple forward passes required to obtain a point-wise prediction. Several deterministic approaches have been proposed to deal with this issue, such as Deterministic Uncertainty Quantification (DUQ) [25], Spectral-normalized Neural Gaussian Process (SNGP) [17] and Deep Deterministic Uncertainty (DDU) [22]. For instance, for \(L\)-class classification problems, DUQ learns \(L\) centroids in the feature space \(\mathcal{X}\) and, for any input \(\mathbf{x}\), it returns an \(L\)-dimensional vector with the distance between the feature vector (defined as \(f^{\theta}(\mathbf{x})\) with abuse of notation) and the centroids, computed using a Radial Basis Function (RBF) kernel. The predicted class is the one associated with the closest centroid, and the corresponding distance is interpreted as the uncertainty measure \(\mathcal{U}(\mathbf{x})\).

### Adversarial Machine Learning

ML models have been found to be susceptible to adversarial attacks [24], i.e., input samples carefully crafted to be misclassified. Several attacks and defenses have been proposed so far. Two seminal yet still widely used attack strategies are the Fast-Gradient Sign Method attack (FGSM) [10] and the Projected Gradient Descent attack (PGD) [21].
Under a "standard" untargeted \(\ell_{\infty}\) threat model with a perturbation budget \(\epsilon\), FGSM crafts an adversarial example \(\mathbf{x}^{\star}\) by adding to a given sample \(\mathbf{x}\) an \(\ell_{\infty}\) norm perturbation of magnitude \(\epsilon\), pointing to the steepest ascent direction of the loss \(L\) from the point \(\mathbf{x}\): \[\mathbf{x}^{\star}=\mathbf{x}+\epsilon\cdot\mathrm{sgn}\left(\nabla_{\mathbf{x}}L\left(f_ {\theta}(\mathbf{x}),\mathbf{y}\right)\right)\, \tag{2}\] where \(\nabla\) denotes the gradient operator. The PGD attack implements an iterative version of FGSM by projecting after each iteration the obtained perturbation to the feasible domain \(\Gamma=\{\mathbf{x_{t}}\in\mathcal{X}:||\mathbf{x_{t}}-\mathbf{x_{0}}||_{\infty}<\epsilon\}\): \[\mathbf{x_{t+1}}=Proj_{\Gamma}(\mathbf{x_{t}}+\alpha\cdot\mathrm{sgn}\left(\nabla_{ \mathbf{x_{t}}}L\left(f_{\theta}(\mathbf{x_{t}}),\mathbf{y}\right)\right)), \tag{3}\] On the defense side, the par excellence technique is Adversarial Training [21]: \[\min_{\theta}\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{D}}\left[\max_{\delta\in B (\mathbf{x},\varepsilon)}\mathcal{L}(\theta,\mathbf{x}+\mathbf{\delta},\mathbf{y})\right]\, \tag{4}\] where \(B(\mathbf{x},\varepsilon)\) denotes the set of allowed adversarial perturbations, bounded by \(\epsilon\). Eq. 4 amounts to solve a min-max optimization problem, where the worst-case loss \(\mathcal{L}\) (inner problem) has to be minimized (outer problem). The goal is to train the model to be robust to adversarial examples. ### Evasion Attacks Involving Uncertainty Previous work in the adversarial machine learning field has considered UQ only as a defense strategy, as a means for detecting adversarial samples crafted for _evading_ a classifier, i.e., to cause wrong predictions. For instance, the authors of [7] proposed to assess uncertainty as the variance computed using embedded MC dropout (with a dropout rate of \(0.5\) after each convolutional layer). Using a detection threshold \(\tau=0.02\), such that samples whose variance is below it are rejected as adversarial, 96% of adversarial examples were correctly identified and rejected on the CIFAR-10 dataset, with a false-positive rate of 1%. Following the usual arms race approach, subsequent works devised evasion attacks capable of bypassing uncertainty-based defenses. The attack presented in [4] manipulates a given sample to reduce the corresponding MC sample variance below the detection threshold and consequently induces a misclassification. On the same CIFAR-10 dataset, it bypassed the above defense with a success rate of 98%. However, this result was attained at the expense of a notably large perturbation size. The authors of [11], proposed the "High-Confidence Low-Uncertainty" attack. For a given sample \(x\), the underlying idea is to craft an adversarial example \(x+\delta\) pushing the prediction confidence for the target (wrong) class over 95% and simultaneously keeping the corresponding uncertainty not higher than the one of the original sample. Previous work involving UQ considered only _evasion_ attacks aimed at causing _wrong predictions_, where uncertainty measures were used and manipulated only as detection tools. 
In this work, we focus instead on a different attack scenario where the goal is to manipulate uncertainty measures _per se_, i.e., to produce _wrong uncertainty estimates_, thus undermining their original purpose of providing an assessment of the reliability of ML-based systems predictions, to be used by a downstream processing module or by a human operator, regardless of the correctness of the predictions. Additionally, we extensively test attacks to diverse UQ techniques to assess how such attacks are supposed to mutate depending on the given uncertainty-related scenario. ## 3 Uncertainty Quantification Under Attack In this section we formally present our threat model, where the attacker's goal is to produce _wrong uncertainty estimates_ regardless of the correctness of the prediction, develop a possible implementation for classification tasks, and show how it can be extended to other tasks using semantic segmentation as a case study. ### Threat Model Evasion attacks aim at getting a given sample misclassified, with respect to its ground-truth label. However, UQ techniques do not have a ground truth, and thus it is not straightforward to define what a "wrong" uncertainty estimate is. Ideally, higher uncertainty values should be associated with higher misclassification probability. The stronger such statistical correlation is, the more "correct" the uncertainty measure will get. Accordingly, the considered attack against UQ should result in _breaking this statistical correlation up_. **Taxonomy of attacks to uncertainty quantification** - According to the above threat model, two possible kinds of attacks can be identified: * Overconfidence Attack (O-attack): Its goal is to _re duce_ the uncertainty measure of a given predictor, thus tricking an ML-based system into being overconfident. This will impact in particular wrong predictions and out-of-distribution (OOD) samples, resulting in undermining the UQ module _integrity_. * Underconfidence Attack (U-attack): The goal of this attack, conversely, is to _increase_ the uncertainty measure, which would result in considering all the predictions as unreliable, which in turn would lead the downstream modules or human operators to disregard the outputs of an ML-based system. We, therefore, classify the U-attack as a threat undermining the _availability_ of an ML-based system. Theoretically, one can formulate the problem as the search for the perturbation \(\delta\), bounded by \(\epsilon\), minimizing (O-attack) or maximizing (U-Attack) the uncertainty estimate \(\mathcal{U}(\mathbf{x}+\mathbf{\delta})\): \[\operatorname*{argmin}_{\mathbf{\delta}}\ \gamma\cdot\mathcal{U}(\mathbf{x}+\mathbf{ \delta}),\quad s.t.\|\mathbf{\delta}\|_{p}<\epsilon\, \tag{5}\] where \(\gamma\in\{-1,1\}\) controls the attack objective: \(\gamma=-1\) corresponds to the U-attack, whereas \(\gamma=1\) corresponds to the O-attack. While the threat model encompasses both attacks, in the rest of our work we focus on the O-Attack, being the latter (just like "standard" evasion attacks [2]) a violation of the integrity of the ML-based system. Therefore, in the following section, we propose a possible implementation of the O-Attack. ### Attacking Probabilistic Models The attack strategy of Eq. 5 can be implemented both for probabilistic and deterministic UQ models. **Minimum Variance Attack** - In probabilistic models, the prediction and uncertainty value for a given sample are obtained by combining a set of predictions. 
Such models commonly leverage uncertainty measures such as predictive variance (epistemic), entropy (aleatoric) or, less frequently, mutual information [23] (either epistemic or predictive). The intrinsic probabilistic nature of such methods requires attacks to rely on expectations over a set of MC samples. In this context, a first possible solution consists of modifying a given input sample \(x\) in such a way that the predictor's probabilistic outcomes are as _concordant_ as possible. This can be formulated as a direct minimization of the predictive variance; accordingly, we refer to this attack as Minimum Variance Attack (MVA): \[\operatorname*{argmin}_{\mathbf{\delta}}\ \mathbb{E}_{S}[(\mathbf{x}+\mathbf{ \delta})^{2}]-\mathbb{E}_{S}[(\mathbf{x}+\mathbf{\delta})]^{2},\quad s.t.\|\mathbf{\delta} \|_{p}<\epsilon, \tag{6}\] \[\mathbb{E}_{S}[(\mathbf{x}+\mathbf{\delta})^{2}]:= \frac{1}{S}\sum_{s=1}^{S}f^{\theta_{s}}(\mathbf{x}+\mathbf{\delta})^{ \intercal}\cdot f^{\theta_{s}}(\mathbf{x}+\mathbf{\delta}),\] \[\mathbb{E}_{S}[(\mathbf{x}+\mathbf{\delta})]^{2}:= \mathbb{E}_{S}(\mathbf{x}+\mathbf{\delta})^{\intercal}\cdot\mathbb{E}_{S} (\mathbf{x}+\mathbf{\delta}),\] where \(\mathbf{\delta}\) denotes the perturbation, \(S\) the Monte-Carlo sample size, and \(f^{\theta_{s}}\) the predictor corresponding to the parameters \(\theta_{s}\) obtained from the \(s\)-th Monte Carlo sample (see Sect. 1). Finally, \(\mathbb{E}_{S}(\mathbf{x}+\mathbf{\delta})\approx f_{BMA}(\mathbf{x}+\mathbf{\delta})\) is the Monte-Carlo approximation of the BMA using the set of size \(S\). **Auto-Target Attack** - Albeit the attack described above aims at minimizing the uncertainty measure directly, there are other ways to optimize Eq. (5). A simple yet effective alternative idea has indeed been proposed in [4] to evade the detection of adversarial examples (modeled as an uncertainty threshold). To this aim, the authors proposed to get the probabilistic model's average prediction closer to the most likely incorrect class; since it is equivalent to choosing an automatic target, we refer to this attack as Auto-Target Attack (ATA). A possible formulation can be obtained by minimizing the Cross-Entropy (CE) loss [4]: \[\operatorname*{argmin}_{\mathbf{\delta}}\ -\log(\mathbb{E}_{S}(\mathbf{x}+\mathbf{ \delta})_{c})\,\quad s.t.\|\mathbf{\delta}\|_{p}<\epsilon\, \tag{7}\] where \(c\) denotes the automatically chosen target class and \(\mathbb{E}_{S}(\mathbf{x}+\mathbf{\delta})\) the expectation of the predictions over \(S\) Monte-Carlo forward passes. Bringing the average of a prediction's set closer to a certain target corresponds to getting all the predictions closer to a common target. Albeit the above approach was originally formulated as a C&W attack [5], we point out that it can be extended to several attacks. As mentioned in previous work [4], the above attack required a particularly large perturbation to be effective: such a relatively high perturbation was necessary to evade the model's predictions, besides reducing the uncertainty measure. Indeed, using ATA with the primary goal of evading the predictions does not result in a sudden variance minimization but will instead take two stages: in the first stage, after a warm-up phase, the variance starts growing as long as the prediction flips from correct to incorrect; in the second stage, the probability of the class being maximized overtakes the others, leading to a further stabilization and, thus, to the variance minimization. 
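To make the MVA objective of Eq. 6 concrete, the sketch below descends the Monte-Carlo predictive variance with a PGD-style loop; it is a simplified illustration under assumed names (e.g., `predictive_variance`, dropout kept active at test time), not our exact attack code, and in practice the logarithm of the variance is minimized (see Sect. 4.1).

```python
import torch
import torch.nn.functional as F

def predictive_variance(model, x, n_mc=30):
    """Eq. 6: variance of the softmax outputs over n_mc stochastic forward passes
    (MC dropout kept active), summed over the classes."""
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_mc)])  # (S, B, L)
    return probs.var(dim=0, unbiased=False).sum(dim=-1)                      # (B,)

def minimum_variance_attack(model, x, eps=8/255, alpha=2e-3, steps=150, n_mc=30):
    """O-attack that directly minimizes the epistemic uncertainty estimate."""
    model.train()                                   # keep dropout layers stochastic
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        u = predictive_variance(model, x_adv, n_mc).mean()
        grad = torch.autograd.grad(u, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                       # descend the variance
            x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)  # stay in the eps-ball
    return x_adv.detach()
```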
Therefore, an attacker interested in the efficacy of such an attack should favor correctly classified clean samples over misclassified adversarial examples with higher uncertainty estimates. **Stabilizing Attack** - We further improve this simple idea by taking the most likely class indiscriminately (instead of the most likely incorrect one) since we are not interested in the correctness of the prediction. The effect of our formulation, which we name _Stabilizing Attack_ (STAB), is to get every MC prediction closer to the mean basin of attraction, thus _stabilizing_ the predictions, which results in turn to lower variance and average prediction's entropy. ### Attacking Deterministic Models Due to the nature of deterministic models, an attacker can evade their associated uncertainty measure by focusing on a single parameter configuration \(\theta\), without the need of MC sampling. As an example, we show how our STAB attack can be extended to the widely used DUQ technique [25] and other deterministic methods. For a deterministic UQ model, it is sufficient to craft the adversarial sample \(\mathbf{x^{*}}\) to make it approach a centroid \(\mathbf{e}_{c}\) associated to a target class \(c\): \[\operatorname*{argmin}_{\mathbf{\delta}}\;K(f^{\theta}(\mathbf{x}),\mathbf{e}_{c})\;,\quad s.t.\|\mathbf{\delta}\|_{p}<\epsilon\;, \tag{8}\] where \(f^{\theta}(\mathbf{x})\) denotes the feature vector parameterized with \(\theta\), and \(K\) the RBF kernel. As mentioned above about probabilistic models, also the efficiency of attacks against deterministic models is affected by the choice of a proper target. In the case of DUQ, the attack can be crafted more easily by targeting the class _nearest_ to the centroid. Due to the deterministic nature of the considered models, we argue that in the absence of an adversarial training technique, it is quite easy for the attacker to craft the desired attack sample. Furthermore, the direct correspondence between the uncertainty measure and the distance to the closest centroid makes DUQ even less robust to attacks, since attacking the prediction also results in minimizing the uncertainty, with no additional perturbation required. ### Case Study: Semantic Segmentation We have shown how attacks targeting the uncertainty measure can be formulated for standard classification problems. Here we show how they can be extended to complex computer vision problems such as semantic segmentation, which can be seen as a multivariate classification problem, where a class label is assigned to each pixel. In this task, uncertainty is computed in a pixel-wise manner. To this aim, two commonly used metrics for aleatoric and epistemic uncertainty are the average prediction entropy and the prediction variance, respectively [13]. While recalling that it is common for segmented objects to present high uncertainty along the edges, we directly apply our attack formulation of Eq. 6 to semantic segmentation and, indeed, we find it challenging to decrease the uncertainty measure around the _edges_ of the segmented objects (see Sect. 4.2). We hypothesize this is due to the fact that the network is "forced" to abruptly change prediction around the edges, which are therefore inherently characterised by high uncertainty. We, therefore, devised an application-specific attack to semantic segmentation. 
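Before detailing that segmentation-specific attack, we give below one possible PyTorch-style reading of the Stabilizing Attack for classification: the target is simply the class currently ranked first by the Monte-Carlo average, and the cross-entropy of that average is minimized. Names and hyperparameters are illustrative assumptions rather than the exact code used in our experiments.

```python
import torch
import torch.nn.functional as F

def stabilizing_attack(model, x, eps=8/255, alpha=2e-3, steps=150, n_mc=30):
    """STAB: pull every Monte-Carlo prediction towards the currently most likely class
    (correct or not), which jointly lowers the predictive variance and entropy."""
    model.train()                                   # dropout active for MC sampling
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        probs = torch.stack([F.softmax(model(x_adv), dim=-1) for _ in range(n_mc)])
        mean_p = probs.mean(dim=0)                  # Monte-Carlo average prediction
        target = mean_p.argmax(dim=-1).detach()     # auto-chosen target, no "wrong class" constraint
        loss = F.nll_loss(torch.log(mean_p + 1e-12), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()
            x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()
```

For a deterministic method such as DUQ, the inner loop reduces to a single forward pass, with the RBF distance to the chosen centroid (Eq. 8) playing the role of the loss.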
The underlying rationale is that a pixel closely surrounded by several pixels from different classes exhibits a correlation to each of such classes, whereas a pixel surrounded by a region mostly belonging to a single class exhibits a high correlation only with that specific class. Accordingly, we can force the network to predict a single, identical class for all the image pixels by minimizing the pixel-wise cross-entropy: \[\operatorname*{argmin}_{\mathbf{\delta}}\;-\sum_{\omega\in\Omega}\log(f(\mathbf{x}+ \mathbf{\delta})_{\omega,c})\;,\quad s.t.\|\mathbf{\delta}\|_{p}<\epsilon\;, \tag{9}\] where \(c\) denotes the index of the target class, \(\Omega\) the set of pixels, and \(f(\mathbf{x}+\mathbf{\delta})_{\omega}\) the predicted probability vector for the pixel \(\omega\). The target class \(c\) should be chosen as the one that minimizes the uncertainty. To this aim, as a rule of thumb, one can choose the _most representative class_, i.e., the class corresponding to the majority of pixels in the predicted segmentation map. The above criterion presents multiple advantages. First, the attacker will be trivially required to flip the smallest number of pixels, as the majority of them are already assigned to the target class. Secondly, considering the strong overall correlation induced by massive occurrences of pixels of the most representative class \(c\) on the image, a pixel of a different class can be misled towards \(c\) with much more ease compared to a different and less impactful class. We refer to this attack with the name _Uniform Segmentation Target Attack (UST)_. ## 4 Experimental Analysis We empirically evaluated the proposed O-Attack against several UQ techniques both in classification and semantic segmentation tasks, under two different operational scenarios: the traditional setting of _independent and identically distributed_ (IID) data, which is practically implemented using training and testing data from the same data set, and the case of _out-of-distribution_ (OOD) data, which was simulated using different data sets for training and for testing. ### Experimental Setup **Data sets** - We used CIFAR-10 for IID experiments, whereas for OOD experiments we used CIFAR-10 for training and CIFAR-100 for testing. To evaluate the performance under the OOD setting, we used accuracy-rejection curves evaluated on a mixed testing set made up of 600 CIFAR-10 samples and 900 CIFAR-100 samples. We further assessed the O-Attack in a semantic segmentation task, on the PASCAL VOC data set [6]. **UQ techniques and models** - We considered four different UQ methods: **MC dropout**[8] (implemented both in ad hoc and post hoc fashion), **Deep Ensemble**[15] and **DUQ**[25]. We also considered three DNN architectures to implement the models: ResNet18, ResNet34 and Resnet50. We trained 9 different versions of ResNet34 and Resnet50 and 10 versions of ResNet18: one baseline version for the post hoc dropout, 3 versions with ad hoc dropout (using a dropout rate \(\phi\in[0.1,0.3,0.5]\), and five classic ResNet's for constructing a deep ensemble. For models including MC dropout-based architectures, we added the dropout rates after each convolutional and linear layer, thus obtaining a probability distribution over each weight. For ResNet18, we trained an additional network used as a feature extractor for DUQ. For the semantic segmentation task, we used the pre-trained Torch implementation of a Fully Convolutional Network (FCN) [18]. 
We then applied post hoc dropout with a dropout rate of \(0.1\) after each block of four convolutions, since a too high randomization may induce prediction deterioration when using injected dropout [16]. **Attack implementation** - We based the implementation of our attack (see Sect. 3) on the PGD attack with \(\ell_{\infty}\) norm, using \(150\) iterations, MC samples size of \(30\) and step size of \(2\cdot 10^{-3}\) for the case of probabilistic UQ methods, and \(10\) iterations with a step size of \(1\cdot 10^{-3}\) for the deterministic method DUQ. For the MVA attack of Eq. 6, we minimize the logarithm of the variance to attain better performances. We implemented the attacks on CIFAR-10 with \(\epsilon\) ranging from \(1/255\) to \(8/255\). This allowed us to plot the associated security evaluation curves showing how the uncertainty measure changes as a function of \(\epsilon\). For semantic segmentation, we still use the PGD attack with \(100\) iterations, a step size of \(1\cdot 10^{-3}\), set \(\epsilon\) to \(2/255\) and MC sample size of \(20\). **Uncertainty measures** - To attack probabilistic UQ methods, we use MC sample size of \(100\) (for both classification and segmentation) to estimate the **predictive variance** and the **entropy** as measures of _epistemic_ and _aleatoric_ uncertainty, respectively. To attack DUQ, we use the distance from the closest centroid to measure epistemic uncertainty. ### Experimental Results We first present and discuss the results attained by attacking _probabilistic_ UQ, for both IID and OOD data, then the ones attained for the _deterministic_ DUQ method, and finally the results related to semantic segmentation. **Probabilistic UQ methods, IID setting** - Fig. 1 shows the results of the experiments conducted on CIFAR-10, in the IID setup, for all the considered probabilistic UQ methods. The Minimum Variance Attack (MVA) and Stabling Attack (STAB) are conceived to minimize the uncertainty measure. MVA focuses on minimizing epistemic uncertainty, whereas STAB focuses on the predictive measure, thus minimizing both epistemic and aleatoric uncertainty. Interestingly, although not surprisingly, we can see that STAB turned out to be more effective in minimizing aleatoric uncertainty. In fact, pushing towards more stable predictions ultimately yields the double effect of increasing the target class probability and minimizing the entropy. However, for both attacks, the clean accuracy does not suffer any decline. Whereas ATA is initially less efficient than STAB (since it attempts to induce misclassifications) both techniques stabilize as the attack proceeds. Such ATA behavior is caused by the initial warm-up phase described in Sect. 3.2, where the predictions necessarily cross the boundary before being uniformly pushed towards the same class. However, there are still some differences between the two techniques, indicating that ATA does not necessarily converge to STAB's performances for \(\epsilon=8/255\) (e.g., on post hoc dropout). For what concerns the comparison between UQ methods based on ad hoc and post hoc dropout, we did not observe any significant difference. From a broader perspective, MVA attacks appear to better fit post hoc dropout, whereas ATA seems more effective for ad hoc dropout. However, in both cases, STAB outperforms MVA and ATA for both aleatoric and epistemic uncertainty. Overall, post hoc dropout attains a higher starting variance, which results in more difficulties in zeroing the uncertainty. 
Although Deep Ensembles are widely recognized as highly accurate techniques, in our experiments we notice a conflicting trend. Starting from uncertainty levels comparable to ad hoc dropout, we observe a considerable decline in both aleatoric and epistemic uncertainty. In fact, all the attacks easily reduce the variance to the order of magnitude of \(10^{-6}\) with a perturbation of \(\epsilon=8/255\), whereas ad hoc dropout retains an order of magnitude of \(10^{-4}\). As shown in Fig. 1 and already stated in [4], a "standard" attack aiming to cause wrong predictions (denoted as ATA _(acc)_) can also reduce uncertainty. However, by looking at Fig. 1, we find out that the criterion for choosing the _best_ adversarial example at each iteration is crucial. Indeed, in a traditional set-up, when we find an adversarial example fooling the prediction (i.e., misclassified by the model), we consider it a "success" and then save it. Nevertheless, this strategy is sub-optimal when an attacker is interested in evading the uncertainty measure. Indeed, we observe a first warm-up phase where the sample's uncertainty increases and then a stabilization where it consistently decreases (as hypothesized in Sect. 3.2). Conversely, when always saving the sample with the lowest uncertainty, the uncertainty measures decrease consistently, as expected in this setting.

Figure 1: Behaviour of classification accuracy, aleatoric and epistemic uncertainty on CIFAR-10 under an IID setup, using a ResNet18 model with MC-dropout (with a dropout rate of 0.3) and Deep Ensembles, under different attacks, as a function of \(\epsilon\). More architectures and dropout rates are present in the supplementary material.

**Probabilistic UQ methods, OOD setting** - We focused on the STAB and MVA attacks applied to post hoc dropout, ad hoc dropout and Deep Ensemble. Fig. 2 shows the corresponding accuracy-rejection curves. The green line, for \(\epsilon=0\), shows that all UQ methods exhibit an adequate capability of detecting OOD samples. However, as the perturbation \(\epsilon\) applied to OOD samples increases, their effectiveness decreases, up to a point where they start rejecting IID samples before OOD ones, which indicates that the estimated uncertainty is higher for IID than for OOD samples: this is just the opposite behaviour to the desired one (i.e., it indicates an attack success). The above results clearly show that the considered UQ methods, including Deep Ensembles, are vulnerable to adversarial attacks, also in the presence of OOD samples. We also point out that for ad hoc dropout, the MVA attack turned out to be less effective than STAB, which, with \(\epsilon=8/255\), completely breaks the other techniques. We finally argue that the robustness of Deep Ensembles could be improved by increasing the ensemble size (which was set to 5 in our experiments), although at the expense of an increase in processing cost.

**Deterministic UQ methods** - In Fig. 3 and Fig. 4 we can see the results of the IID and OOD (respectively) experiments using DUQ. Since deterministic methods do not perform MC sampling, attacks against them can be designed and implemented more easily. This leads to lower robustness against attacks targeting both uncertainty and predictions (where, as opposed to attacks on probabilistic models, no trade-off is needed). Nevertheless, more interesting behaviors can be observed when DUQ is used in the case of OOD samples, as seen from Fig. 4.
In this scenario, even small perturbations quickly deteriorate the quality of the uncertainty measure. Still, for larger perturbations, the accuracy does not drop to zero: such behavior may indicate that deterministic methods assign larger uncertainty values to OOD samples, making it challenging to get the perturbed samples very close to a target centroid.

Figure 2: Accuracy-rejection curves of a ResNet18 model under the STAB and MVA attacks against MC-dropout (with a dropout rate of \(0.3\)) and against Deep Ensembles, as a function of \(\epsilon\), in an OOD setting simulated with a mixture of 600 CIFAR-10 images and 900 CIFAR-100 testing images.

Figure 3: Classification accuracy and uncertainty of a ResNet18 used as a feature extractor for the DUQ method on CIFAR-10 in the IID setting, under two different attacks, as a function of \(\epsilon\).

Figure 4: Accuracy-rejection curves attained in an OOD setting (see Sect. 4.1) by a ResNet18 model using the DUQ method, under the STAB attack.

**Semantic segmentation** - We finally show in Fig. 5 the results obtained when attacking UQ methods used for a semantic segmentation task. Using a clean image as input, we see that the considered model is not very accurate in correctly segmenting the whole object. Nevertheless, high uncertainty values are correctly assigned to regions where segmentation errors occur, corresponding to the object edges and to missing objects. MVA, albeit reducing the overall epistemic uncertainty, is less effective in reducing the uncertainty on the edges. On the other hand, the attack aimed at obtaining a uniform segmentation map whose target is the most representative class (usually, the "background" class), which we refer to as UST (Bg) for convenience, turns out to be effective in reducing both the epistemic and the aleatoric uncertainty for each pixel. However, attacks aimed at evading the predictions using a similar strategy, i.e., assigning a wrong label (referred to as UST (Fb), _i.e_. "Full Break"), did not achieve a similar reduction in uncertainty, even though they evaded the predictions over a large region of the image.

## 5 Conclusions and Future Work

In this work, we first proposed and modeled adversarial attacks against UQ techniques used by ML predictors, aimed at producing _wrong uncertainty estimates_, regardless of the correctness of the prediction. We formally defined a taxonomy and a threat model and implemented several possible attacks against different UQ techniques, both in classification and in semantic segmentation tasks. From our preliminary results on classification tasks we can draw the following conclusions. Generally speaking, **UQ techniques are not robust to adversarial attacks**: they can be easily manipulated using attacks specifically crafted to evade the uncertainty measure. Surprisingly, Deep Ensembles turned out to be the least robust UQ technique against adversarial attacks targeting uncertainty. On the other hand, MC dropout tends to be the most robust among the analyzed methods (as we can see from the experiments on OOD data). Our preliminary example on semantic segmentation shows that attacks against UQ methods can be effective also in other, more complex CV tasks.
We finally point out the following directions for future work: (i) implementing and investigating under-confidence attacks (U-attacks); (ii) exploring the proposed attacks against a wider range of UQ methods; (iii) analyzing black-box attacks; and (iv) exploring the attack transferability between different UQ methods; (v) investigating adversarial training and other robust defense techniques to counter attacks against UQ. ## Acknowledgments This work has been supported by the European Union's Horizon Europe research and innovation program under the project ELSA, grant agreement No 101070617; by Fondazione di Sardegna under the project "TrustML: Towards Machine Learning that Humans Can Trust", CUP: F73C22001320007; and by project SERICS (PE00000014) under the NRRP MUR program funded by the EU - NGEU. Emanuele Ledda, Daniele Angioni, and Giorgio Piras are affiliated with the Italian National Ph.D. in Artificial Intelligence, Sapienza University of Rome. They also acknowledge the cooperation with and support from the Pattern Recognition and Applications Laboratory of the University of Cagliari. Figure 5: Two examples of attacks against UQ in a semantic segmentation task. In each of the two groups of columns, from left to right: (i) the original clean image, (ii) the predicted segmentation maps, and the corresponding epistemic (iii) and aleatoric (iv) uncertainty maps. In the rows, from top to bottom: results obtained under normal operating conditions (with no attacks), and under the MVA, UST (Fb), and UST (Bg) attacks.
2309.07944
Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach
This paper addresses the challenge of generating Counterfactual Explanations (CEs), involving the identification and modification of the fewest necessary features to alter a classifier's prediction for a given image. Our proposed method, Text-to-Image Models for Counterfactual Explanations (TIME), is a black-box counterfactual technique based on distillation. Unlike previous methods, this approach requires solely the image and its prediction, omitting the need for the classifier's structure, parameters, or gradients. Before generating the counterfactuals, TIME introduces two distinct biases into Stable Diffusion in the form of textual embeddings: the context bias, associated with the image's structure, and the class bias, linked to class-specific features learned by the target classifier. After learning these biases, we find the optimal latent code applying the classifier's predicted class token and regenerate the image using the target embedding as conditioning, producing the counterfactual explanation. Extensive empirical studies validate that TIME can generate explanations of comparable effectiveness even when operating within a black-box setting.
Guillaume Jeanneret, Loïc Simon, Frédéric Jurie
2023-09-14T09:03:52Z
http://arxiv.org/abs/2309.07944v2
# Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach ###### Abstract This paper addresses the challenge of generating Counterfactual Explanations (CEs), involving the identification and modification of the fewest necessary features to alter a classifier's prediction for a given image. Our proposed method, **T**ext-to-**I**mage **M**odels for Counterfactual **E**xplanations (TIME), is a black-box counterfactual technique based on distillation. Unlike previous methods, this approach requires solely the image and its prediction, omitting the need for the classifier's structure, parameters, or gradients. Before generating the counterfactuals, TIME introduces two distinct biases into Stable Diffusion in the form of textual embeddings: the context bias, associated with the image's structure, and the class bias, linked to class-specific features learned by the target classifier. After learning these biases, we find the optimal latent code applying the classifier's predicted class token and regenerate the image using the target embedding as conditioning, producing the counterfactual explanation. Extensive empirical studies validate that TIME can generate explanations of comparable effectiveness even when operating within a black-box setting. ## 1 Introduction Recently, deep neural networks (DNN) have seen increased attention for their impressive forecasting abilities. The use of deep learning in critical applications, such as driving automation, made the scientific community increasingly involved in what a model is learning and how it makes its predictions. These concerns shed light on the field of Explainable Artificial Intelligence (XAI) in an attempt to "open the black-box" and decipher its induced biases. Counterfactual explanations (CEs) are an attempt to find an answer to this previous problem. They try answering the following question: _What do we need to change in \(X\) to change the prediction from \(Y\) to \(Z\)?_ Because CEs give intuitive feedback about what to change to get the desired result, two applications use these explanations: feedback recommendation systems and debugging tools. Take an automated loan approval system as an example. From a user's point of view, if it gets a negative prediction, the user would be more interested in knowing what plausible changes can be made to get a positive result, rather than having an exhaustive list of explanations for why the result is unfavorable. From the debugger's point of view, it can look for biases that were considered in the decision when they should not have been, thus revealing the classifier's weaknesses. While there are multiple ways to address this question for visual systems, _e.g_. by adding adversarial noise [16], the modifications must be sparse and comprehensive to provide insight into which variables the model is using. To this end, most studies for CEs use generative models, such as GANs [15], Denoising Diffusion Probabilistic Models (DDPMs) [19], or VAEs [30], as they provide an intuitive interface to approximate the image manifold and constrain the generation in an appropriate space. Although they have several advantages, training these generative models is cumbersome and may not yield adequate results, especially when the data is limited [27]. To this end, we expect that the use of large generative models trained on colossal datasets, such as LAION-5B [43], can provide a sufficient tool to generate CEs. 
On the one hand, these generative models have shown remarkable qualitative performance, an attractive feature to exploit. Second, since the generative model is already optimized, it can be used to capture data set specific concepts - _e.g_. textual inversion [12] captures the main aspects of a target object when subject to only three to five images. In this paper, we explore how to take advantage of Text \begin{table} \begin{tabular}{c|c c c c} \hline \hline Method & Model & Training & Specificity & Optim \\ \hline DiVE [39] & VAE & Days & Only DNN & Yes \\ STEEX [23] & GAN & Days & Only DNN & Yes \\ DiME [24] & DDPM & Days & Only DNN & Yes \\ ACE [25] & DDPM & Days & Only DNN & Yes \\ \hline TIME (Ours) & T2I & Hours & Black-Box & No \\ \hline \hline \end{tabular} \end{table} Table 1: **Advantages of the proposed methodology.** TIME uses a pre-trained T2I model and trains only a few textual embeddings, requiring hours of training instead of days. It does not require access to the target model (completely black-box) and does not involve any optimization during counterfactual generation. - specifically, using Stable Diffusion [10]. To do so, we take a distillation approach to transfer the learned information from the model into new text embeddings to align the concept class in text space. Second, we use inversion techniques [49] to find the optimal noise to recover the original instance. Finally, with our distilled knowledge, we denoise this optimal point to recover the final instance using the target label, thus generating the CE. This is advantageous because we can tackle the challenging scenario of explaining a black-box model, having access only to its predictions. Our proposed approach has three main advantages over previous literature, as shown in Table 1. First, we only train some textual embeddings, making the training efficient, while previous methods require training a generative model from scratch. Second, we do not require an optimization loop when generating the final counterfactual, which reduces the generation time. Finally, our explainability tool works in a completely black-box environment. While most modern approaches [23, 24, 25, 39, 51] are DNN-specific, because they rely on gradients, our approach, which uses only the output and input as cues, can be used to diagnose any model regardless of its internal functioning. This setting is crucial for privacy-preserving applications, such as medical data analysis, since eliminating access to the gradients could prevent data leakage [54], as it helps protect personal or confidential information. We summarize our contributions as follows1: Footnote 1: Code is available at [https://github.com/guillaumejs2403/TIME](https://github.com/guillaumejs2403/TIME) * We propose TIME: Text-to-Image Models for Counterfactual Explanations, using Stable Diffusion [40] T2I generative model to generate CEs. * Our proposed approach is completely black-box. * Our counterfactual explanation method based on a distillation approach does not require any optimization during inference, unlike most methods. * From a quantitative perspective, we achieve similar performance to the previous state-of-the-art, while having access only to the input and the prediction of the target classifier. ## 2 Related Work ### Explainable Artificial Intelligence The research branch of XAI broads multiple ways to provide insights into what a model is learning. 
As a bird's view analysis, there are two main distinctions between methods: _Interpretable by-design_ architectures, and _Post-Hoc_ explainability methods. The former searches to create algorithms that directly expose why a decision was made [2, 3, 5, 8, 22, 53, 23]. Our research study is based on the latter. _Post-hoc_ explainability methods study pretrained models and try to decipher the variables used for forecasting. Along these lines, there are saliency maps [4, 26, 37, 44], concept attribution [11, 14, 29], or distillation approaches into interpretable by-design models [13]. In this paper, we study the on-growing branch of CEs [48]. In contrast to previous methods, these explanations are simpler and more aligned with human understanding, making them appealing to comprehend machine learning models. ### Counterfactual Explanations The seminal work of Watcher [48] defined what a counterfactual explanation is and proposed to find them as a minimization problem between a classification loss and a distance loss. In the image domain, optimizing the image's raw pixels produces adversarial noises [16]. So, many studies based their work on Watcher [48]'s optimization procedure with a generative model to regularize the CE production, such as variational autoencoders [39], generative adversarial networks [23, 28, 32, 51, 24], and diffusion models [1, 24, 42, 25]. In contrast to these works, our proposed approach, TIME, is a distillation approach for counterfactuals. Our method does not require any optimization loop when building the explanation, since we transfer the learning into the T2I model. Furthermore, we do not require access to the gradients of the target model but only the input and output, making it black-box, unlike previous methods. Co-occurent works analyze dataset biases using T2I models to create distributional shifts in data [38, 47]. Although a valid approach to debug datasets, we argue that these approaches do not search what a model learned but instead a general strategy for the biases in datasets under distributional shifts (it is normal to misclassify a dog with glasses since the model was not trained to classify dog with glasses). Further, their proposed approaches are computationally heavy, since they require fine-tuning Large Language Models or optimizing each inversion step on top of Stable Diffusion. Instead, ours requires training a word embedding, and the inference merely requires Stable Diffusion without computing any gradients, which fits into a single small GPU. ### Customization with Text-to-Image Models Due to the interest in creating unimaginable scenarios with personalized objects, customizing T2I diffusion models has gained attention in recent literature. Textual Inversion [12] and following works [7, 34, 52, 17, 17] are popular approaches to learn to generate specific objects or styles by fine-tuning all or some part of the T2I model. Thus, the new concept can be used in a phrase such that the T2I model will synthesize it. One of the most difficult problems is editing real-world images with T2I models. The pioneer work of Song _et al_. [46] proposed a non-stochastic variant of DDPMs, called Denoising Diffusion Implicit Models (DDIM). Hence, a single noise seed yields the same image. So, to find an approximate noise, DDIM Inversion noises the image using the diffusion model. Yet, some problems arise with this approximation. 
So, novel works [33, 36] modify the inversion process by including an inner gradient-based optimization at each noising step, making it unfeasible when analyzing a bundle of images. Finally, Wallace _et al_. [49] proposed to modify the DDIM algorithm into a two-stream diffusion process, reaching a "perfect" inversion. We take advantage of these works and distill the learned information from a classifier to generate counterfactual explanations of real images, a step to interpret the target classifier. ## 3 Methodology This section explains the proposed methodology for generating counterfactuals using T2I generative models. In section 3.1, we briefly introduce some useful preliminary concepts of diffusion models. Then we describe our proposed method in a three-step procedure. First, we explain how to transfer what the classifier has learned into the generative model as a set of new text tokens (Section 3.2). Second, using recent advances in DDIM Inversion, we revert the image to its noise representation using the original prediction of the classifier. Finally, we denoise the noisy latent instance using the target label (Section 3.3). ### DDPM Preliminaries Diffusion models [19] are generative architectures that create images by iteratively _removing_ noise. DDPMs are based on two inverse Markov chains. The forward chain _adds_ noise, while the reverse chain _removes_ it. Thus, the generation process is reverse denoising, starting from a random Gaussian variable and removing small amounts of noise until a plausible image is returned. Formally, given a diffusion model \(\epsilon_{\theta}\) and a fixed set of steps \(T\), \(\epsilon_{\theta}\) takes as input a noisy image \(x_{t}\), the current step \(t\) to compute a residual shift, and a textual conditioning \(C\), in our case. For the generation, \(\epsilon_{\theta}\) updates \(x_{t}\) following: \[x_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(x_{t}-\frac{1-\alpha_{t}}{\sqrt{1- \tilde{\alpha}_{t}}}\epsilon_{\theta}(x_{t},t,C)\right)+\sigma_{t}\epsilon, \tag{1}\] where \(\sigma_{t}\), \(\alpha_{t}\) and \(\bar{\alpha}_{t}\) are some predefined constants, and \(\epsilon\) and \(x_{T}\) are extracted from a Gaussian distribution. This process is repeated until \(t=0\). To train a DDPM, for a given an image-text pair \((x,C)\), each optimization step minimizes the loss: \[L(x,\epsilon,t,C)=\|\epsilon-\epsilon_{\theta}(x_{t}(x,t,\epsilon),\,t,C)\|^{2}, \tag{2}\] with \[x_{t}(x,t,\epsilon)=\sqrt{\bar{\alpha}_{t}}\,x+\sqrt{1-\bar{\alpha}_{t}}\, \epsilon. \tag{3}\] The pioneering work of Ho _et al_. [19] focused on training and evaluating these models in the pixel space, making them computationally heavy. Latent Diffusion Models [40] proposed to reduce this burden by performing the diffusion process in the latent space of a Quantized Autoencoder [10]. Further, they augment the generation by using textual conditioning \(C\) at its core to steer the diffusion process, as well as increasing the quality of the generation using Classifier-Free Guidance [20] (CFG). The CFG [20]'s core modifies the sampling strategy in Eq. 1 by replacing \(\epsilon_{\theta}\) with \(\epsilon_{\theta}^{f}\), a shifted version defined as follows: \[\epsilon_{\theta}^{f}(x_{t},t,C):=(1+w)\,\epsilon_{\theta}(x_{t},t,C)-w\, \epsilon_{\theta}(x_{t},t,\varnothing), \tag{4}\] where \(\varnothing\) is the empty conditioning and \(w\) is a weighting constant, resulting in a qualitative improvement. 
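As a reference for the notation above, a minimal sketch of how the guided score of Eq. 4 enters the reverse update of Eq. 1 is given below; `eps_model` and the schedule constants stand in for Stable Diffusion's denoising U-Net and noise schedule, and are assumptions of this illustration (in particular, passing `None` as the empty conditioning).

```python
import torch

def eps_cfg(eps_model, x_t, t, cond, w):
    """Classifier-free guidance (Eq. 4): amplify the conditional noise estimate and
    subtract the unconditional one, weighted by the guidance scale w."""
    return (1 + w) * eps_model(x_t, t, cond) - w * eps_model(x_t, t, None)  # None = empty prompt

@torch.no_grad()
def ddpm_reverse_step(eps_model, x_t, t, cond, w, alpha_t, alpha_bar_t, sigma_t):
    """One reverse diffusion step (Eq. 1) using the guided noise estimate."""
    eps = eps_cfg(eps_model, x_t, t, cond, w)
    mean = (x_t - (1.0 - alpha_t) / (1.0 - alpha_bar_t) ** 0.5 * eps) / alpha_t ** 0.5
    return mean + sigma_t * torch.randn_like(x_t)
```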
### Distilling Knowledge into Stable Diffusion To use large generative models, and in particular Stable Diffusion [40], we chose to distill the learned biases of the target classifier into the generative model to avoid any gradient-based optimization during the CE formation. A model is subject to several biases as it learns, of which we distinguish two. The first is a _context bias_. This bias refers to the way images are formed. For example, ImageNet images [6] tend to have the object (_e.g_., animals, cars, bridges) in the center, while CelebA HQ images [31] are human faces. The second bias is class-specific, and it relates to the semantic cues extracted by the classifier to make its decision, _e.g_. white and black stripes for a zebra. So, we take a textual inversion approach to distill the context bias and the knowledge of the target classifier into the textual embedding space of Stable Diffusion. In a nutshell, textual inversion [12] links a new text-code \(c^{*}\) and an object (or style) such that when this new code is used, the generative model will generate this new concept. To achieve this, Gal _et al_. [12] proposed to instantiate a new text embedding \(e^{*}\), associate it to the new text-code \(c^{*}\), and then train \(e^{*}\) by minimizing the loss \[\mathbb{E}_{(x,C)\sim D,t\sim U[1,T],e\sim\mathcal{N}(0,I)}\left[L(x,t, \epsilon,C)\right]. \tag{5}\] Here, \(D\) is the set of images containing the concept to be learned, \(U\) is the uniform distribution of natural numbers between \(1\) and \(T\), and \(C\) is a text prompt containing the new text code \(c^{*}\). Accordingly, to distill the context bias into Stable Diffusion, we follow [12] practices and learn a new textual embedding \(e^{*}_{context}\) minimizing Eq. 5 using as the conditioning the phrase A \(c^{*}_{context}\) picture. Here, \(c^{*}_{context}\) is the textual code related to textual embedding \(e_{context}^{*}\). In our setup, we used the complete training set of images with no labels where the model was trained. So far, we have not been required to use the classifier. To transfer the knowledge learned by the classifier to the T2I generation pipeline, we follow a similar approach. In this case, we train a new textual embedding \(e_{i}^{*}\) for each class \(i\) and represent its text token with \(c_{i}^{*}\). However, instead of using the full training dataset \(D\), we used only those images that the classifier predicted to be the source class \(i\). As for the conditioning sentence, we take the previously learned context token and add the new class token to the sentence. Thus, we optimize Eq. 5 with the new phrase \(\mathbb{A}\)\(c_{context}^{*}\) image with a \(c_{i}^{*}\) and the filtered dataset. For the rest of the text, we will refer to this prompt as \(C_{i}\). ### Counterfactual Explanations Generation Now we want to use the learned embeddings to generate explanations. Current research on diffusion models has attempted to recover input images by retrieving the best noise, such that when the DDIM sampling strategy is used, it generates the initial instance. This is advantageous for our goal, since we can use current technological advances to generate this optimal latent noise and then inpaint the changes necessary to flip the classifier. Since we need to perform perfect recovery to avoid most changes in the input image, we use EDICT [49]'s perfect inversion technique. In fact, they showed that inverting an image with a caption (Eqs 8) and then denoising it (Eqs. 
7) with a modified version of the original caption will produce semantic changes in the image. In short, EDICT modifies the DDIM [46] sampling strategy for diffusion models into a two-flow invertible sequence. By introducing a new hyperparameter \(0<p<1\), setting \(x_{0}\) and \(y_{0}\) as the target image, and new variables: \[\begin{split} a_{t}&=\sqrt{\bar{\alpha}_{t-1}/\bar {\alpha}_{t}}\\ b_{t}&=\sqrt{1-\bar{\alpha}_{t-1}}-\sqrt{\bar{ \alpha}_{t-1}(1-\bar{\alpha}_{t})/\bar{\alpha}_{t}},\end{split} \tag{6}\] the denoising phase becomes: \[\begin{split} x_{t}^{inter}&=a_{t}\,x_{t}+b_{t}\, \epsilon_{\theta}^{f}(y_{t},t,C)\\ y_{t}^{inter}&=a_{t}\,y_{t}+b_{t}\,\epsilon_{ \theta}^{f}(x_{t}^{inter},t,C)\\ x_{t-1}&=p\,x_{t}^{inter}+(1-p)\,y_{t}^{inter}\\ y_{t-1}&=p\,y_{t}^{inter}+(1-p)\,x_{t-1}.\end{split} \tag{7}\] In a similar vein, the inversion phase is the inverse of Eqs. 7: \[\begin{split} y_{t+1}^{inter}&=(y_{t}-(1-p)\,x_{t} )\ /\ p\\ x_{t+1}^{inter}&=(x_{t}-(1-p)\,y_{t+1}^{inter})\ /\ p\\ y_{t+1}&=\frac{1}{a_{t+1}}(y_{t+1}^{inter}-b_{t+1} \,\epsilon_{\theta}^{f}(x_{t+1}^{inter},t+1,C))\\ x_{t+1}&=\frac{1}{a_{t+1}}(x_{t+1}^{inter}-b_{t+1 }\,\epsilon_{\theta}^{f}(y_{t+1}^{inter},t+1,C)).\end{split} \tag{8}\] We can see a clear connection between Wallace _et al_. [49]'s work and our main objective. If we invert an image using the caption with our context and source class tokens and then denoise it by changing the prompt to include the target token (learned in Section 3.2), we can hope to generate the necessary changes to flip the classifier's decision. However, while adapting the EDICT method, we noticed a major problem with this approach. Although the chosen algorithm recovers the input instance, many images were difficult to modify. To circumvent this issue, we had to adjust the scores of the CFG in Eq. 4. As diffusion models are seen as score-matching models, the term \[w(\epsilon_{\theta}(x_{t},t,C_{i})-\epsilon_{\theta}(x_{t},t,\varnothing)) \tag{9}\] in Eq. 4 are gradients pointing to the target distribution conditioned on \(C_{i}\). We call this the positive drift. Thus, by including a negative drift term, \[-w(\epsilon_{\theta}(x_{t},t,C_{j})-\epsilon_{\theta}(x_{t},t,\varnothing)), \tag{10}\] we can lead the generation process _away_ from the source distribution conditioned in \(C_{j}\). Therefore, we reformulate the CFG scores \(\epsilon_{\theta}^{f}\), and rename it to \(\epsilon_{\theta}^{c}\), as follows: \[\begin{split}\epsilon_{\theta}^{c}(x_{t},t,C_{i},C_{j})=(1+w) \,\epsilon_{\theta}(x_{t},t,C_{i})\\ -w\,\epsilon_{\theta}(x_{t},t,C_{j}).\end{split} \tag{11}\] As a result, and given the previously introduced notions, we propose **T**ext-to-**I**mage **M**odels for counterfactual **E**xplanations (TIME), illustrated in Figure 1. To leverage these big generative models, we first distill the context bias into the pipeline's text embedding space by training a text embedding with the complete dataset. Then, we transfer the knowledge of the classifier by training a new embedding but using solely the instances with the same predictions. Finally, given an input image classified as \(i\) and the target \(j\), we invert the image (Eqs. 8) using \(\epsilon_{\theta}^{c}\) as the score network (Eq. 11) using as the positive and negative drift \(C_{i}\) and \(C_{j}\), respectively. Then, we denoise the noisy state using Eqs. 7 but switching textual conditionings. 
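Putting Eqs. 7 and 11 together, the sketch below illustrates one guided, coupled denoising step as used during counterfactual generation; `eps_model`, the prompt embeddings and the schedule constants \(a_t, b_t\) are placeholders for Stable Diffusion's components, and during the inversion of Eqs. 8 the same guidance is applied with the positive and negative roles of the two prompts swapped.

```python
import torch

def eps_counterfactual(eps_model, x_t, t, emb_pos, emb_neg, w):
    """Eq. 11: drift towards the distribution of the positive prompt and away from the
    negative one (the unconditional term of CFG is replaced by the negative prompt)."""
    return (1 + w) * eps_model(x_t, t, emb_pos) - w * eps_model(x_t, t, emb_neg)

@torch.no_grad()
def edict_denoise_step(eps_model, x_t, y_t, t, emb_pos, emb_neg, w, a_t, b_t, p=0.93):
    """One coupled EDICT denoising step (Eq. 7) with the counterfactual guidance above.
    For generation, emb_pos holds the target-class prompt and emb_neg the source-class prompt."""
    x_inter = a_t * x_t + b_t * eps_counterfactual(eps_model, y_t, t, emb_pos, emb_neg, w)
    y_inter = a_t * y_t + b_t * eps_counterfactual(eps_model, x_inter, t, emb_pos, emb_neg, w)
    x_prev = p * x_inter + (1 - p) * y_inter
    y_prev = p * y_inter + (1 - p) * x_prev
    return x_prev, y_prev
```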
Practical considerations.To avoid large changes in the image, the inversion stops at an intermediate step \(\tau\) instead of \(T\). In addition, we have found that using more than a single embedding for the context and class biases yield further expressiveness. Also, if we fail to find a valid counterfactual, we choose a new \(\tau\) and \(w\) to rerun the algorithm. We will give the implementation details later in Section 4. ## 4 Experimental Validation Datasets and Models.We evaluate our counterfactual method in the popular dataset CelebAHQ [31]. The task at hand is classifying smile and age attributes from face instances, computed with a DenseNet121 [21] with an image resolution of \(256\times 256\) as in [25, 23]. The evaluation is performed on the test set. To make the assessment fair with previous methods, we used the publicly available classifiers for CelebA HQ dataset from previous studies [23]. Implementation Details.We based our approach on Stable Diffusion V1.4 [10]. For all dataset, we trained three textual embeddings for the context and class biases for 800 iterations with a learning rate of 0.01, a weight decay of 1e-4, and a batch size of 64. For the inference, we used the default EDICT's hyperparameter \(p=0.93\) and a total of 50 steps. For the smiling attribute, we begin the CE generation with \((\tau,w)=(25,3)\). In case of failure, we increased the tuple to \((30,4),(35,4)\) or \((35,6)\). For the age attribute, we used \((\tau,w)\in\{(30,4),(30,6),(35,4),(35,6)\}\). We performed all training and inference in a Nvidia GTX 1080. ### Quantitative Assessment Assessing counterfactuals presents inherent challenges. Despite this, several metrics approximate the core objectives of counterfactual analysis. We will now provide a concise overview of each objective and its frequent evaluation protocol, reserving an in-depth exploration of these metrics for the supplementary material. Validity.First, we need to quantify the ability of the counterfactual explanation method to flip the classifier. This is measured by the Success Ratio (SR aka Flip Rate). Sparsity and Proximity.A counterfactual must have sparse and proximal editions. Several metrics have been proposed to evaluate this aspect, depending on the data type. For face images [24, 25, 39, 45], there are the face verification accuracy (FVA), face similarity (FS), mean number of attributes changed (MNAC), and Correlation Difference (CD). For general-purpose images, like BDD100k [50], the quantitative assessment is done via the SimSiam Similarity (\(S^{3}\)) [25] and the COUT metric [28]. Realism.The CE research adapts its evaluation metrics from the generation field. Hence, the realism of CEs is commonly measured with the FID [18] and sFID [25] metrics but only in the correctly classified images. Efficiency.An efficiency analysis is often omitted by many methods. A crucial criterion for counterfactual generation techniques is to minimize computation time for generating explanations in "real time". We evaluate this by contrasting efficiency using floating point operations (FLOPs) per explanation - lower values signify faster inference - and by measuring the average time taken to generate an explanation, specifically within our cluster environment. #### 4.1.1 Main Results. Table 2 shows the results of TIME and compares them to the previous literature. Although we do not outperform the state-of-the-art in any metric, we found that our results are similar even when our proposed method is restricted to be black-box. 
Further, it does not require training a completely new generative model and does not rely on any optimization for CE generation. For the realism metric, we expected to get a low FID [18] and sFID [25] due to the use of Stable Diffusion and to beat ACE [25]. However, ACE uses an inpainting strategy to post-process its counterfactuals. This reduces this metric because they keep most of the original pixels in their output. If we remove the post-processing, the FID increases dramatically. With these results, we confirm that T2I generative models are a good tool to explain classifiers counterfactually in a black-box environment.

Figure 1: **TIME Overview.** Our proposed method consists of three steps: (a) We learn a context token for the whole dataset using textual inversion. (b) We filter out the images that the classifier predicts as source class \(i\) and learn a new embedding. (c) Finally, to generate the counterfactual explanation, we invert the input image using a prompt containing the source embedding and then denoise it using the target embedding.

#### 4.1.2 Qualitative Results

We show some qualitative results in Figure 2 and add more instances in the supplementary material. First, we see that DiME [24], ACE [25], and TIME generate very realistic counterfactuals, and the differences are mostly in the details. However, the most notable changes are between ACE and our method. When we check the regions where ACE made the changes, they are blurred. This is due to their over-resp spacing to create the counterfactual. For DiME, we checked and found that some of their modifications seem out-of-distribution in many cases. However, TIME produces realistic changes most of the time. Finally, in our opinion, TIME's alterations can be spotted with more ease.

#### 4.1.3 Efficiency Analysis

We continue our analysis and study the efficiency of TIME when creating the CE with respect to the previous state-of-the-art methods, DiME [24] and ACE [25]. We estimated that TIME uses \(98\) TFLOPs and \(45\) seconds to create a single counterfactual, using \(\tau=35\) as the worst-case scenario. In contrast, ACE took \(279\) TFLOPs and \(62\) seconds per CE, while DiME took \(1004\) TFLOPs and \(163\) seconds.

### Ablations

To show the effectiveness of each component, we performed thorough ablation experiments. To this end, we first show the hyperparameter exploration between the depth of the noising chain \(\tau\) and the guidance scale \(w\). Additionally, we show the effect of including multiple textual tokens, the context tokens, and, finally, the effect of adding our negative drift - please refer to the practical considerations in Section 3.3 for the variable \(\tau\).
\begin{table} \begin{tabular}{c|c c c c c c c c|c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{8}{c|}{**Smile**} & \multicolumn{8}{c}{**Age**} \\ \cline{2-17} & FID (\(\downarrow\)) & sFID (\(\downarrow\)) & FVA (\(\uparrow\)) & FS (\(\uparrow\)) & MNAC (\(\downarrow\)) & CD (\(\downarrow\)) & COUT (\(\uparrow\)) & SR (\(\uparrow\)) & FID (\(\downarrow\)) & sFID (\(\downarrow\)) & FVA (\(\uparrow\)) & FS (\(\uparrow\)) & MNAC (\(\downarrow\)) & CD (\(\downarrow\)) & COUT (\(\uparrow\)) & SR (\(\uparrow\)) \\ \hline DiVE [99] & 107.0 & - & 35.7 & - & 7.41 & - & - & - & 107.5 & - & 32.3 & - & 6.76 & - & - & - \\ STEEX [23] & 21.9 & 97.6 & - & 5.27 & - & - & 26.8 & - & 96.0 & - & 5.63 & - & - & - \\ DiME [24] & 18.1 & 27.7 & 96.7 & 0.6729 & 2.63 & 1.82 & 0.6495 & 97.0 & 18.7 & 27.8 & 95.0 & 0.6597 & 2.10 & 4.29 & 0.5615 & 97.0 \\ ACE [25] & 26.1 & 36.8 & 99.8 & 0.8200 & 2.33 & 2.49 & 0.4716 & 95.7 & 24.6 & 38.0 & 99.6 & 0.7680 & 1.95 & 4.61 & 0.4550 & 98.7 \\ ACE [25] & 23.2 & 20.2 & 100.0 & 0.8941 & 1.56 & 2.61 & 0.5496 & 95.0 & 5.31 & 21.7 & 99.6 & 0.8085 & 1.53 & 5.4 & 0.9894 & 95.0 \\ ACE [25] & 26.0 & 35.2 & 99.9 & 0.8010 & 2.39 & 2.40 & 0.5088 & 97.9 & 24.2 & 34.9 & 99.4 & 0.7609 & 2.02 & 4.29 & 0.5332 & 99.7 \\ ACE [25] & 6.9 & 22.0 & 100.0 & 0.8440 & 1.87 & 2.21 & 0.5946 & 95.0 & 16.4 & 28.2 & 99.6 & 0.7743 & 1.92 & 4.21 & 0.5503 & 95.0 \\ \hline TIME (Ours) & 10.98 & 23.8 & 96.6 & 0.7896 & 2.97 & 2.32 & 0.6303 & 97.1 & 20.9 & 32.9 & 79.3 & 0.6328 & 4.19 & 4.29 & 0.3124 & 89.9 \\ \hline \hline \end{tabular} \end{table} Table 2: **CelebA HQ Evaluation.** While TIME does not outperform the state of the art on these metrics, our proposed method provides competitive performance while being completely black-box, having access only to the input and output of the model. ACE* is [25]'s method without their post-processing step.

Figure 2: **Qualitative Results.** We present qualitative examples and compare them to the previous state of the art. DiME generates some out-of-distribution noise, while ACE creates blurry image sections. In contrast, TIME produces more realistic changes by harnessing the generative power of the T2I model.

Unless explicitly told, we set \(\tau=35\) and \(w=4\) for all the ablations. For the dataset, we ran the ablations using 1000 instances of the CelebA HQ validation dataset for the smiling attribute. As the quantitative metrics, we used the SR, the FID, the FS, and the CD. Regarding the FID metric, please note that it is very sensitive to the number of images. When using fewer images, the FID becomes less reliable for comparing two methods, and it is hardly interpretable if the two approaches are evaluated on different numbers of images. Since we use the FID to compare counterfactuals only on those instances that flipped the classifier, comparing FIDs where the SR varies significantly is not informative.

Steps and scale trade-off. To begin with, we investigate the effect of the number of inversion steps and the scale of the guidance. We jointly explore both variables to find the best trade-off, as shown in Table 3. At first glance, we notice that a higher guidance scale or more noise inversion steps produce more successful counterfactuals, as assessed with the SR. Yet, this comes with a trade-off in other respects: namely, the quality of the CE and the number of edits to the image. Generally, increasing \(\tau\) or \(w\) decreases the quality of the image and increases the number of edits.

Learning the Context Token. Continuing with our study, we analyze the inclusion of our novel context token into our counterfactual generation pipeline.
To ablate this component, we test whether using our learned context tokens has any advantage over giving a generic description. The results are in Table 4. As we can see, including our tokens provides the best performance gains in terms of SR. Qualitatively, the images are similar, yet the images without context present some artifacts in certain cases. Furthermore, we see that removing the context provides a boost in the CD and FS metrics. Although it seems counterintuitive to include this component, we can easily reach these values by decreasing \(\tau\) or \(w\) (setting \(\tau=30\) and \(w=4\), see Table 3) while also reducing the inference time.

Effect of the guidance. We further explore the inclusion of the negative drift term in Eq. 10 and show the results in Table 5. From the quantitative assessment, we initially observed that using the classifier-free guidance (CFG in the table) decreases the SR. When denoising the current state \(x_{t}\) at time \(t\), the CFG in Eq. 4 estimates gradients of the log-likelihood conditioned on \(C_{j}\), \(-\nabla_{x_{t}}\log(p(x_{t}|C_{j}))\) [20], thus pushing the generation _toward_ the distribution of \(C_{j}\). In contrast, incorporating the negative guidance (NG) helps steer the generation _away_ from the distribution conditioned on \(C_{i}\). Therefore, the combined effect moves the instance away from the decision boundary. From a qualitative perspective, we did not see major differences. Nonetheless, as noted in the context-token ablation, this can be easily mitigated by reducing \(w\) and \(\tau\).

Multi-token Inclusion. Finally, we explore using multiple tokens instead of a single one for both the context and class embeddings, as shown in Table 6. Without any surprise, we noticed that using a single token reduces the SR by a small factor. This aligns with the observations given by [12]: a single token captures enough information about an object or style - or, in this case, about the inductive biases. As in the previous analyses, including multiple tokens can increase the efficiency of the model, since we can reach similar performances by tuning \(\tau\) or \(w\). Qualitatively, the most notable change between the images is sharpness.

\begin{table} \begin{tabular}{c|c c c c} \hline \hline Tokens & SR (\(\uparrow\)) & FID (\(\downarrow\)) & FS (\(\uparrow\)) & CD (\(\downarrow\)) \\ \hline Single & 88.1 & 22.02 & 0.7177 & 3.02 \\ Multiple & 92.9 & 24.37 & 0.6731 & 3.03 \\ \hline \hline \end{tabular} \end{table} Table 6: **Multiple-tokens Ablation.** We test whether using multiple tokens in our pipeline provides any advantage. The results show an increase in SR.

\begin{table} \begin{tabular}{c|c|c c c c} \hline \hline Steps & GS & SR (\(\uparrow\)) & FID (\(\downarrow\)) & FS (\(\uparrow\)) & CD (\(\downarrow\)) \\ \hline & 3 & 30.1 & 35.26 & 0.8957 & 2.82 \\ 25 & 4 & 41.0 & 30.23 & 0.8570 & 2.61 \\ & 5 & 50.1 & 27.39 & 0.8231 & 2.33 \\ \hline & 3 & 62.1 & 23.15 & 0.8147 & 2.34 \\ 30 & 4 & 74.0 & 22.51 & 0.7710 & 2.66 \\ & 5 & 80.8 & 23.51 & 0.7300 & 2.85 \\ \hline & 3 & 87.1 & 21.69 & 0.7227 & 2.63 \\ 35 & 4 & 92.9 & 24.37 & 0.6731 & 3.03 \\ & 5 & 95.0 & 27.53 & 0.6306 & 3.54 \\ \hline \hline \end{tabular} \end{table} Table 3: **Steps-Scale trade-off.** We analyze the trade-off between our hyperparameters \(\tau\) and \(w\). Our results show that increasing \(\tau\) gives a strong boost in SR while impacting the other metrics and increasing the generation time. In contrast, \(w\) has a similar effect but is less potent, without any effect on the generation time.

Recommendations. Given the previous results, we propose several recommendations for the user and the model debugger, as explained in the introduction. Recall that counterfactual explanations are also used to recommend changes to the user to obtain a positive outcome. So, for the user, we recommend using a low number of iterations \(\tau\) and a low guidance scale \(w\). This results in an increase in similarity and fewer edited characteristics (as evidenced by the CD and FS metrics). If the algorithm fails, it is preferable to adjust the guidance scale rather than the number of steps. For the debugger, always use the context, the negative guidance, and multiple tokens. When building the counterfactuals, follow the same recommendations as for the user.

### Limitations

To test TIME in more complex scenarios, we generate CEs on the BDD100k [50] dataset using a DenseNet121 [21] trained on a _move-forward/stop_ binary classification task, as in [23]. We show the quantitative evaluation in Table 7. When generating the explanations, we noticed that, unfortunately, TIME modifies most parts of the image, as shown by the \(S^{3}\) metric. This is expected, as this task is challenging since multiple factors determine whether to stop or to move forward. Nevertheless, we believe that these explanations still give some useful insights as a debugging tool. For example, Figure 3 shows that removing the red lights and adding motion blur will change the classification from _stop_ to _move_, as evidenced in [25], or that adding objects in front will flip the prediction to _stop_. We believe that counterfactual methods for tasks dependent on complex scenes, where the decision is impacted by large objects or co-occurrences of several stimuli, require specific architectures. In fact, we noticed that ACE [25] mainly adds some small modifications (_e.g_., changing the red lights), which is not inaccurate but is too constrained and cannot uncover further insights about the learned features. Indeed, the work of Zemni _et al_.
[51] focuses only on the object aspect of counterfactuals, in this case using an object-centric generator, BlobGAN [9]. This suggests that general-purpose counterfactual methods are not well adapted to these tasks.

## 5 Conclusion

In this work, we present TIME, a counterfactual generation method that analyzes classifiers regardless of their architecture and weights, looking only at their inputs and outputs. By leveraging T2I generative models and a distillation approach, our method is capable of producing CEs for black-box models, a complex scenario not tackled before. Further, we show the advantages and limitations of TIME and shed light on possible future works. We believe that our approach opens the door to research focused on counterfactual methods in the challenging black-box scenario.

**Acknowledgements** Research reported in this publication was supported by the Agence Nationale pour la Recherche (ANR) under award number ANR-19-CHIA-0017.

\begin{table} \begin{tabular}{c|c c c c c} \hline Method & FID (\(\downarrow\)) & sFID (\(\downarrow\)) & S\({}^{3}\) (\(\uparrow\)) & COUT (\(\uparrow\)) & SR (\(\uparrow\)) \\ \hline STEEX & 58.8 & - & - & - & 99.5 \\ DiME & 7.94 & 11.40 & 0.9463 & 0.2435 & 90.5 \\ ACE \(\ell_{1}\) & 1.02 & 6.25 & 0.9970 & 0.7451 & 99.9 \\ ACE \(\ell_{2}\) & 1.56 & 6.53 & 0.9946 & 0.7875 & 99.9 \\ \hline TIME (Ours) & 51.5 & 76.18 & 0.7651 & 0.1490 & 81.8 \\ \hline \end{tabular} \end{table} Table 7: **BDD Assessment.** We evaluate the performance of TIME on the complex BDD100k benchmark. On this dataset, there is still room for improvement for black-box counterfactual methods.

Figure 3: **BDD100k, a limit for TIME.** TIME changes the entire scene when generating the counterfactuals. Nevertheless, it still gives some insight into what the models have learned, as illustrated by the features inside the red boxes.
2309.16849
Space-Time Attention with Shifted Non-Local Search
Efficiently computing attention maps for videos is challenging due to the motion of objects between frames. While a standard non-local search is high-quality for a window surrounding each query point, the window's small size cannot accommodate motion. Methods for long-range motion use an auxiliary network to predict the most similar key coordinates as offsets from each query location. However, accurately predicting this flow field of offsets remains challenging, even for large-scale networks. Small spatial inaccuracies significantly impact the attention module's quality. This paper proposes a search strategy that combines the quality of a non-local search with the range of predicted offsets. The method, named Shifted Non-Local Search, executes a small grid search surrounding the predicted offsets to correct small spatial errors. Our method's in-place computation consumes 10 times less memory and is over 3 times faster than previous work. Experimentally, correcting the small spatial errors improves the video frame alignment quality by over 3 dB PSNR. Our search upgrades existing space-time attention modules, which improves video denoising results by 0.30 dB PSNR for a 7.5% increase in overall runtime. We integrate our space-time attention module into a UNet-like architecture to achieve state-of-the-art results on video denoising.
Kent Gauen, Stanley Chan
2023-09-28T20:59:51Z
http://arxiv.org/abs/2309.16849v2
# Space-Time Attention with Shifted Non-Local Search

###### Abstract

Efficiently computing attention maps for videos is challenging due to the motion of objects between frames. While a standard non-local search is high-quality for a window surrounding each query point, the window's small size cannot accommodate motion. Methods for long-range motion use an auxiliary network to predict the most similar key coordinates as offsets from each query location. However, accurately predicting this flow field of offsets remains challenging, even for large-scale networks. Small spatial inaccuracies significantly impact the attention module's quality. This paper proposes a search strategy that combines the quality of a non-local search with the range of predicted offsets. The method, named Shifted Non-Local Search, executes a small grid search surrounding the predicted offsets to correct small spatial errors. Our method's in-place computation consumes 10 times less memory and is over 3 times faster than previous work. Experimentally, correcting the small spatial errors improves the video frame alignment quality by over 3 dB PSNR. Our search upgrades existing space-time attention modules, which improves video denoising results by 0.30 dB PSNR for a 7.5% increase in overall runtime. We integrate our space-time attention module into a UNet-like architecture to achieve state-of-the-art results on video denoising.

## 1 Introduction

Attention modules form data-dependent receptive fields to aggregate related features from arbitrary coordinates. This functionality is considered to be central to the success of large-scale networks (Dosovitskiy et al., 2020; Hassani et al., 2023; Tian et al., 2020; Liang et al., 2022). Recent efforts aggregate features across frames of a video, enabling deep networks to learn temporal representations.

Figure 1: **Comparing the Search Space of Attention Modules.** (From left to right) ViT uses an exhaustive, global grid search which is computationally costly (Dosovitskiy et al., 2020). A non-local search can be implemented efficiently but does not shift the search space according to the motion between frames (Hassani et al., 2023). The predicted offsets used in Guided Deformable Attention allow for long-range dependencies, but the flow fields contain small spatial inaccuracies (Liang et al., 2022). Our method, the Shifted Non-Local Search, combines the quality of a non-local search with the range of predicted offsets. It executes a small grid search surrounding the predicted offsets to correct small spatial errors.

For images, the receptive fields are often bounded by a window surrounding the query location to reduce computation and the risk of overfitting. However, across frames of a video, this window must shift to data-dependent locations according to the motion. Long-range offsets are required, such as optical flow or a nearest neighbors field (Barnes et al., 2010; Ranjan and Black, 2017). Non-local search strategies, such as NATTEN, provide excellent short-range receptive fields (Hassani et al., 2023). However, this category of method does not offset the search window, so it cannot handle the motion inherent to a space-time search. Alternative methods, such as Guided Deformable Attention, predict long-range offsets using an auxiliary network to accommodate motion (Liang et al., 2022). However, accurately predicting flow fields remains an open challenge, even for large-scale networks (Butler et al., 2012).
This paper combines the quality of the non-local search with the range of predicted offsets. Our method, named Shifted Non-Local Search (Shifted-NLS), executes a small windowed grid search surrounding the predicted offset. For a marginal increase in wall-clock runtime, our search method acts as a correction step to the predicted offsets. In addition, our grid search is differentiable, which allows networks to learn long-range offsets. Our method works for attention because, unlike optical flow's goal of estimating apparent motion, standard attention modules are defined through a grid search. We show our search method improves video alignment, upgrades existing space-time attention modules, and enables a state-of-the-art architecture for video denoising. Critically, this paper also offers a practical means to compute the Shifted Non-Local Search. An important related work, named N3Net, already offers a similar method (Plotz and Roth, 2018). However, their method is not presented in the context of attention and requires integer-spaced indexing. Also, the N3Net search's forward runtime is 3-7x slower than our search and requires over a 10-25x spike in GPU memory. These computational demands may explain why the module has not been adopted in recent works on space-time attention, and our Pytorch-friendly module offers a practical alternative (Paszke et al., 2019). In summary, our contributions are: (i) We propose the shifted non-local search module for space-time attention. The module corrects spatial errors of predicted offsets using a high-fidelity windowed grid search. (ii) Our implementation uses in-place computation to reduce computational demands compared to previous work, using 10 times less memory and executing 3 times faster than N3Net (Plotz and Roth, 2018). While our code is not explicitly optimized for speed, our search's runtime is only 1 - 2.5 times slower than an optimized space-only non-local search (Hassani et al., 2023). (iii) Our search method improves video alignment quality by more than 3 dB PSNR, yielding improved deep network quality for video denoising. ## 2 Related Works **Space-Only Attention:** Attention modules often use a modified search space to be computationally efficient, and most of them search only spatially (Dosovitskiy et al., 2020; Mou et al., 2021; Liu et al., 2021). Hassani et al. (2023) offers an efficient non-local search but cannot accommodate long-range offsets. Xia et al. (2022) applies predicted offsets for single images but suffers from the inaccuracy of using a network to predict the flow field. **Space-Time Attention:** Recent works propose temporal attention modules using predicted offsets learned with auxiliary networks which are inspired by Deformable Convolution (Dai et al., 2017). Examples of these methods include the temporal mutual self-attention module (TSMA), the temporal deformed alignment module (TDAN), and the guided deformable attention module (GDA) (Liang et al., 2022; Tian et al., 2020; Liang et al., 2022). Each method predicts pixel-level offsets and warps an adjacent frame to match a query frame. These methods all require training a network to learn these offsets. Plotz and Roth (2018) proposed N3Net which does execute a shifted grid search, but its implementation is not connected to attention modules, does not propagate gradients through its grid search, and requires expensive computation. Video Non-Local Bayes is a classical method that can be formulated as an attention module (Arias and Morel, 2018). 
Figure 1 compares the search space of related works on a single frame. **Restoration Architectures:** Presented concurrently with new attention modules, authors often present an architecture design for video restoration. TDAN is used for video super-resolution, and RVRT is applied to video super-resolution, deblurring, and denoising (Tian et al., 2020; Liang et al., 2022). Their attention only applies to frame pairs, while ours searches multiple frames in parallel. ## 3 Method ### Problem Setup The attention modules described in this section introduce increasingly sophisticated search methods to establish notation and illustrate how the Shifted Non-Local Search naturally extends related works. **Global Attention.** An input video, \(\mathbf{X}_{\text{in}}\), has shape \(T\times H\times W\times F\) denoting frames, height, width, and features. The video is projected with a \(1\times 1\) convolution to create the query (\(\mathbf{Q}\)), key (\(\mathbf{K}\)), and value (\(\mathbf{V}\)) videos. When the videos are reshaped into matrices of size \(THW\times F\), we use a subscript \(M\), i.e. \(\mathbf{Q}_{M}\). Attention consists of two steps: search and aggregate. Searching computes the similarity between the queries and keys, often using an outer product written as the matrix \(\mathbf{S}=\mathbf{Q}_{M}\mathbf{K}_{M}^{\mathsf{T}}\) with shape \(THW\times THW\). Aggregation computes the weighted sum of key rows written as \(\mathbf{A}=\sigma(\mathbf{S})\mathbf{V}_{M}\) with shape \(THW\times F\) where \(\sigma(\cdot)\) is the softmax function applied across the columns. In summary, \(\mathbf{X}_{\text{out}}=\text{Attention}(\mathbf{X}_{\text{in}})=\text{reshape}( \sigma(\mathbf{Q}_{M}\mathbf{K}_{M}^{\mathsf{T}})\mathbf{V}_{M})\)(Dosovitskiy et al., 2020). The global search requires expensive computation and is unnecessary for some applications. **Neighborhood Attention.** Neighborhood Attention constructs a sparse similarity matrix by reducing the number of similarities computed between the queries and keys (Hassani et al., 2023). With specialized code, this attention is much faster than the global search and reduces the risk of overfitting. For each query, the similarity will only be computed for keys within a spatial window of size \((W_{s},W_{s})\) surrounding the query's coordinate. To describe this in detail, we associate the \(i^{\text{th}}\) row of the similarity matrix with the 3D coordinate at \((t_{i},h_{i},w_{i})\). The similarities are now computed as \(\mathbf{S}[i,j]=\mathbf{Q}_{M}[i]\mathbf{K}_{M}[j]^{\mathsf{T}}=\mathbf{Q}[t_{i},h_{i},w_{i}] \mathbf{K}[t_{j},h_{j},w_{j}]^{\mathsf{T}}\) when \((t_{j},h_{j},w_{j})\in\{(t_{i},h_{i}-W_{s}/2+\delta_{h},w_{i}-W_{s}/2+\delta_{ w}):\delta_{h},\delta_{w}\in\{0,\dots,W_{s}-1\}\}\). Since most columns of \(\mathbf{S}\) are zero, the data is restructured as \(\mathbf{S}[i,\delta_{h},\delta_{w}]=\mathbf{Q}[t_{i},h_{i},w_{i}]\mathbf{K}[t_{i},h_{i}-W _{s}/2+\delta_{h},w_{i}-W_{s}/2+\delta_{w}]^{\mathsf{T}}\). **The Non-Local Search.** Rather than compute similarities between pixels, the standard non-local search from denoising literature operates on patches (Buades et al., 2011). Patches are more robust to noise than pixels and allow query coordinates to be skipped with an integer-valued query stride. The final output will be valid (e.g. no holes) when the patch size (\(P\)) and query stride (\(S_{Q}\)) satisfy the following condition, \([(P-1)/2]<S_{Q}\). 
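Before refining the indexing further, the dense baseline described at the start of this section can be made concrete. The following is a toy PyTorch sketch of the global search-and-aggregate step with arbitrary shapes; it is not the paper's implementation, and it only illustrates the \(THW\times THW\) similarity matrix that the windowed and shifted searches are designed to avoid.

```python
import torch

T, H, W, Fdim = 3, 32, 32, 8
x = torch.randn(T, H, W, Fdim)                       # input video X_in

# 1x1-convolution projections reduce to per-pixel linear maps here.
Wq, Wk, Wv = (torch.randn(Fdim, Fdim) for _ in range(3))
Q, K, V = (x.reshape(-1, Fdim) @ Wm for Wm in (Wq, Wk, Wv))   # (T*H*W, F) matrices

S = Q @ K.T                                          # similarity matrix, shape (THW, THW)
A = torch.softmax(S, dim=-1) @ V                     # weighted sum of key rows
out = A.reshape(T, H, W, Fdim)                       # Attention(X_in)

print(S.shape)                                       # torch.Size([3072, 3072])
```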
To clean up the messy indexing, we compactly write the spatial (height) index as \(h_{i}(\delta_{h})=h_{i}-W_{s}/2+\delta_{h}\). Similarity values are now computed as \(\mathbf{S}[i,\delta_{h},\delta_{w}]=\sum_{p_{h},p_{w}=-P/2,-P/2}^{P/2,P/2}\mathbf{Q}[t_{i},h_{i}+p_{h},w_{i}+p_{w}]\mathbf{K}[t_{i},h_{i}(\delta_{h}+p_{h}),w_{i}(\delta_{w}+p_{w})]^{\mathsf{T}}\) where \(i\in\{0,\dots,T(HW/S_{Q}^{2})-1\}\), so \(\mathbf{S}\) has shape \(THW/S_{Q}^{2}\times W_{s}\times W_{s}\).

### The Shifted Non-Local Search

**The Shifted Non-Local Search.** A Shifted Non-Local Search (Shifted-NLS) executes a Non-Local Search with the center of each spatial window shifted by an offset. The offsets between frames \(t\) and \(t-1\) are denoted as \(\mathbf{F}_{\text{in}}\) with shape \(T\times H\times W\times 2\). The center of the search window is _shifted_ from \((h_{i},w_{i})\) to \((h_{i}+\Delta_{h}(i),w_{i}+\Delta_{w}(i))\) with \((\Delta_{h}(i),\Delta_{w}(i))=\mathbf{F}_{\text{in}}[t_{i},h_{i},w_{i}]\). This shift is depicted in Figure 2 by the colored circles at the end of the arrows under "Predicted Offsets". The similarities are computed as \(\mathbf{S}[i,\delta_{h},\delta_{w}]=\sum_{p_{h},p_{w}=-P/2,-P/2}^{P/2,P/2}\mathbf{Q}[t_{i},h_{i}+p_{h},w_{i}+p_{w}]\mathbf{K}[t_{i}-1,h_{i}(\delta_{h}+p_{h})+\Delta_{h}(i),w_{i}(\delta_{w}+p_{w})+\Delta_{w}(i)]^{\mathsf{T}}\), using compact notation for the spatial (height) index, \(h_{i}(\delta_{h})=h_{i}-W_{s}/2+\delta_{h}\). These offset search windows are depicted by the colored squares under "Shifted Non-Local Search" in Figure 2. The output offsets are the displacements from each query coordinate: \(\mathbf{F}_{\text{out}}[i,\delta_{h},\delta_{w}]=(h_{i}(\delta_{h})+\Delta_{h}(i)-h_{i},\,w_{i}(\delta_{w})+\Delta_{w}(i)-w_{i})\).

Figure 2: **The Shifted Non-Local Search for Space-Time Attention.** This figure depicts a space-time attention module using the Shifted Non-Local Search. The query points are deformed using the predicted offsets. Next, a grid search is executed surrounding the predicted offsets, and then the most similar locations are chosen from the search window. These locations are aggregated using a module such as Guided Deformable Attention.

Once the similarities are computed, we collapse the search dimensions (\(W_{s}\times W_{s}\)) into a single dimension (\(W_{s}^{2}\)) and retain only the top-L (aka "top-K") most similar columns, \(\mathbf{S}_{L},\mathbf{F}_{\text{out},L}=\text{top-L}(\mathbf{S},\mathbf{F}_{\text{out}},L)\). The top-L operator has known theoretical issues with differentiation, but we observe networks still learn good weights despite this (Plotz & Roth, 2018). The top-L (\(L=1\)) coordinates are depicted under "Selected Locations" on the far right of Figure 2. This output is written as the similarity (\(\mathbf{S}_{L}\)) and offset (\(\mathbf{F}_{\text{out},L}\)) tensors with shapes \(T(HW)/S_{Q}^{2}\times L\) and \(T(HW)/S_{Q}^{2}\times L\times 2\), respectively. In summary: \(\mathbf{S}_{L},\mathbf{F}_{\text{out},L}=\text{Shifted-NLS}(\mathbf{Q},\mathbf{K},\mathbf{F}_{\text{in}},L)\). In practice, the Shifted-NLS is computed in parallel across a temporal window of size \(W_{t}\). Additionally, a key stride (\(S_{K}\)) changes the spacing between points in the grid search to allow for sub-pixel correction, \(h_{i}(S_{K}\delta_{h}+p_{h})=h_{i}-S_{K}W_{s}/2+S_{K}\delta_{h}+p_{h}\). And since these coordinates are floating-points, bilinear interpolation is used for efficient indexing (Jeon & Kim, 2017).
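A reference (unoptimized) version of this search can be written in a few lines of PyTorch. The sketch below fixes the patch size to 1, handles a single query/key frame pair, and uses bilinear interpolation for fractional coordinates; the strides, temporal window, and the CUDA-level in-place computation of the actual module are omitted, and all names are our own.

```python
import torch
import torch.nn.functional as F

def shifted_nls(q, k, flow, ws=9, topk=9):
    """Shifted non-local search between two frames (patch size 1).

    q, k  : (H, W, C) query-frame and key-frame features
    flow  : (H, W, 2) predicted offsets (dy, dx) from each query location
    Returns top-k similarities (H, W, topk) and corrected offsets (H, W, topk, 2).
    """
    H, W, C = q.shape
    r = ws // 2
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    centers = torch.stack([ys, xs], dim=-1).float() + flow         # shifted window centers
    dy, dx = torch.meshgrid(torch.arange(-r, r + 1), torch.arange(-r, r + 1), indexing="ij")
    deltas = torch.stack([dy, dx], dim=-1).reshape(-1, 2).float()  # (ws*ws, 2) grid offsets

    sims = []
    for d in deltas:                                               # small loop over the grid
        coords = centers + d                                       # (H, W, 2) key coordinates
        gx = 2 * coords[..., 1] / (W - 1) - 1                      # normalize for grid_sample
        gy = 2 * coords[..., 0] / (H - 1) - 1
        grid = torch.stack([gx, gy], dim=-1).unsqueeze(0)          # (1, H, W, 2)
        k_s = F.grid_sample(k.permute(2, 0, 1).unsqueeze(0), grid,
                            mode="bilinear", align_corners=True)   # sample key features
        sims.append((q * k_s[0].permute(1, 2, 0)).sum(-1))         # dot product per query
    sims = torch.stack(sims, dim=-1)                               # (H, W, ws*ws)

    top_sims, idx = sims.topk(topk, dim=-1)                        # keep the L best columns
    offsets = flow.unsqueeze(2) + deltas[idx]                      # displacement per query
    return top_sims, offsets

# toy usage: zero flow reduces this to a plain non-local search
q, k = torch.randn(64, 64, 16), torch.randn(64, 64, 16)
s, off = shifted_nls(q, k, torch.zeros(64, 64, 2))
```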
**Aggregation.** The features from the Shifted-NLS are aggregated, and an example method is a weighted sum of non-local patches. The output video is initialized to zero, and each non-local patch is added in parallel (using atomic operators) weighted by a normalized similarity value. For example, writing the offsets as \((\Delta_{h}(i,l),\Delta_{w}(i,l))=\mathbf{F}_{\text{out},L}[i,l]\), each patch's \((p_{h},p_{w})\) pixel is added as \(\mathbf{X}_{\text{out}}[t_{i},h_{i}+p_{h},w_{i}+p_{w}]+=\sum_{l=1}^{L}\sigma(\mathbf{S }_{L})[i,l]\mathbf{V}[t_{i}-1,h_{i}+\Delta_{h}(i,l)+p_{h},w_{i}+\Delta_{w}(i,l)+p_ {w}]\), where \(\sigma(\cdot)\) is the softmax function applied across the columns. Each pixel coordinate is divided by the number of contributing terms to normalize the output. When the patch size is 1, this is logically identical to Guided Deformable Attention (GDA) (Liang et al., 2022b). And while the Shifted-NLS is compatible with GDA, GDA is limited to aggregating features from a single frame. For our Space-Time Attention Network (STAN) architecture, we would like to aggregate features across multiple frames in parallel according to learned weights, similar to PacNet (Vaksman et al., 2021). To implement this logic, we create a module to stack \(L\) patches and apply 3D convolution to reduce the stack across \(L\). Details are in Supplemental Section 7. ### Why are predicted offsets not enough? A Shifted Non-Local Search executes a grid search surrounding a predicted offset to correct spatial inaccuracies. In this section, we explain why even this simple grid search can intuitively outperform small networks by reviewing results in the closely related research area of optical flow. **Millions of parameters for a 6-pixel error.** The best methods for optical flow today, according to the Sintel-Clean benchmark, report an average end-point error of about 1 pixel (Butler et al., Figure 3: **Predicted offsets are only a few pixels away from their optimal location.** This figure shows query points in the query frame (top; black points), and their counterparts in the adjacent frame shifted with optical flow (bottom; blue points). The optical flow points are then corrected by a grid search of size \(41\times 41\) (bottom; yellow points). The spatial similarity between the blue and yellow points show that repurposing optical flow estimates for attention requires only small spatial corrections. The right subfigure plots the distribution of these corrections. The peak value is positioned at the center, indicating no correction is necessary for 3.5% of all cases. The two ellipses form the 68% and 90% confidence intervals. 2012). Meanwhile, the classical pyramid-based method of 2014 reports an error of 6.73 pixels (Sun et al., 2014). Although the average improvement of about 6 pixels is impressive, this gap is closed using sophisticated training methods and network architectures with millions of parameters. Some applications claim to hugely benefit from the subpixel accuracy of these methods. However, it seems unlikely that _each instance_ of an attention module will require its own auxiliary network with millions of parameters to simply predict coordinates with similar features. **Assessing the Error of Optical Flow for Attention.** While the end-point-error is compared against an optical flow groundtruth, we qualitatively find the error to be similar when optical flow is used to estimate locations for attention. 
Using OpenCV's implementation of Farneback's optical flow method from 2003, Figure 3 qualitatively shows the flow's errors are concentrated in a small region surrounding the initial estimate, despite a large search grid of size \(41\times 41\)(Itseez, 2015; Farneback, 2003). This supports our idea to execute a small windowed grid search to correct the predicted offsets. ### An Inplace Computation **Our In-Place Computation.** Our in-place computation of the Shifted Non-Local Search executes each query-key pair's similarity using the indexing from Section 3.2. The term _in-place_ specifies our search does not require storing additional data related to the video. This is similar to NATTEN, but unlike N3Net which requires the construction of a patch database. However, NATTEN's fixed-window search uses tiling to reduce the number of reads from global memory, which does not freely extend to a shifted search. Also, the global memory access pattern of a shifted window search is undesirable, which necessarily increases our method's runtime. Section 4.4 shows despite this issue, our method is 3 - 7x faster than N3Net. In some cases, our search is even faster than NATTEN. **Limitations of NATTEN.** NATTEN is designed to execute a non-local search with a small runtime (Hassani et al., 2023). Their core efficiency comes from reducing the number of reads from global memory by sharing global reads across the threads of a CUDA block. This principle does not freely extend to space-time because the search windows shift to data-dependent, non-overlapping locations, as depicted in Figure 4. Let \(Q=3\) be the tiled size and \(W_{s}\) as the window size; then overlapping windows require only \(Q+W_{s}-1\) global reads while non-overlapping windows require \(Q\cdot W_{s}^{2}\) global reads. The far-right subfigure in Figure 4 plots these two quantities, showing a significant disparity between the two cases. The necessarily increased number of global reads for a space-time search is a fundamental difference from space-only operators. **Limitations of N3Net.** The query (\(\mathbf{Q}\)) and key (\(\mathbf{K}\)) videos can be _unfolded_ to construct a database of patches, written as \(\mathbf{Q}_{P}\) and \(\mathbf{K}_{P}\) with shape \(T(HW/S_{Q}^{2})\times FP^{2}\) and \(T(HW/S_{K}^{2})\times FP^{2}\), respectively. The query (\(S_{Q}\)) and key (\(S_{K}\)) strides must be integer-valued. Normally, operators can batch across large dimensions, such as \(T(HW/S_{K}^{2})\), to control memory consumption. However, the data-dependent indexing across space-time makes batching across the keys impossible. The entire key database must be simultaneously represented in memory since each query patch may access any key patch. If queries are searched in parallel, the memory consumption increases by \(P^{2}\times(1/S_{Q}^{2}+1/S_{K}^{2})\). For example, if \(P=3\) and \(S_{Q}=S_{K}=1\), the memory consumption of the videos increases by a factor of \(18\). Figure 4: **Video Dynamics Challenge Existing Computational Approaches.** Searching across time is computationally challenging because _spatially adjacent patches in one frame have data-dependent spatial locations in adjacent frames_. This figure shows two neighboring locations in one frame (the chick and the kitten) move to separate spatial locations in the next frame. The benefit of NATTEN’s tiling is lost because the search windows no longer overlap (Hassani et al., 2023). 
The rightmost subfigure plots the number of global memory reads, highlighting the lost benefit of tiling. Experiments First, video alignment (Sec 4.1) demonstrates the Shifted Non-Local Search (Shifted-NLS) dramatically improves an attention module's quality. Next (Sec 4.2), RVRT's network is upgraded by replacing the Predicted Offsets with our Shifted-NLS, showing the improved attention module quality translates to improved denoising quality. Finally (Sec 4.3), RVRT's pairwise frame restriction is lifted to a multi-frame network (STAN), which achieves state-of-the-art video denoising results. ### Video Frame Alignment The Shifted Non-Local Search (Shifted-NLS) corrects the small spatial errors of predicted offsets (e.g. optical flow). However, assessing these spatial errors by directly comparing the offsets is misleading. Since the offsets are subsequently used for aggregation, similar offsets can (and do) produce dissimilar outputs. Video alignment provides a ground-truth target for the attention module's final output with standard qualitative and quantitative evaluation criteria. For video alignment, we first execute the search with the queries set to frame \(t\), \(\mathbf{Q}=\mathbf{X}_{\text{in}}[t]\), and keys and values set to frame \(t+1\), \(\mathbf{K}=\mathbf{V}=\mathbf{X}_{\text{in}}[t+1]\). Second, we aggregate using only the most similar patches (top-\(L=1\)). The output should match frame \(t\) of the input, i.e. \(\mathbf{X}_{\text{out}}\approx\mathbf{X}_{\text{in}}[t]\). This experiment uses the first 10 frames from the DAVIS training dataset (Pont-Tuset et al., 2017). When searching and computing the Farneback optical flow, we add a small amount of Gaussian noise (\(\sigma^{2}=15\)) to simulate the training dynamics between the query and key values (Farneback, 2003). Alignment quality is measured as the PSNR between the noise-free aligned and reference images. Both the Shifted-NLS and the Non-Local Search (NLS) methods use our implementation since NATTEN's patch size is fixed to 1 and limited to a search space of \(13\) (\(W_{s}=13\)). Figure 5 compares the alignment quality and runtime of the Shifted-NLS and the NLS as the search space expands. Each point is associated with a spatial window size, \(W_{s}\in\{1,3,11,15,21,27,33\}\). A window of size \(1\) indicates no search. Currently, NATTEN supports window sizes up to \(13\), as indicated by the dotted circles. For the Shifted-NLS, the PSNR plateaus around window size \(11\), while for the NLS it plateaus around \(21\). This matches our intuition that optical flow contains small spatial errors, which our grid search corrects. When the spatial search window is \(11\), the Shifted-NLS yields \(30.60\) dB PSNR while the NLS and the Predicted Offsets yield \(26.63\) and \(24.11\) dB PSNR, respectively. Figure 6 shows our Shifted-NLS method's improvement depends on video motion. Each point is the difference in PSNR between the Shifted-NLS and the NLS for each video in the DAVIS training dataset. When motion is larger than about 3 pixels, Shifted-NLS improves the alignment quality by more than 5 dB PSNR. When the average motion is less than 1 pixel, the Shifted-NLS degrades the search quality. In the case of small motion, the offset values act as noise. Figure 7 shows qualitative examples of the aligned images when the spatial search window is set to \(11\times 11\). The NLS patch size is set to \(1\) to match NATTEN, and the Shifted-NLS patch size and query stride is indicated in the column title. 
The NLS method creates a doubling effect because the search radius cannot compensate for the motion shifts. For example, the first number of the speed limit sign (top row) reads "5" rather than "2". The Shifted-NLS largely removes the doubling effect, but not entirely. When the optical flow is inaccurate, a doubling effect still appears. For example, in the third row, a face appears where only a fence should be visible. The errors from Predicted Offsets create a warping effect similar to psychedelic art or the melting clocks of Salvador Dali. The Shifted-NLS method visually removes the warping effect, replacing wavy edges with sharp ones. Figure 7 also shows the impact of patch size and query stride. A larger patch size reduces noise since the overlapping patches are averaged together. This explains the qualitative difference between the grainy image with patch size 1 and the smoothed image with patch size 3. When the query stride is 2, patches no longer overlap, producing the grainy output (middle row).

### Upgrading Space-Time Attention

This experiment shows that replacing a small neural network with our Shifted Non-Local Search improves denoising quality. Guided Deformable Attention (GDA) uses an auxiliary network to produce offsets for aggregation by transforming an input video clip and optical flow offsets: \(\mathbf{F}_{\text{out}}=\text{Auxiliary Network}(\mathbf{X_{\text{in}}},\mathbf{Y_{\text{in}}},\mathbf{F_{\text{in}}})\). We replace their auxiliary network with our Shifted Non-Local Search: \(\mathbf{S}_{L},\mathbf{F}_{\text{out},L}=\text{Shifted-NLS}(\mathbf{X_{\text{in}}},\mathbf{Y_{\text{in}}},\mathbf{F_{\text{in}}},L)\) with \(L=9\) to match RVRT. The spatial window is \(9\times 9\), the temporal window is fixed to \(1\) by architecture design, the query stride is 1 (\(S_{Q}=1\)), the key stride is \(1/2\) (\(S_{K}=1/2\)), and the patch size is 1. Table 1 shows that the denoising quality improves when using our search method compared to using predicted offsets. The improvement is between \(0.20-0.40\) dB across all noise levels, an increase often attributed to a new architecture.

\begin{table} \begin{tabular}{c c c} \hline \hline \(\sigma\) & Predicted Offsets & Shifted-NLS \\ \hline 10 & 38.69/0.966/0.004 & **38.90/0.967/0.004** \\ 20 & 35.32/0.933/0.013 & **35.58/0.936/0.012** \\ 30 & 33.39/0.902/0.026 & **33.68/0.907/0.024** \\ 40 & 32.02/0.873/0.042 & **32.35/0.881/0.040** \\ 50 & 30.93/0.844/0.062 & **31.30/0.854/0.058** \\ \hline Time (sec) & 23.86 & 25.56 \\ Mem (GB) & 10.06 & 10.06 \\ \hline \hline \end{tabular} \end{table} Table 1: **Upgrading Previous Space-Time Attention Modules.** [PSNR\(\uparrow\)/SSIM\(\uparrow\)/ST-RRED\(\downarrow\)] This table reports denoising results on the DAVIS test dataset. RVRT's GDA module aggregates features according to the input offsets. This table compares using the offsets from an auxiliary network (Predicted Offsets; the default) with a small grid search (Shifted-NLS).

Figure 7: **Comparing alignment quality for different search methods.** This figure uses the indicated column's search method to align the adjacent frame (right) to the reference frame (second from right). The Non-Local Search and the Shifted Non-Local Search use a search space of \(11\times 11\). The bottom-right corner reports the PSNR between the aligned and reference images; higher is better.

### Space-Time Attention Network (STAN)

We integrate the Shifted Non-Local Search into our Space-Time Attention Network (STAN).
The architecture is a simple mixture of the UNet and RVRT networks (Ronneberger et al., 2015). We train the network for video denoising on the DAVIS train-val dataset (Pont-Tuset et al., 2017). We test the network on the DAVIS testing dataset and the Set8 dataset (Tassano et al., 2020). Due to space, we relegate details to Supplemental Section 6.4. Table 2 shows our network achieves state-of-the-art results on video denoising. We note the original RVRT network reports better results, but we re-train RVRT to compare both networks trained for the same number of steps. This reproducibility problem may be due to the computational environment or insufficient training time (see Supplemental Section 6.4). However, we copy RVRT's training procedure for both RVRT and STAN. Our method outperforms all other published video denoising methods, which supports the hypothesis that the Shifted Non-Local Search is a useful module for space-time attention (Arias and Morel, 2018; Tassano et al., 2020; Vaksman et al., 2021).

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \(\sigma^{2}\) & VNLB & FastDVDNet & PaCNet & RVRT (Reproduced)* & STAN* \\ \hline \multirow{6}{*}{Set8} & 10 & **37.26** & 36.44 & 37.06 & 36.66/0.955/0.003 & 37.19/0.960/0.002 \\ & 20 & 33.72 & 33.43 & 33.94 & 33.47/0.918/0.011 & **34.27/0.931/0.007** \\ & 30 & 31.74 & 31.68 & 32.05 & 31.65/0.885/0.022 & **32.58/0.905/0.013** \\ & 40 & 30.39 & 30.46 & 30.70 & 30.38/0.855/0.035 & **31.39/0.880/0.021** \\ & 50 & 29.24 & 29.53 & 29.66 & 29.41/0.829/0.052 & **30.46/0.856/0.030** \\ \hline \multirow{6}{*}{DAVIS} & 10 & 38.85 & 38.71 & 39.97 & 39.29/0.970/0.003 & **40.22/0.976/0.002** \\ & 20 & 35.68 & 35.77 & 36.82 & 36.00/0.942/0.010 & **37.30/0.956/0.007** \\ & 30 & 33.73 & 34.04 & 34.79 & 34.12/0.915/0.021 & **35.49/0.937/0.012** \\ & 40 & 32.32 & 32.82 & 33.34 & 32.80/0.891/0.034 & **34.26/0.918/0.020** \\ & 50 & 31.13 & 31.86 & 32.20 & 31.78/0.868/0.050 & **33.26/0.901/0.029** \\ \hline \multicolumn{2}{c}{Time (sec)} & 497.93 & 0.11 & 182.34 & 1.63 & 3.26 \\ \multicolumn{2}{c}{GPU Memory (GB)} & 0.0 & 0.37 & 12.35 & 4.25 & 10.75 \\ \multicolumn{2}{c}{Parameters (\(10^{6}\))} & N/A & 2.4 & 2.9 & 12.8 & 12.1 \\ \hline \hline \end{tabular} \end{table} Table 2: **State-of-the-art Video Denoising.** [PSNR\(\uparrow\)/SSIM\(\uparrow\)/ST-RRED\(\downarrow\)] This table reports state-of-the-art results on video denoising. *RVRT and STAN explicitly use space-time attention. The runtime and memory usage are recorded using a single 10-frame video of resolution \(480\times 480\). We report reproduced RVRT results with further details in Supplemental Section 6.4.

Figure 8: **Qualitatively Comparing Denoised Outputs.** [PSNR\(\uparrow\)] RVRT and STAN use space-time attention and are trained using the same procedure. STAN recovers more small details than RVRT.

### Computational Benchmarking

This section compares the computation for three non-local search strategies. The Shifted-NLS and N3Net methods execute a space-time search, and NATTEN executes a space-only (unshifted) search. Each benchmark includes a function call to a top-L function for compatibility with existing aggregation methods. Figures 9 and 10 report benchmark results of each search method executed on a \(5\)-frame video with varying resolution. Due to NATTEN's tiling, its query stride is fixed at \(1\). The other methods vary the query stride to \(1\) or \(2\), as indicated by the dotted and solid lines, respectively.

Figure 9 reports memory consumption for an input video with \(192\) features (such as in RVRT) using images with resolution \(152\times 152\). Both N3Net and Shifted-NLS use a patch size of \(7\). N3Net requires dramatically more memory than the Shifted-NLS module since it explicitly constructs a database of patches. When the spatial window size is 3, N3Net consumes 12.43 GB of memory, while the Shifted-NLS consumes 0.33 GB of memory. NATTEN's memory consumption grows from about \(0.75\) GB to \(0.97\) GB. NATTEN searches pairs of frames, so parallel searching across space-time requires stacking frames of the temporal search window along the batch dimension\({}^{1}\).

Footnote 1: We note this stacking of frames is not measured in NATTEN's runtime.

Figure 10 reports runtimes using images with resolution \(320\times 320\), a search window of size \(9\), and a patch size of \(1\). As expected, the Shifted-NLS module is slower than NATTEN when the query stride is fixed to \(1\). For example, when the number of features is 32, the runtime of NATTEN and Shifted-NLS is about \(36.77\) and \(84.36\) milliseconds (ms), respectively. N3Net is far slower than both methods; N3Net's runtime for a query stride of \(1\) is too slow to plot clearly (about 490 ms). Notably, the Shifted-NLS is faster than NATTEN when the query stride can be set to \(2\). For 32 features, the runtime of the Shifted-NLS drops from \(84.36\) to \(27.95\) ms. However, the search quality will degrade as the query stride increases, so the utility of this faster runtime depends on the application.

## 5 Conclusion

This paper presents a Shifted Non-Local Search module for space-time attention. We first observe that the errors of offsets predicted by auxiliary networks require only small spatial corrections. Rather than train a large-scale network with millions of parameters, we propose using a small grid search to correct these errors. Our in-place implementation of the Shifted Non-Local Search avoids excessive memory spikes with a competitive runtime. Correcting the small spatial errors corresponds to over a 3 dB improvement when aligning adjacent frames. We show this translates to improved denoising quality within denoising networks. As this module is designed for learning temporal representations, future work can apply this method to additional computer vision tasks such as instance segmentation and video synthesis.
2309.09969
Prompt a Robot to Walk with Large Language Models
Large language models (LLMs) pre-trained on vast internet-scale data have showcased remarkable capabilities across diverse domains. Recently, there has been escalating interest in deploying LLMs for robotics, aiming to harness the power of foundation models in real-world settings. However, this approach faces significant challenges, particularly in grounding these models in the physical world and in generating dynamic robot motions. To address these issues, we introduce a novel paradigm in which we use few-shot prompts collected from the physical environment, enabling the LLM to autoregressively generate low-level control commands for robots without task-specific fine-tuning. Experiments across various robots and environments validate that our method can effectively prompt a robot to walk. We thus illustrate how LLMs can proficiently function as low-level feedback controllers for dynamic motion control even in high-dimensional robotic systems. The project website and source code can be found at: https://prompt2walk.github.io/ .
Yen-Jen Wang, Bike Zhang, Jianyu Chen, Koushil Sreenath
2023-09-18T17:50:17Z
http://arxiv.org/abs/2309.09969v2
# Prompt a Robot to Walk with Large Language Models ###### Abstract Large language models (LLMs) pre-trained on vast internet-scale data have showcased remarkable capabilities across diverse domains. Recently, there has been escalating interest in deploying LLMs for robotics, aiming to harness the power of foundation models in real-world settings. However, this approach faces significant challenges, particularly in grounding these models in the physical world and in generating dynamic robot motions. To address these issues, we introduce a novel paradigm in which we use few-shot prompts collected from the physical environment, enabling the LLM to autoregressively generate low-level control commands for robots without task-specific fine-tuning. Experiments across various robots and environments validate that our method can effectively prompt a robot to walk. We thus illustrate how LLMs can proficiently function as low-level feedback controllers for dynamic motion control even in high-dimensional robotic systems. The project website and source code can be found at: prompt2walk.github.io. ## I Introduction Large language models (LLMs) pre-trained on internet-scale data [5, 33, 32, 9, 45] have demonstrated impressive results in various fields, e.g., natural language processing [29, 28], computer vision [31], code generation [7], etc. Building upon the success of LLMs, there is a surging interest in utilizing LLMs for embodied agents [1, 46], aiming to harness the power of foundation models in the physical world [2]. Towards this goal, significant progress has been made [4, 3, 10]. However, there are some remaining challenges. 1) Even though LLMs are trained with broad data at scale, the dataset does not incorporate data from the physical world, making it challenging to ground LLMs in robot control. 2) While foundation models have been widely used in a pre-training and fine-tuning paradigm for robotics applications, there could be a paradigm shift to few-shot learning in light of the progress of the natural language field [5]. 3) Most recent language-guided robot control research showcases mainly quasi-static robot motions. It remains uncertain whether LLMs can generate dynamic robot behaviors without a low-level controller interface or without relying on predefined motion primitives. In this paper, we want to raise the intriguing question of whether LLMs can function as low-level controllers for achieving dynamic tasks like robot walking? This requires us to address the challenges mentioned above. We do this by exploring a new paradigm that leverages few-shot prompts with a large language model, i.e., GPT-4, to directly output robot control actions. We hypothesize that, given prompts collected from the physical environment, LLMs can learn to interact with it in context, even though they are purely trained on text data. Moreover, we do not perform any fine-tuning of the LLM with task-specific robot data. We adopt a few-shot prompt approach as widely adopted in the natural language field. Furthermore, we consider a dynamic control task of robot walking. A visualization of the paradigm is illustrated in Fig. 1. We term this paradigm as prompting a robot to walk. Grounded in a physical environment, LLMs output target joint positions to allow a robot to walk given a designed text prompt, which includes a description prompt and an observation and action prompt. 
Consequently, the robot is able to interact with the physical world through the generated control actions and get the observations from the environment. As a summary, the contributions of our work are as follows. * Our main contribution is a framework for prompting a robot to walk with LLMs, where LLMs act as a feedback policy. * We propose and systematically analyze a text prompt design that enables LLMs to in-context learn robot walking behaviors. * We extensively validate our framework on different robots, various terrains, and multiple simulators. ### _Related Work_ **Large Language Models for Robotics.** Large language models have recently become a popular tool for robotics including manipulation [1, 10, 3, 23, 17, 53, 16], locomotion [41, 54], navigation [15, 37, 14, 12], etc. Additionally, there are some recent research efforts to develop language agents [52, 39] using LLMs as the core. Fig. 1: **Prompt a Robot to Walk.** Grounded in a physics-based simulator, LLMs output target joint positions to enable a robot to walk given a text prompt, which consists of a description prompt and an observation and action prompt. With a focus on the intersection between LLMs and low-level robot control, [47] trains a specialized GPT model using robot data to make a robot walk. However, our work directly uses the standard GPT-4 model without any fine-tuning. More interestingly, [26] instructs LLMs as general pattern machines and demonstrates a stabilizing controller for a cartpole in a sequence improvement manner [55]. Inspired by this work, we prompt LLMs to serve as a feedback policy for high dimensional robot walking. Note that our work prompts a feedback policy without iterative improvement, whereas the cartpole controller in [26] is gradually improved as a return-conditioned policy. In addition, we explore textual descriptions to enhance the policy. **Learning Robot Walking.** Learning-based approaches have become promising methods to enable robots to walk. Deep reinforcement learning (RL) has been successfully applied to real-world robot walking [40, 19]. In [30], agile walking behavior is attained by imitating animals. To deploy a robot in complex environments, a teacher-student framework is proposed in [21, 20]. Moreover, a robot can learn to walk in the real world [38, 49]. Furthermore, the learning-based approach can enable dynamic walking behaviors [50, 22, 25, 51, 6, 56]. More recently, LLMs have emerged as a useful tool for helping create learning-based policies for robot walking. In [41], contact patterns are instructed by human commands through LLMs. In [54], LLMs are utilized to define reward parameters for robot walking. In contrast to previous LLM-based robot walking work, we use LLMs to directly output low-level target joint positions. ## II Method In this section, we present our method of prompting a robot to walk with large language models. The overall framework is summarized in Fig. 2. ### _Data Collection_ A proper text prompt is one of the keys to utilizing LLMs for robot walking. We initialize the prompt based on an existing controller, which could be either model-based or learning-based. From the existing controller, we collect observation and action pairs. The observation consists of sensor readings, e.g., IMU and joint encoders, while the action represents the target joint positions. It is important to note that the collected data serves as an initial input for LLM inference. 
As the robot begins to interact with the environment and acquire new observations, the initial offline data will be replaced by LLM outputs. Thus, we consider this data collection phase as an initialization step.

### _Prompt Engineering_

Directly feeding observation and action pairs to LLMs often results in actions that do not achieve a stable walking gait. We next illustrate the prompt engineering step to guide LLMs to function as a feedback policy. Our prompt design, as shown in Fig. 3, can be classified into two categories: description prompt and observation and action prompt.

**Description Prompt.** The description prompt begins with \(P_{TD}\), a precise description of the robot walking task. This is then followed by control design details, e.g., the policy's operating frequency, ensuring that the LLM aligns the actions to this frequency. Next, we specify the format and meaning of both observations and actions in \(P_{IO}\), allowing LLMs to understand the context of the inputs and actions. Then, an explicit enumeration of the joint order of our robot is provided in \(P_{JO}\) to guide the LLM to comprehend the robot configuration. Additionally, we specify in the prompt \(P_{AI}\) that all the values LLMs encounter are not raw data. Instead, these numerical values have been normalized. Lastly, the prompt offers an overview of the entire control pipeline in \(P_{CP}\), granting the LLM a macro perspective on how the individual components interlink and enabling it to process them accordingly. It is crucial to highlight that, unlike in classic learning-based and model-based walking controllers, text serves an important role in the LLM policy.

Fig. 2: **LLM Policy Overview.** We first collect data from an existing controller to initialize the LLM policy. Then, we design a text prompt including a description prompt and an observation and action prompt. The LLM outputs normalized target joint positions that are then tracked by a PD controller. After each LLM inference loop, the prompt is updated with the historical observations and actions. In our experiment, the LLM is supposed to run at \(10\) Hz, although the simulation has to be paused to wait for LLM inference, and the PD controller executes at \(200\) Hz.

**Observation and Action Prompt.** A sequence of observation and action pairs \(P_{Hist}\) is used as a prompt. These pairs are generated from the recent history of the robot walking trajectory. This procedure is widely used in RL-based robot walking controllers, where it allows the neural network to infer the dynamics as well as the privileged environment information. With a sequence of observation and action prompts, LLMs can in-context learn the dynamics and infer a reactive control action, where the observation prompt serves as the feedback signal. Note that both observations and actions are converted to text format to interface with LLMs. LLMs often struggle to comprehend the significance of numeric values, particularly floating point and negative numbers. Inspired by the prompt design in [26], we adopt a normalization approach for numerical values. Specifically, we use a linear transformation to map all the potential numeric values to non-negative integers, ranging from \(0\) to \(200\). We hypothesize that LLMs are mostly trained with text tokens, and thus they are not sensitive enough to numerical values for robot control.

### _Grounding LLMs_

In order to make LLMs useful for robot walking control, we need to ground them in a physical environment.
We now introduce the pipeline that allows LLMs to interact with a robot and an environment. We use a physics-based simulator from which the LLM receives observations and to which it sends actions. The output of the LLM is the target joint positions, which are tracked by a set of joint Proportional-Derivative (PD) controllers running at a higher frequency. This joint-level PD control design is standard for learning-based robot walking control. While this pipeline is run entirely in simulation in this work, it could be implemented on hardware if the inference speed of LLMs were fast enough. ## III Results Having introduced the methodology for prompting a robot to walk, we next detail our experiments for validation. Through these experiments, we aim to answer the following questions: Q1: Can we prompt a robot to walk with LLMs? Q2: How should we design prompts for robot walking? Q3: Does the proposed approach generalize to different robots and environments? ### _Setup_ We choose an A1 quadruped robot as our testbed [34]. It is a high-dimensional system with 12 actuated joints. To initialize the LLM policy, we train an RL policy in Isaac Gym [24] using Proximal Policy Optimization (PPO) [36]. This training is based on the training recipe from [35]. Subsequently, we ground the LLM in MuJoCo [43], a high-fidelity, physics-based simulator. Our LLM policy operates at \(10\) Hz [11], and its output is tracked by a low-level joint PD controller at \(200\) Hz. The PD gains for this controller are set at \(20\) and \(0.5\), respectively. After evaluating various LLMs including GPT-4 [28], GPT-3.5-Turbo, text-davinci-003 [27], Alpaca [42], Vicuna 2 [8], and Llama 2 [44], we found that only GPT-4 is powerful enough to in-context learn a robot walking behavior using our designed prompt. During the experiments, we set GPT-4's temperature to \(0\) to minimize the variance. ### _Robot Walking_ Utilizing the proposed approach, we successfully prompt an A1 quadruped robot to walk with GPT-4. The LLM policy not only enables walking on flat ground but also allows the robot to walk over uneven terrain, as shown in Fig. 8. Due to the unexpected roughness, the robot almost falls over, but the LLM policy makes it recover to a normal posture and keep walking forward. To balance the token limit of the LLM against the size of \(P_{Hist}\), we execute the policy at \(10\) Hz. However, this leads to a walking gait that is noticeably worse than that of many RL-based walking policies running at \(50\) Hz or higher. Fig. 4 shows the target joint trajectories for the front left leg when the robot walks on uneven terrain for \(10\) seconds. The blue lines depict the trajectories produced by the LLM policy. As a comparison, the orange lines show the trajectories generated by an RL policy. Note that both trajectories take the same observation as input. The robot acts with the action generated by the LLM and then gets the next observation from the environment. Although the LLM policy is initialized with the RL policy, the resulting joint trajectories are noticeably different. One prompt example for A1 robot walking is shown in Fig. 3, where we use historical observations and actions for the past \(50\) steps. The prompt is specially designed and normalized as described in Sec. II-B.
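To make this loop concrete, the following is a minimal Python sketch of one LLM-policy cycle as described above: observations and actions are normalized to integers in \([0,200]\) (Sec. II-B), a prompt is assembled from the description prompt and the last \(50\) observation-action pairs, the LLM is queried at \(10\) Hz, and its denormalized output is tracked by a \(200\) Hz joint PD controller with gains \(20\) and \(0.5\). The `env` and `query_llm` interfaces, the value ranges `OBS_RANGE`/`ACT_RANGE`, and the parsing of the reply are hypothetical placeholders, not the exact implementation used in the paper.

```python
from collections import deque
import numpy as np

LLM_HZ, PD_HZ = 10, 200            # policy and joint-controller rates (Sec. III-A)
KP, KD = 20.0, 0.5                 # joint PD gains (Sec. III-A)
HIST_LEN = 50                      # observation/action pairs kept in the prompt
OBS_RANGE = (-5.0, 5.0)            # hypothetical value range used for normalization
ACT_RANGE = (-1.0, 1.0)            # hypothetical normalized joint-position range

def to_int(x, lo, hi):
    """Linearly map values in [lo, hi] to integers in [0, 200] (Sec. II-B)."""
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    return np.round(200.0 * (x - lo) / (hi - lo)).astype(int)

def from_int(n, lo, hi):
    return lo + (hi - lo) * np.asarray(n, dtype=float) / 200.0

def build_prompt(description, history, obs_int):
    lines = [description]
    for o, a in history:
        lines.append("observation: " + " ".join(map(str, o)))
        lines.append("action: " + " ".join(map(str, a)))
    lines.append("observation: " + " ".join(map(str, obs_int)))
    lines.append("action:")        # ask the LLM to complete the next action
    return "\n".join(lines)

def run_llm_policy(env, query_llm, description, seconds=10.0):
    history = deque(maxlen=HIST_LEN)   # in the paper this is initialized from an existing controller
    obs = env.observe()
    for _ in range(int(seconds * LLM_HZ)):
        obs_int = to_int(obs, *OBS_RANGE).tolist()
        reply = query_llm(build_prompt(description, history, obs_int))  # stand-in for a GPT-4 call
        act_int = [int(tok) for tok in reply.split()][:12]              # 12 target joint positions
        q_target = from_int(act_int, *ACT_RANGE)
        for _ in range(PD_HZ // LLM_HZ):                                # track the target at 200 Hz
            q, qdot = env.joint_state()
            env.apply_torque(KP * (q_target - q) - KD * qdot)
        history.append((obs_int, act_int))
        obs = env.observe()
```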
Based on this A1 robot walking experiment, we can answer Question Q1 that a robot can be prompted to walk with LLMs. ### _Description Prompt_ We perform \(5\) experiments to analyze the impact of individual components in the description prompt. In each experiment, we provide observation and action prompts (\(P_{Hist}\)). For evaluation, we consider two metrics: normalized walking time and success rate. To clarify, the term "normalized walking time" denotes the proportion of time a robot can walk before it falls. The success rate is measured by the percentage of the trials that the robot is able to finish, where each trial lasts for \(10\) seconds and we have \(5\) trials for each experiment. In the design of the first experiment (E1), we exclude the description prompt entirely (only \(P_{Hist}\)). In the Fig. 4: **Target Joint Position Trajectories.** The LLM and RL-based target joint position trajectories for the front left leg, including hip, thigh, and calf joints. The LLM trajectory is depicted in blue and the RL trajectory is shown in orange. Fig. 5: **Description Prompt Comparison.** E1. No description prompt. (i.e. only \(P_{Hist}\)) E2. \(P_{IO}\): meaning of input and output space. E3. \(P_{IO}+P_{JO}\): meaning of input and output space and joint order. E4. \(P_{TD}+P_{IO}+P_{JO}+P_{CP}\): task description, meaning of input and output space, joint order, and full control pipeline. E5. Full description prompt. second experiment (E2), we only provide the meaning of input and output space (\(P_{IO}\)). Additionally, we include the joint order (\(P_{IO}+P_{JO}\)) in the third experiment (E3). In the fourth experiment (E4), we incorporate prompts such as task description, meaning of input and output space, joint order, and the full control pipeline (\(P_{TD}+P_{IO}+P_{JO}+P_{CP}\)). For the fifth experiment (E5), we employed a complete description prompt. The experimental result is demonstrated in Fig. 5, where we can see that the full description prompt has the highest normalized walking time and success rate. Based on the results from the first experiment, without a description prompt (E1), there is a minimal likelihood of LLMs prompting a robot to walk. ### _Observation and Action Prompt_ In our subsequent investigation, we assess the influence of the observation and action prompt \(P_{Hist}\) on walking performance. Inspired by the RL-based walking control design, we first study how historical observations and actions affect the performance. We conduct a series of experiments, testing observation and action lengths of \(0,10,30\), and \(50\), all while using the description prompt. To clarify, a length of \(0\) means only a description prompt. In our experiments, the LLM is queried at \(10\) Hz, so a length of \(50\) means \(5\) seconds in wall time that covers several walking steps for a quadruped robot. The experimental result is shown in Fig. 6. It is evident that increased lengths of observations and actions correlate with enhanced performance, both in terms of normalized walking time and success rate. With lengths ranging from \(0\) to \(50\), the LLM token consumptions are approximately \(348,1738,4518\), and \(7298\), respectively. As we use the GPT-4 model with an 8k token length, we are not able to explore longer lengths of observations and actions. In addition to comparing various lengths for observation and action prompts, we also investigate the effect of different observation prompts. 
Our choices for observations are influenced by the RL policy, as we initialize our LLM policy using a reinforcement learning-based approach. We evaluated five scenarios: (E1) no observation; (E2) only base linear velocity and angular velocity; (E3) only joint position and joint velocity; (E4) a combination of base linear velocity, angular velocity, joint position, and joint velocity; (E5) full observation. The comparison result is shown in Fig. 7. The full observation listed in Fig. 3 achieves the best performance. However, it remains unclear which specific observation component is the most influential. It is noteworthy that the observation in the LLM policy has a dimension of \(33\) while the observation space in the RL policy has a dimension of \(48\), which indicates that the LLM policy can use less information to make a robot walk compared to an RL policy. Furthermore, we study the effect of how we normalize the observation and action prompt. We benchmark \(5\) different normalization methods: (E1) original values without any normalization; (E2) normalize to positive values; (E3) normalize to integers; (E4) discard the decimal part and then normalize the integer part to positive integer values; (E5) normalize to positive integer values. Due to the limited token size of GPT-4, we opt for a compact observation prompt consisting of base linear and angular velocities. The benchmark result is summarized in TABLE I. Unlike other experiments, to emphasize the performance in different normalization methods, we extend the walking time to \(20\) seconds. We found that the normalization of the observation and action prompt is crucial as LLMs might parse a value of observation or action into several text tokens. Based on the investigation of the text prompt, we can answer Question Q2: how should we design prompts for robot walking? We believe a synergy between description prompt and observation and action prompt is the key to utilizing LLMs to prompt a robot to walk. Fig. 6: **Observation and Action Length Comparison.** We conduct experiments for historical observation and action lengths as \(0,10,30\), and \(50\). With lengths ranging from \(0\) to \(50\), the LLM token consumptions are approximately \(348,1738,4518\), and \(7298\), respectively. Fig. 7: **Observation Choice Comparison.** E1. No observation. E2. Base linear velocity and angular velocity. E3. Joint position and joint velocity. E4. Combine observations from experiments 2 and 3. E5. Full observation. ### _Different Robots_ In addition to the A1 robot, we further validate our approach with a different robot: the ANYmal robot [18]. It is different from the A1 robot in terms of size, mass, mechanical design, etc. In this experiment, we use Isaac Gym instead of MuJoCo as our simulator to see the effect of change in the simulation environment. Following the same approach, we train a \(10\) Hz RL policy for initialization. With the proposed text prompt, we successfully prompt the ANYmal robot to walk on flat ground. Snapshots of ANYmal walking are shown in Fig. 8. Having been validated by the A1 and ANYmal experiments over various terrains, we believe that the proposed method generalizes to different robots and environments, which is our answer to Question Q3. ## IV Discussion After validating our approach with experimental results, we provide a discussion on what we learned in this study and the limitations of the current approach. 
### _Text Is Another Interface for Control_ It is interesting to note that the description prompt plays a crucial role in utilizing LLMs to prompt a robot to walk, which indicates that text is another interface for control. The existing control approaches for robot walking do not rely on any task description in textual form. If we follow the convention of RL or model-based control that uses numerical values such as observations and actions, LLMs have a low chance of making a robot walk, as demonstrated in Fig. 5. Instead, with a proper design of the description prompt, LLMs can achieve a high success rate for walking. We hypothesize that a description prompt provides a context for LLMs to interpret the observations and actions properly. While we provide a prompt example for robot walking, the prompt design for robot motions is still under-explored. ### _LLMs In-Context Learn Differently_ Our experiments demonstrate that LLMs in-context learn to prompt a robot to walk. Initially, we hypothesized that LLMs might learn a robot walking behavior in a manner akin to behavior cloning [48]. However, as shown in Fig. 4, the joint trajectories generated by the LLM policy are sufficiently different from those generated by an RL policy. Moreover, the LLM policy shows a more regular pattern which is not present in the RL policy. If we pay attention to the left calf joint trajectory, the pattern coincides with the biomechanics study of animal walking [13]. Thus, we believe that LLMs in-context learn differently to enable a robot to walk. ### _Limitations_ While this work takes us closer towards utilizing LLMs for robot walking control, there are some limitations in the current framework. First, the current prompt design is fragile. Minor alterations in the prompt can dramatically affect the walking performance, as described in our experiments. In general, we still lack a good understanding of how to design a reliable prompt for robot walking. Secondly, as we design and test the prompt based on a specific initialization policy, our prompt design inevitably becomes biased towards this policy. Although we have tested our framework with several different RL initialization policies, it is possible that some initialization policies do not work with our prompt. Another major limitation is that we are only able to carry out simulation experiments instead of hardware experiments. One reason is the low inference speed of GPT-4. Our pipeline requires LLMs to be queried at \(10\) Hz, which is much faster than the actual inference speed through OpenAI API. Thus, we have to pause the simulation to wait for the output of GPT-4. Furthermore, due to the limited token size, we have to choose a low-frequency policy, i.e., \(10\) Hz, to maximize the time horizon of the context. As a side note for future research, this work is expensive and roughly costed \(\$2,000\) US dollars for querying OpenAI API to test the prompt. ## V Conclusions In this paper, we presented an approach for prompting a robot to walk. We use LLMs with text prompts, consisting of a description prompt and an observation and action prompt collected from the physical environment, without any task-specific fine-tuning. Our experiments demonstrate that LLMs can serve as low-level feedback controllers for dynamic motion control even in high-dimensional robotic systems. We further systematically analyzed the text prompt with extensive experiments. Furthermore, we validated this method across various robotic platforms, terrains, and simulators. 
\begin{table} \begin{tabular}{l c c c c c} Experiment & E1 & E2 & E3 & E4 & E5 \\ \hline NWT(\(\uparrow\)) [\%] & 0.137 & 0.086 & 0.700 & 0.504 & **0.721** \\ Success Rate(\(\uparrow\)) [\%] & 0.0 & 0.0 & **0.6** & 0.2 & **0.6** \\ No. Input Tokens(\(\downarrow\)) & 4947 & 5117 & **3135** & **3135** & **3135** \\ No. Output Tokens(\(\downarrow\)) & 62 & 62 & **38** & **38** & **38** \\ \hline \end{tabular} \end{table} TABLE I: **Normalization Method Benchmark. E1. Original values. E2. Normalize to positive values. E3. Normalize to integer values. E4. Discard the decimal and then normalize the integer to positive integer values. E5. Normalize to positive integer values. NWT is normalized walking time.** Fig. 8: **Robot Walking Visualization. Top: A1 robot is prompted to walk on uneven terrain in MuJoCo, where the LLM policy can make it recover from terrain disturbance. Bottom: ANYmal robot is prompted to walk on flat ground in Isaac Gym using the same approach.**
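For reference, a small sketch of how the two evaluation metrics reported above (normalized walking time and success rate, Sec. III-C and Table I) can be computed from logged trials; the per-trial log format is a hypothetical assumption rather than the paper's actual logging code.

```python
def evaluate(trials, trial_length=10.0):
    """trials: walking durations in seconds before a fall (hypothetical log format).
    Returns normalized walking time (NWT) and success rate over the trials."""
    nwt = sum(min(t, trial_length) for t in trials) / (trial_length * len(trials))
    success_rate = sum(t >= trial_length for t in trials) / len(trials)
    return nwt, success_rate

# Example: 5 trials of 10 s each, as in Sec. III-C
print(evaluate([10.0, 10.0, 4.2, 10.0, 7.5]))
```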
2309.14086
A selection of PID type controller settings via LQR approach for two-wheeled balancing robot
The problem of PID type controller tuning has been addressed in this paper. In particular, a method of selection of PD settings based on the solution of linear-quadratic optimisation problem using the energy criterion has been investigated. Thus, the possibility of transforming optimal settings of the linear-quadratic regulator into the settings of the controller in the classical control system has been given. The presented methodology has been used during synthesis of control system for a two-wheeled balancing robot. Finally, the performance of the proposed control system has been validated by simulation in Matlab-Simulink environment with the use of a two-wheeled balancing robot model.
Krzysztof Laddach, Mateusz Czyżniewski, Rafał Łangowski
2023-09-25T12:29:53Z
http://arxiv.org/abs/2309.14086v1
# A selection of PID type controller settings via LQR approach for two-wheeled balancing robot ###### Abstract The problem of PID type controller tuning has been addressed in this paper. In particular, a method of selection of PD settings based on the solution of linear-quadratic optimisation problem using the energy criterion has been investigated. Thus, the possibility of transforming optimal settings of the linear-quadratic regulator into the settings of the controller in the classical control system has been given. The presented methodology has been used during synthesis of control system for a two-wheeled balancing robot. Finally, the performance of the proposed control system has been validated by simulation in Matlab/Simulink environment with the use of a two-wheeled balancing robot model. ## I Introduction A control problem of a non-linear and unstable plant is one of the important challenges in control engineering. A lot of mechanical dynamic systems are the example of such plants. Moreover, these plants often are under-actuated, i.e. they have more controlled variables than the number of control inputs. Hence, from this point of view they belong to SIMO (single input multiple output) systems. This class of systems is widely used for modelling plants like, e.g., different kind of pendulums, e.g., [1, 2, 3, 4] and balancing robots and vehicles, e.g., [5, 6, 7]. It is known that the one of main aims of control of these systems is their stabilisation at a given equilibrium point. Due to, i.a., the non-linear dynamics of considered plant, it is possible to distinguish two main approach to stabilisation (control) problem. The first approach is based on non-linear control algorithms, e.g., sliding mode control and fuzzy logic controller [8, 9]. Whereas the second approach uses linear controllers based on a negative feedback loop from controlled or state variables [10, 11, 4, 12]. A widespread approach using controlled variables feedback loop is based on PID type controllers. Therefore, different kind of methods are addressed to selection of PID controller settings. Starting from experimental solutions, through analytical and engineering (based on concepts such as Ziegler-Nichols) selection of controller settings to methods based on solving optimisation tasks [13]. The goal of this work is to provide an alternative method of PID tuning, that uses optimum setting values (gains) of the controller based on the state feedback [14]. Clearly, the selection of PID type controller settings is based on the solution of the linear-quadratic optimisation task. Such an approach can be found in the literature, primarily for given applications of SISO (single input single output) plants, e.g., [15]. Whereas, in this paper the general considerations allow using this technique for a certain class of SIMO plants are presented. The presented work is an extension of the considerations contained in [16]. Firstly, an alternative approach to model linearisation has been used. Secondly, the devised methodology has been used for control system design purposes of a two-wheeled balancing robot. In this paper, the considered robot can be described as a mobile (passive) single-arm inverted pendulum and it is shown in Fig. 1. The derived control systems with PD controllers as well as linear-quadratic regulator (LQR) have been implemented and validated in Matlab/Simulink environment using a balancing robot model [17]. 
## II Problem formulation The general class of considered SIMO systems represents mechanical dynamic systems which can be modelled using Newtonian (classical), Lagrangian or Hamiltonian mechanics and it can be given as follows [18]: \[\begin{cases}\tilde{\mathbf{\Theta}}(t)&=-\mathbf{C}(\dot{\mathbf{\Theta}}(t),\mathbf{\Theta}( t))\dot{\mathbf{\Theta}}(t)-\mathbf{G}(\mathbf{\Theta}(t))\\ &+Q(\mathbf{\Theta}(t))u(t),\\ \dot{\mathbf{\Theta}}(t_{0})&=\dot{\mathbf{\Theta}}_{0}\\ \mathbf{\Theta}(t_{0})&=\dot{\mathbf{\Theta}}_{0},\end{cases} \tag{1}\] where: \((\cdot)\) and \((\cdot)\) are the first and second derivative with respect to \(t\), respectively; \(t\in\mathbb{R}_{+}\cup\{0\}\) is time, \(\mathbb{R}\), \(\mathbb{R}_{+}\) are a real number field and its positive part, respectively; \(\mathbf{\Theta}(t)\in\mathbb{R}^{q}\) denotes a vector of linear and angular displacements, where Fig. 1: The considered two-wheeled balancing robot. \(q\in\mathbb{N}_{+}\) is a number of controlled outputs and \(\mathbb{N}_{+}\) is a positive part of a natural number set; \(u(t)\in\mathbb{R}\) denotes a control input; \(\mathbf{C}(\hat{\mathbf{\Theta}}(t),\mathbf{\Theta}(t))=\left[\mathbf{C}_{1}(\cdot)^{\mathrm{T}} \ldots\mathbf{C}_{q}(\cdot)^{\mathrm{T}}\right]^{\mathrm{T}}\), \(\mathbf{G}(\mathbf{\Theta}(t))=\left[G_{1}(\cdot)\ldots G_{q}(\cdot)\right]^{\mathrm{T}}\) and \(\mathbf{Q}(\mathbf{\Theta}(t))=\left[Q_{1}(\cdot)\ldots Q_{q}(\cdot)\right]^{\mathrm{T}}\) signify non-linear matrix and vector value smooth functions which arguments are \(\mathbf{\Theta}(t)\) and \(\hat{\mathbf{\Theta}}(t)\); \(\mathbf{\Theta}_{0}\), \(\hat{\mathbf{\Theta}}_{0}\) denote initial conditions. It should be added that, from a physical construction point of view, the systems determined through (1), which are considered in this paper, have the following features [3, 19]: * The considered plant is composed of rigid body elements called 'links' which are interconnected by particular 'joints'. * Movement is considered in \(q\) dimensions. * The linear motion of the plant is strictly connected with a proper state variable and it is not constrained. * The angular motion of the plant is strictly connected with a proper state variable and it is constrained to the set \([-180\,\ 180]\)\([^{\circ}]\). * The movement between plant joints or between plant and surface can be considered as either frictional or not smooth. Moreover, the following assumption is formulated for control system design purposes: _Assumption 1_.: Actuator system is located only at the base joint, i.e. the one, which is attached to the either stationary surface or plant moving element, e.g., a cart or wheels. It is well-known, due to the fact that linear control algorithm can operate in a given operating (equilibrium) point of the non-linear process, it is needed to derive a new, affine (quasi linear) model of the considered system. In general, this model is an approximation of the non-linear vector field in any point \(P(\mathbf{x}_{\mathrm{e}},\mathbf{u}_{\mathrm{e}})\), which transverse an integral curve assigned to the particular state initial condition \(\mathbf{x}_{0}\). It is possible to distinguish at least two approaches to devise of linear model [18]. In this work, the approach introduced in [20] has been used. This methodology is based on solving a quadratic optimisation problem. 
Thereby, problem of applying of incremental variables, which are used in the second main approach basing on the Taylor series expansion, can be omitted and centre of coordinate system is not relocated (redeployed from zero to operating point). Hence, this approach is very convenient due to need of stabilising the considered SIMO class of mechanical dynamical system in one of the equilibrium points. ### _Affine form of linearised model_ Taking into account that a number of elements of vectors \(\mathbf{\Theta}(t)\) and \(\hat{\mathbf{\Theta}}(t)\) is equal to \(n\triangleq 2q\in\mathbb{N}_{+}\), the state space vector \(\mathbf{x}(t)\in\mathbb{X}\), where \(\mathbb{X}\subset\mathbb{R}^{n}\), can be defined as follows: \[\mathbf{x}(t)\triangleq\left[\mathbf{\Theta}^{\mathrm{T}}(t)\quad\dot{\mathbf{\Theta}}^{ \mathrm{T}}(t)\right]^{\mathrm{T}},\ \mathbf{x}_{0}\triangleq\left[\mathbf{\Theta}_{0}^{\mathrm{T}}\quad\dot{\mathbf{\Theta}}_{ 0}^{\mathrm{T}}\right]^{\mathrm{T}}. \tag{2}\] Therefore, considering (2), the model (1) yields [21, 22]: \[\begin{cases}\dot{\mathbf{x}}(t)&=\mathbf{F}(\mathbf{x}(t))+\mathbf{G}(\mathbf{x}(t))u(t)\\ \mathbf{x}(t_{0})&=\mathbf{x}_{0}\end{cases}, \tag{3}\] where \(\mathbf{F}(\mathbf{x}(t))\) is an affine (drift) component of non-linear dynamics and \(\mathbf{G}(\mathbf{x}(t))\) is a non-linear component associated with control input \(u(t)\). Taking into account, a particular selection of state variables ensures to the following variant of non-linear affine form (3): \[\begin{cases}\dot{x}_{1}(t)&=x_{q+1}(t)\\ &\vdots\\ \dot{x}_{q}(t)&=x_{n}(t)\\ \dot{x}_{q+1}(t)&=-\mathbf{C}_{1}(\mathbf{x}(t))\left[x_{q+1}(t)\ \ \ldots\ \ x_{n}(t)\right]^{\mathrm{T}}\\ &-G_{1}\left(\left[x_{1}(t)\ \ \ldots\ \ x_{q}(t)\right]^{\mathrm{T}}\right)\\ &+Q_{1}\left(\left[x_{1}(t)\ \ \ldots\ \ x_{q}(t)\right]^{\mathrm{T}} \right)u(t)\\ &\vdots\\ \dot{x}_{n}(t)&=-\mathbf{C}_{q}(\mathbf{x}(t))\left[x_{q+1}(t)\ \ \ldots\ \ x_{n}(t)\right]^{\mathrm{T}}\\ &-G_{q}\left(\left[x_{1}(t)\ \ \ldots\ \ x_{q}(t)\right]^{\mathrm{T}} \right)\\ &+Q_{q}\left(\left[x_{1}(t)\ \ \ldots\ \ x_{q}(t)\right]^{\mathrm{T}} \right)u(t)\\ \end{cases}. \tag{4}\] Regarding to general form from (3), for the first \(q\) state equations of (4), the particular components are equal to: \(F_{j}(\mathbf{x}(t))=x_{q+j}(t)\) and \(G_{j}(\mathbf{x}(t))=0\). In turn, for the rest of state equations of (4), the particular components are equal to: \(F_{q+j}(\mathbf{x}(t))=-\mathbf{C}_{j}(\mathbf{x}(t))\left[x_{q+1}(t)\ \ \ldots\ \ x_{n}(t) \right]^{\mathrm{T}}-G_{j}\left(\left[x_{1}(t)\ \ldots\ \ x_{q}(t)\right]^{\mathrm{T}}\right)\) and \(G_{q+j}(\mathbf{x}(t))=Q_{j}\left(\left[x_{1}(t)\ \ \ldots\ \ x_{q}(t)\right]^{\mathrm{T}}\right)\), where \(j=\overline{1,q}\). It is worth noting that (4) can be decomposed to the linear part and non-linear affine part. The linear approximation of (4) can be derived as follows. 
For any equilibrium point \(\mathbf{x}_{\mathrm{e}}=\left[x_{\mathrm{e}_{1}}\ldots x_{\mathrm{e}_{n}}\right]^{\mathrm{T}}\in\mathbf{\mathcal{X}}_{\mathrm{e}}\) of the non-linear system (4), where \(\mathbf{\mathcal{X}}_{\mathrm{e}}\) is the set of all possible equilibrium points such that \(\mathbf{\mathcal{X}}_{\mathrm{e}}=\left\{\mathbf{x}_{\mathrm{e}}\in\mathbb{X}\colon x_{\mathrm{e}_{j}}=0,\ x_{\mathrm{e}_{q+j}}\neq 0\right\}\), which is a component of the operating point \(P(\mathbf{x}_{\mathrm{e}},u_{\mathrm{e}})\), where always \(u_{\mathrm{e}}=0\), matrices \(\mathbf{A}=\left[\mathbf{a}_{1}\ \ldots\ \mathbf{a}_{n}\right]^{\mathrm{T}}\in\mathbb{R}^{n\times n}\) and \(\mathbf{B}\in\mathbb{R}^{n\times 1}\) are derived by applying the following formulas [20, 23]: \[\mathbf{B}=\left.\mathbf{G}(\mathbf{x}(t))\right|_{\mathbf{x}_{\mathrm{e}}}, \tag{5}\] \[\mathbf{a}_{i}=\left.\left[\nabla F_{i}(\mathbf{x}(t))\right]\right|_{\mathbf{x}_{\mathrm{e}}}+\frac{\left.F_{i}(\mathbf{x}(t))\right|_{\mathbf{x}_{\mathrm{e}}}-\mathbf{x}_{\mathrm{e}}^{\mathrm{T}}\left.\left[\nabla F_{i}(\mathbf{x}(t))\right]\right|_{\mathbf{x}_{\mathrm{e}}}}{||\mathbf{x}_{\mathrm{e}}||_{2}^{2}}\,\mathbf{x}_{\mathrm{e}}, \tag{6}\] where: \(\nabla(\cdot)\) denotes the gradient of a function; \(||\cdot||_{2}\) is the Euclidean norm of a vector; \(i=\overline{1,n}\). Thus, using (5) and (6), the following linear model of the non-linear mechanical affine system (4) can be obtained: \[\begin{cases}\dot{\mathbf{x}}(t)&=\underbrace{\begin{bmatrix}\mathbf{N}\\ \mathbf{A}_{1}\end{bmatrix}}_{\mathbf{A}}\mathbf{x}(t)+\underbrace{\begin{bmatrix}\mathbf{0}^{q\times 1}\\ \mathbf{B}_{1}\end{bmatrix}}_{\mathbf{B}}u(t)\\ \mathbf{c}(t)&=\underbrace{\begin{bmatrix}\mathbf{I}^{q\times q}&\mathbf{0}^{q\times q}\end{bmatrix}}_{\mathbf{E}}\mathbf{x}(t)\\ \mathbf{x}(t_{0})&=\mathbf{x}_{0}\end{cases}, \tag{7}\] where: \(\mathbf{A}_{1}\in\mathbb{R}^{q\times n}\), \(\mathbf{N}=\begin{bmatrix}\mathbf{0}^{q\times q}&\mathbf{I}^{q\times q}\end{bmatrix}\in\mathbb{R}^{q\times n}\) are relevant parts of the \(\mathbf{A}\) matrix; \(\mathbf{B}_{1}\in\mathbb{R}^{q\times 1}\) is a part of the \(\mathbf{B}\) matrix; \(\mathbf{I}^{q\times q}\) is an identity matrix of size \(q\times q\); \(\mathbf{0}^{(\cdot)\times(\cdot)}\) is a zero matrix of an adequate size; \(\mathbf{E}\in\mathbb{R}^{q\times n}\) denotes an output matrix. Clearly, taking into account that the first \(q\) state equations of (4) are not directly influenced by the control input (signal) \(u(t)\) and their dynamical character is strictly linear, the appropriate part of matrix \(\mathbf{B}\) equals \(\mathbf{0}^{q\times 1}\). Therefore, matrix \(\mathbf{B}\) of the linearised system (7) must be equal to: \[\mathbf{B}=\left.\mathbf{G}(\mathbf{x}(t))\right|_{\mathbf{x}_{\mathrm{e}}}=\begin{bmatrix}\mathbf{0}^{q\times 1}\\ \mathbf{B}_{1}\end{bmatrix}, \tag{8}\] where \(\mathbf{B}_{1}=\left.\mathbf{Q}\left(\begin{bmatrix}x_{1}(t)&\ldots&x_{q}(t)\end{bmatrix}^{\mathrm{T}}\right)\right|_{\mathbf{x}_{\mathrm{e}}}\). The similar considerations for matrix \(\mathbf{A}\) are as follows. For the first \(q\) state equations of (4), the first \(q\) elements \(\mathbf{a}_{i}\) of matrix \(\mathbf{A}\) are equal to the particular base vectors \(\mathbf{e}_{q+j}\in\mathbb{R}^{n}\) belonging to the Euclidean vector space equipped with a Cartesian coordinate system.
Hence, for every pair of \(i\) and \(j\), which fulfil \(i=j\) the following holds: \[\begin{split}\mathbf{a}_{i}&=\mathbf{e}_{q+j}+\frac{\left.\left[F_{j} (\mathbf{x}(t))\right]\right|_{\mathbf{x}_{\text{e}}}-\mathbf{x}_{\text{e}}^{\mathrm{T}} \mathbf{e}_{q+j}}{\mathbf{x}_{\text{e}}^{\mathrm{T}}\mathbf{x}_{\text{e}}}\mathbf{x}_{\text{e} }\\ &=\mathbf{e}_{q+j}+\frac{x_{\text{e}_{q+j}}-x_{\text{e}_{q+j}}}{\mathbf{x}_{ \text{e}}^{\mathrm{T}}\mathbf{x}_{\text{e}}}\mathbf{x}_{\text{e}}=\mathbf{e}_{q+j}\end{split}, \end{split} \tag{9}\] where \(x_{\text{e}_{q+j}}\) is the \(q+j\)-th element of the \(\mathbf{x}_{\text{e}}\) vector. Therefore, it is certain that the 'upper' part of the matrix \(\mathbf{A}\) is equal to \(\mathbf{N}=\begin{bmatrix}\mathbf{0}^{q\times q}&\mathbf{I}^{q\times q}\end{bmatrix}\). For another \(q\) state variables (6) is used explicitly and leading to \(\mathbf{A}_{1}=\begin{bmatrix}\mathbf{a}_{q+1}&\ldots&\mathbf{a}_{n}\end{bmatrix}^{ \mathrm{T}}\). Finally, obtained matrices \(\mathbf{N}\) and \(\mathbf{A}_{1}\) are aggregated and the matrix \(\mathbf{A}\) of (7) is ensured. _Remark 1_.: It is worth noting that the above-presented linear approximation of (4) leads to the same form of \(\mathbf{A}\) and \(\mathbf{B}\) matrices as the Taylor series expansion. However, above method needs putting some general assumptions. If for any equilibrium point \(||\mathbf{x}_{\text{e}}||_{2}=0\), \(\mathbf{x}_{\text{e}}\) has to be transformed to the new \(\mathbf{x}_{\text{e}}\triangleq\mathbf{x}_{\text{e}}+\mathbf{e}_{\text{e}}\), where \(\mathbf{e}_{\text{e}}\in\mathbb{R}^{n}\) is a vector with Euclidean norm: \(||\mathbf{e}_{\text{e}}||_{2}\to 0\). This transformation provides to \(||\mathbf{x}_{\text{e}}||_{2}\neq 0\) and its justification is straightforward. _Remark 2_.: Due to the fact that the model (7) is derived for control system design purposes, it is obvious that the pair \((\mathbf{A},\mathbf{B})\) has to be controllable [14]. Taking into account, that considered class of systems, has always the same structure of mathematical linear model (7), Kalman controllability matrix is a square matrix denoted by \(\mathbf{M}_{\text{c}}\in\mathbb{R}^{n\times n}\). It should be noticed that for some similar class of systems to (7), pair \((\mathbf{A},\mathbf{B})\) is structurally controllable, if certain particular conditions are fulfilled [19]. These conditions are ensured by the assumption 1. Clearly, due to fact that matrix \(\mathbf{A}_{1}\) has non zero elements, using inductive understanding, it is easy to shown that the system (7) is structurally controllable. ## III Synthesis of the control system Taking into account that linear model of system (7) can be used for synthesis of the controller based on the state feedback, according to the topic of this work, it is needed to prove that exist equivalence between structure of the PID type controller and above-mentioned regulator state space. More specifically, the possibility of selection of \(q\) PD controllers settings via the State Feedback Regulator (SFR) and Feed Forward Regulator (FFR) is proved in this section. The general structure of SFR and FFR and the proposed equivalent structure of \(q\) PD controllers are presented in Fig. 2. 
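Before writing out the control laws, it may help to see the linearisation of Section II-A in executable form. The sketch below is a hypothetical Python illustration (not the authors' Matlab code): the analytic gradients in (6) are replaced by central finite differences, and, consistent with (9), the correction term multiplies \(\mathbf{x}_{\mathrm{e}}\) and uses the squared Euclidean norm in the denominator.

```python
import numpy as np

def linearise(F, G, x_e, eps=1e-6):
    """Approximate the matrices A and B of Eq. (7) from Eqs. (5)-(6).

    F, G : callables returning the drift and input vector fields of the affine form (3)
    x_e  : equilibrium point (shifted by a small offset if its norm is zero, cf. Remark 1)
    """
    x_e = np.asarray(x_e, dtype=float)
    n = x_e.size
    B = np.asarray(G(x_e), dtype=float).reshape(n, 1)              # Eq. (5)
    F_e = np.asarray(F(x_e), dtype=float)
    A = np.zeros((n, n))
    for i in range(n):
        # central-difference gradient of F_i at x_e
        grad = np.array([(F(x_e + eps * np.eye(n)[k])[i]
                          - F(x_e - eps * np.eye(n)[k])[i]) / (2.0 * eps)
                         for k in range(n)])
        # Eq. (6): a_i = grad F_i|x_e + (F_i(x_e) - x_e . grad) / ||x_e||^2 * x_e
        A[i] = grad + (F_e[i] - x_e @ grad) / (x_e @ x_e) * x_e
    return A, B
```

For the considered class (4), the first \(q\) rows of the returned \(\mathbf{A}\) reduce to \(\begin{bmatrix}\mathbf{0}^{q\times q}&\mathbf{I}^{q\times q}\end{bmatrix}\) and the first \(q\) rows of \(\mathbf{B}\) to zero, in agreement with the derivation above.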
The aggregated control law of SFR and FFR regulators is as follows [14]: \[\begin{split} u(t)&\triangleq u_{\text{x}}(t)+u_{\text{ ref}}(t);\\ u_{\text{x}}(t)&\triangleq-\mathbf{K}\mathbf{x}(t);\ u_{\text{ ref}}(t)\triangleq\mathbf{K}_{\text{ref}}\mathbf{c}_{\text{ref}}(t),\end{split} \tag{10}\] where: \(u_{\text{x}}(t)\), \(u_{\text{ref}}(t)\) are the SFR's and FFR's control signals; \(\mathbf{K}\in\mathbb{R}^{1\times n}\) is the SFR's gain matrix; \(\mathbf{K}_{\text{ref}}\in\mathbb{R}^{1\times q}\) is the FFR's gain matrix; \(\mathbf{c}_{\text{ref}}(t)\in\mathbb{R}^{q}\) is a vector of the reference signals. The control law (10) ensures that in the steady state, which in this case of control problem it can be understood as reaching an equilibrium point: \(\lim_{t\to\infty}\mathbf{c}(t)\to\mathbf{c}_{\text{ref}}(t)\). In turn, the structure of \(q\) PD controllers have to generate one dimensional control signal \(u(t)\) and the control law can be written as [13]: \[u(t)=\mathbf{K}_{\text{p}}\mathbf{e}(t)+\mathbf{K}_{\text{d}}\dot{\mathbf{e}}(t)=\sum_{j=1}^{q} \left[K_{\text{p}_{j}}e_{j}(t)+K_{\text{d}_{j}}\dot{e}_{j}(t)\right], \tag{11}\] where: \(\mathbf{e}(t)\triangleq\mathbf{c}_{\text{ref}}(t)-\mathbf{c}(t)\in\mathbb{R}^{q}\) is a control error; \(\mathbf{K}_{\text{p}}=\begin{bmatrix}K_{\text{p}_{1}}&\ldots&K_{\text{p}_{q}} \end{bmatrix}\in\mathbb{R}^{1\times q}\) and \(\mathbf{K}_{\text{d}}=\begin{bmatrix}K_{\text{d}_{1}}&\ldots&K_{\text{d}_{q}} \end{bmatrix}\in\mathbb{R}^{1\times q}\) denote the vectors of proportional and derivative gains of \(q\) PD controllers, respectively. Fig. 2: The general structures of: a) PD controllers, b) SFR and FFR. The control error derivative with respect to \(t\) is equal to: \[\dot{\mathbf{e}}(t)=\dot{\mathbf{e}}_{\rm ref}(t)-\dot{\mathbf{c}}(t)=\dot{\mathbf{c}}_{\rm ref}(t )-\mathbf{E}\dot{\mathbf{x}}(t). \tag{12}\] Taking into account, that reference trajectories are constant over time, the vector \(\dot{\mathbf{c}}_{\rm ref}(t)\) equals zero vector. Therefore, by substituting (12) into (11), the control law is given as: \[\begin{split} u(t)&=\mathbf{K}_{\rm p}\mathbf{c}_{\rm ref} (t)-\mathbf{K}_{\rm p}\mathbf{E}\mathbf{x}(t)-\mathbf{K}_{\rm d}\mathbf{E}\dot{\mathbf{x}}(t)\\ &=\mathbf{K}_{\rm p}\mathbf{c}_{\rm ref}(t)-\begin{bmatrix}\mathbf{K}_{\rm p }&\mathbf{K}_{\rm d}\end{bmatrix}\mathbf{E}\begin{bmatrix}\mathbf{x}^{\rm T}(t)&\dot{\mathbf{x }}^{\rm T}(t)\end{bmatrix}^{\rm T}.\end{split} \tag{13}\] Taking into account the definition of state variable vector and also general form of linear model (7), it is obvious that \(\mathbf{E}\mathbf{x}(t)=\begin{bmatrix}x_{1}(t)&\ldots&x_{q}(t)\end{bmatrix}^{\rm T}\) and \(\mathbf{E}\dot{\mathbf{x}}(t)=\begin{bmatrix}x_{q+1}(t)&\ldots&x_{n}(t)\end{bmatrix}^{ \rm T}\). Thus, (13) can be rewritten as: \[u(t)=\mathbf{K}_{\rm p}\mathbf{c}_{\rm ref}(t)-\begin{bmatrix}\mathbf{K}_{\rm p}&\mathbf{K}_{ \rm d}\end{bmatrix}\mathbf{x}(t). \tag{14}\] By comparing control laws (14) and (10), the settings of PD controllers structure is obtained from the following equations: \[\mathbf{K}_{\rm ref}=\mathbf{K}_{\rm p},\ \mathbf{K}=\begin{bmatrix}\mathbf{K}_{\rm p}&\mathbf{K}_{ \rm d}\end{bmatrix}. \tag{15}\] Hence, the equivalence between the controller based on the state feedback and the PID type controller has been proved. ## IV Case study - a two-wheeled balancing robot A balancing robot is a single-axle, twin-track vehicle which centre of mass is above the axis of the road wheels (see Fig. 1). 
It is an example of autonomous mobile constructions, which belong to the class of considered SIMO systems. ### _Model of the two-wheeled balancing robot_ A non-linear state-space mathematical model of a two-wheeled balancing robot which was derived in [17] is given as follows: \[\begin{cases}\dot{x}_{1}(t)=x_{3}(t),\dot{x}_{2}(t)=x_{4}(t),\\ (I_{\rm n}+m_{\rm n}l^{2})\dot{x}_{3}(t)=-m_{\rm n}l\dot{x}_{4}(t)\cos(x_{1}(t ))+\frac{2kk_{\rm n}k_{\rm n}}{Rr}x_{4}(t)\\ -\frac{2k_{\rm n}k_{\rm n}}{R}x_{3}(t)+m_{\rm n}gl\sin(x_{1}(t))-\frac{2k_{\rm n }}{R}u(t),\\ (\frac{2lk}{r^{2}}+2m_{\rm k}+m_{\rm n})\dot{x}_{4}(t)=\frac{2kk_{\rm n}k_{\rm n }}{Rr}x_{3}(t)-\frac{2k^{2}k_{\rm n}k_{\rm n}}{Rr^{2}}x_{4}(t)\\ -m_{\rm n}l\dot{x}_{3}(t)\cos(x_{1}(t))+m_{\rm n}lx_{3}^{2}(t)\sin(x_{1}(t))+ \frac{2lk_{\rm n}}{Rr}u(t),\end{cases} \tag{16}\] where: \(x_{1}(t)\) [\({}^{\circ}\)] is the angular displacement (tilt); \(x_{2}(t)\) [m] denotes the linear displacement; \(x_{3}(t)\) [\({}^{\circ}\)/s] stands for the angular velocity; \(x_{4}(t)\) [m/s] signifies the linear velocity; \(u(t)\) [V] is the voltage applied to the DC motor. Using the methodology presented in [3], model (16) can be rewritten into the ODE form resembling to (3): \[\begin{cases}\dot{x}_{1}(t)&=x_{3}(t)\\ \dot{x}_{2}(t)&=x_{4}(t)\\ \dot{x}_{3}(t)&=\frac{H_{3}(\mathbf{x}(t))}{M(\mathbf{x}(t))}+\frac{P_{3}(\mathbf{x}(t))}{ M(\mathbf{x}(t))}u(t)\\ \dot{x}_{4}(t)&=\frac{H_{4}(\mathbf{x}(t))}{M(\mathbf{x}(t))}+\frac{P_{4}(\mathbf{x}(t))}{ M(\mathbf{x}(t))}u(t)\\ \mathbf{x}(t_{0})&=\mathbf{x}_{0},\\ \mathbf{c}(t)&=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\end{bmatrix}\mathbf{x}(t).\end{cases} \tag{17}\] The elements \(H_{3}(\mathbf{x}(t))\), \(H_{4}(\mathbf{x}(t))\), \(P_{3}(\mathbf{x}(t))\), \(P_{4}(\mathbf{x}(t))\) and \(\forall t\ M(\mathbf{x}(t))\neq 0\) are defined as: \[\begin{split}& H_{3}(\mathbf{x}(t))=\alpha_{6}\alpha_{1}+\alpha_{7} \alpha_{3},\ H_{4}(\mathbf{x}(t))=\alpha_{6}\alpha_{3}+\alpha_{7}\alpha_{2},\\ & P_{3}(\mathbf{x}(t))=\alpha_{4}\alpha_{1}+\alpha_{5}\alpha_{3},\ P_{4}(\mathbf{x}(t))= \alpha_{4}\alpha_{3}+\alpha_{5}\alpha_{2},\\ & M(\mathbf{x}(t))=\alpha_{1}\alpha_{2}-\alpha_{3}^{2},\end{split} \tag{18}\] where all of the \(\alpha_{(\cdot)}\) are as follows: \[\begin{split}&\alpha_{1}=I_{\rm n}+m_{\rm n}l^{2},\ \alpha_{2}=2\frac{I_{\rm k}}{r^{2}}+2m_{\rm k}+m_{\rm n},\\ &\alpha_{3}=-m_{\rm n}l\cos(x_{1}(t)),\ \alpha_{4}=-2\frac{k_{\rm m}}{R},\ \alpha_{5}=2\frac{kk_{\rm m}}{Rr},\\ &\alpha_{6}=2x_{4}(t)\frac{kk_{\rm e}k_{\rm m}}{Rr}-2x_{3}(t) \frac{k_{\rm e}k_{\rm m}}{R}+m_{\rm n}gl\sin(x_{1}(t)),\\ &\alpha_{7}=2x_{3}(t)\frac{kk_{\rm e}k_{\rm m}}{Rr}-2x_{4}(t) \frac{k^{2}k_{\rm e}k_{\rm m}}{Rr^{2}}+m_{\rm n}l\sin(x_{3}(t))^{2}.\end{split} \tag{19}\] The parameter values of the model (19) are given as: * the moment of inertia of the robot construction, * the moment of inertia of the robot wheel, * the mass of the robot, * the mass of the robot wheel, * the distance to the robot centre of gravity, * the winding resistance, * the wheel radius, * the gear ratio, * the electro-mechanical constant, * the torque constant, * the gravitational acceleration. The detailed derivation of model (19) can be found in [17]. ### _The control problem_ As it has been mentioned above, the aim of the control system of the two-wheeled balancing robot is its stabilisation at the given equilibrium point, which equals \(\mathbf{x}_{\rm e}=\begin{bmatrix}0&0&0&0\end{bmatrix}^{\rm T}\) in the considered case. 
This control goal is fulfilled using the proposed linear control algorithms. Hence, taking into account remark 1 it is assumed that \(\mathbf{\varepsilon}_{\rm e}=\mathbf{1}^{4\times 1}\times 10^{-4}\), where \(\mathbf{1}^{(\cdot)\times(\cdot)}\) denotes a matrix with all elements equal one. Therefore, by the use of (5) and (6), model (17) can be rewritten in general form (7) with the matrices \(\mathbf{A}\) and \(\mathbf{B}\) given as: \[\begin{split}&\mathbf{A}=\begin{bmatrix}0&0&1&0\\ 0&0&0&1\\ 1.4188&5.7939\times 10^{-7}&-4.3319&3274.4\\ -0.1128&-6.6786\times 10^{-12}&0.8586&-648.99\end{bmatrix},\\ &\mathbf{B}=\begin{bmatrix}0&0&-628.4856&124.4993\end{bmatrix}^{\rm T}.\end{split} \tag{20}\] The Kalman controllability matrix \(\mathbf{M}_{\rm c}=\begin{bmatrix}\mathbf{B}&\mathbf{A}\mathbf{B}&\mathbf{A}^{2}\mathbf{B}&\mathbf{A}^{3}\mathbf{B} \end{bmatrix}\) can be used to show that the pair \((\mathbf{A},\mathbf{B})\) is controllable. It is ensured because \(\mathbf{M}_{\mathrm{c}}\) is non-singular matrix due to \(\det(\mathbf{M}_{\mathrm{c}})=-1.4517\times 10^{13}\). In order to design the SFR gain matrix, the optimisation approach - LQR has been used. Since the analysis of the selection of values in the diagonal matrices \(\mathbf{Q}_{\mathrm{LQR}}\in\mathbb{R}_{+}^{n\times n}\) and \(R_{\mathrm{LQR}}\in\mathbb{R}_{+}\) has not been the subject of the paper, they have been chosen arbitrarily, with the assumption that the first and second state variables are more significant as: \[\mathbf{Q}_{\mathrm{LQR}}=\text{diag}(100,100,1,1),\ R_{\mathrm{LQR}}=1. \tag{21}\] Hence, by solving the linear-quadratic optimisation problem the state feedback gain matrix has been obtained. Moreover, according to (15) the proportional and derivative gains of the \(q\) PD controllers have been ensured: \[\begin{split}\mathbf{K}&=\begin{bmatrix}\mathbf{K}_{ \mathrm{p}}&\mathbf{K}_{\mathrm{d}}\end{bmatrix}\\ &=\begin{bmatrix}-13.1881&-10.0&-9.3717&-45.1452\end{bmatrix}. \end{split} \tag{22}\] _Remark 3_.: The discrete control law is needed for the physical implementation of PD controllers. This control law includes a low-pass filter with sampling time equals \(T_{\mathrm{s}}=0.1\ [\mathrm{s}]\)[16]. Moreover, due to DC voltage saturation, the limitation of the control signal to \(u(t)\in[-12\,\ 12]\ [\mathrm{V}]\) has been taken into account [17]. ### _Simulation results_ The both control systems (see Fig. 2) with the set of parameters which is shown in subsection IV-A have been implemented and validated in Matlab/Simulink environment. The results of representative simulation experiments are presented in Figs. 3-8. These results present the performance of stabilisation regulators (SFR and PD) and it has been qualitatively assessed. The initial conditions have been arbitrarily selected as \(\mathbf{x}_{0}=\begin{bmatrix}10&0&0&0\end{bmatrix}^{\mathrm{T}}\), which represent 10-degrees tilt the robot from the considered equilibrium point \(\mathbf{x}_{\mathrm{e}}\). ## V Conclusion In this paper, the problem of selection of the PID type controller settings for the class of SIMO mechanical dynamical systems has been investigated. In particular, the possibility of transforming optimal settings of the linear-quadratic regulator (LQR) into the settings of the PD controller has been given. The equivalence of both structures has been shown at the design stage for the continuous realisation of the particular regulators. 
In turn, the discrete algorithms developed for hardware implementation require low-pass filtering and saturation of the control signal, due to the peaking phenomenon caused by numerical differentiation of the control error. However, as the simulations have shown, the discrete controller provides only insignificantly weaker performance compared with the continuous control law. Hence, designing the State Feedback Regulator (SFR) as an optimal regulator (LQR) ensures that the equivalent PD controller also has optimal proportional and derivative gains. ## Acknowledgements The research work was done with funding from the Polish MEiN under the Young Researcher Support Program. The authors wish to express their thanks for this support.
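As a numerical cross-check of the design in Section IV, the following Python sketch solves the continuous-time algebraic Riccati equation for the matrices (20)-(21) and splits the resulting gain into PD settings according to (15). It uses SciPy rather than the Matlab tooling reported in the paper, so the printed gain should be expected to match (22) only up to rounding and the sign convention of the control law \(u_{\mathrm{x}}(t)=-\mathbf{K}\mathbf{x}(t)\).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearised balancing-robot model, Eq. (20); x = [tilt, position, tilt rate, velocity]
A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [1.4188, 5.7939e-7, -4.3319, 3274.4],
              [-0.1128, -6.6786e-12, 0.8586, -648.99]])
B = np.array([[0.0], [0.0], [-628.4856], [124.4993]])

# LQR weights, Eq. (21)
Q = np.diag([100.0, 100.0, 1.0, 1.0])
R = np.array([[1.0]])

# State-feedback gain of the LQR: K = R^{-1} B^T P, with P solving the CARE
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # shape (1, n); compare with Eq. (22)

# Eq. (15): the first q entries are the proportional gains, the remaining q the derivative gains
q = 2
K_p, K_d = K[0, :q], K[0, q:]
K_ref = K_p
print("K   =", np.round(K, 4))
print("K_p =", np.round(K_p, 4), " K_d =", np.round(K_d, 4))
```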
2309.04281
Controlled asymmetric Ising model implemented with parametric micromechanical oscillators
Asymmetric Ising model, in which coupled spins affect each other differently, plays an important role in diverse fields, from physics to biology to artificial intelligence. We show that coupled parametric oscillators provide a well-controlled and fully characterizable physical system to implement the model. Such oscillators are bistable. The coupling changes the rate of interstate switching of an oscillator depending on the state of other oscillators. Our experiment on two coupled micromechanical resonators reveals unusual features of asymmetric Ising systems, including the onset of a probability current that circulates in the stationary state. We relate the asymmetry to the exponentially strong effect of a periodic force on the switching rates of an individual parametric oscillator, which we measure. Our findings open the possibilities of constructing and exploring asymmetric Ising systems with controlled parameters and connectivity.
C. Han, M. Wang, B. Zhang, M. I. Dykman, H. B. Chan
2023-09-08T11:57:58Z
http://arxiv.org/abs/2309.04281v1
# Controlled asymmetric Ising model implemented with parametric micromechanical oscillators ###### Abstract Asymmetric Ising model, in which coupled spins affect each other differently, plays an important role in diverse fields, from physics to biology to artificial intelligence. We show that coupled parametric oscillators provide a well-controlled and fully characterizable physical system to implement the model. Such oscillators are bistable. The coupling changes the rate of interstate switching of an oscillator depending on the state of other oscillators. Our experiment on two coupled micromechanical resonators reveals unusual features of asymmetric Ising systems, including the onset of a probability current that circulates in the stationary state. We relate the asymmetry to the exponentially strong effect of a periodic force on the switching rates of an individual parametric oscillator, which we measure. Our findings open the possibilities of constructing and exploring asymmetric Ising systems with controlled parameters and connectivity. ## I Introduction Parametric oscillator is one of the best-known examples of a bistable system. It has two vibrational states with equal amplitudes and opposite phases [1]. These states emerge when the oscillator eigenfrequency is periodically modulated. They have a period equal to twice the modulation period and can be associated with classical bits or Ising spin states, providing a basis for classical logic operations [2; 3]. Superpositions of the opposite-phase coherent states of an oscillator can also encode a qubit [4; 5]. Coupled parametric oscillators can serve as Ising machines for classical and quantum annealing [6; 7; 8; 9; 10; 11; 12; 13; 14]. Besides computation, various other applications of parametric oscillators have been studied, from force and mass sensing [15; 16] to rare events in classical and quantum systems far from thermal equilibrium [17; 18; 19; 20; 21; 22] and phase transitions into a time-symmetry-broken (time-crystal) state [23; 24; 25]. An important aspect of Ising systems pointed out by Hopfield [26] is the possibility to use coupled spins to model neural networks which memorize multiple patterns. This possibility has been attracting increasing interest over the years, particularly in view of the progress in machine learning [27; 28]. In the Hopfield model the spin coupling energy has the conventional form of \(J_{ij}\sigma_{i}\sigma_{j}\), where \(\sigma_{i},\sigma_{j}\) take on values \(\pm 1\) and \(J_{ij}=J_{ji}\), and the network dynamics can be analyzed using the methods of statistical physics. The model is symmetric in the sense that the effect of spin \(i\) on spin \(j\) is the same as the effect of spin \(j\) on spin \(i\). However, most neuron networks are presumably asymmetric: neuron \(i\) can affect neuron \(j\) stronger than neuron \(j\) affects neuron \(i\). If neurons are associated with spins, one can think formally that \(J_{ij}\neq J_{ji}\) and then the coupling may not be described by the coupling energy. The corresponding model is called an asymmetric Ising model. It has attracted much attention as one of the leading models of neural networks [29; 30; 31; 32; 33; 34] and gene regulatory networks [35] and has been used to describe experiments on neurons, cf. [36; 37] and references therein. In spite of the importance of the asymmetric Ising model, there have been no studies that relate the spin coupling parameters to the parameters of the underlying system. 
Understanding the dynamics of this system enables one to examine to what extent the mapping on coupled spins is adequate, in the first place. Determining the relationship between the parameters of the system and the effective spins is essential for implementing and exploring asymmetric Ising models. In the present paper we demonstrate that coupled parametric oscillators in the presence of noise provide a system that can be described by an asymmetric Ising model. The description is based on the Glauber picture [38] in which the rate of switching between the states of a spin depends on the states of the spins to which it is coupled. In the case of oscillators, the relevant quantity is the rate of switching between the period-two vibrational states of an oscillator that depends on which vibrational states are occupied by other oscillators. We describe the mapping of the oscillators on spins and independently measure the parameters of the system that enter the model. In particular, we measure an important characteristic of driven oscillators in the presence of fluctuations, the logarithmic susceptibility [39], which describes the exponentially strong effect of a periodic force on the switching rates of an individual uncoupled parametric oscillator. The parametric oscillators we study are micro-electro-mechanical resonators modulated close to twice their eigenfrequencies. Such resonators enable exquisite con trol of their eigenfrequencies and the coupling. Since the decay rates of our resonators are small, the modulation needed to excite parametric vibrations is comparatively weak, so that the vibrations are nearly sinusoidal. With micromechnical resonators, we demonstrate that the asymmetric Ising model does not have detailed balance. An immediate consequence is the emergence of a probability current that circulates in the system in the stationary state. We measure this current for a system of two coupled non-identical parametric oscillators. The measurements are in excellent agreement with the theory. We consider the case where the coupling of the oscillators is weak, so that each oscillator still has two stable vibrational states, and their amplitudes and phases are only weakly changed by the coupling. However, the coupling can significantly change the rates of noise-induced switching between the states. To gain an intuitive understanding, consider a Brownian particle in a symmetric double-well potential. Because of thermal fluctuations, the particle switches between the wells with the rate \(W\propto\exp(-\Delta U/k_{B}T)\), where \(\Delta U\) is the barrier height and \(T\) is temperature [40]. If the potential is tilted, the barrier heights are incremented by \(\pm\delta U\) in the opposite wells, breaking the symmetry of the interwell switching rates. The rates acquire extra factors \(\exp(\pm\delta U/k_{B}T)\). Even for a small tilt, the ratio \(\delta U/k_{B}T\) can be large, for low temperatures. In that case the stationary populations of the wells become significantly different. Consider now a set of weakly interacting particles in double-well potentials. A particle exerts force on other particles that depends on which well it occupies. This force tilts the potentials of the other particles and breaks the symmetry of the interwell switching rates, reminiscent of the effect of the spin-spin coupling in the Glauber model. The change of the switching rates of the spins is fully determined by the coupling energy, which in turn depends only on the relative spin orientations. 
For example, for two coupled spins, the change \(\propto\exp(-J_{12}\sigma_{1}\sigma_{2}/k_{B}T)\) is the same for both of them. In other words, the two spins affect each other symmetrically. As we show, the picture extends to coupled parametric oscillators, even though there are no static double-well potentials. However, a major difference is that the oscillators can affect each other asymmetrically. If the oscillators are identical and the coupling is weak, the changes of the switching rates are equal within each pair of coupled oscillators. The system is mapped onto the symmetric Ising model. On the other hand, if the oscillators have different parameters, we show that the coupling-induced changes of the switching rates are different. The picture of a change in potential barriers no longer applies. Instead, the system is mapped onto the asymmetric Ising model. In our system, switching between the period-two vibrational states is activated by noise with controlled intensity, which allows us to fully characterize the switching rates. ## II Results We present experimental results for a system of two micromechanical torsional resonators. They are shown in Fig. 1a. Each resonator consists of a movable polysilicon top plate (\(200\,\mu\mathrm{m}\times 200\,\mu\mathrm{m}\times 3.5\,\mu\mathrm{m}\)) supported by two torsional rods, with two fixed electrodes underneath. The resonators are located side by side. Their vibrations can be excited and detected independently. For resonator \(i\) (\(i=1,2\)), dc voltages \(V_{\mathrm{L},i}^{\mathrm{dc}}\), \(V_{\mathrm{R},i}^{\mathrm{dc}}\) and \(V_{i}^{\mathrm{top}}\) are applied to the left electrode, the right electrode and the top plate respectively. Application of an ac voltage on the left electrode generates a periodic electrostatic torque that excites vibrations of the top plate. The vibrations are detected by measuring the current flowing out of the top plate induced by the capacitance change between the plate and the two underlying electrodes. In this study, only the fundamental modes of torsional vibrations are used. The eigenfrequencies of the resonators are almost identical, with \(\omega_{1}/2\pi\approx 15860.562\) Hz for resonator 1 and \(\omega_{2}/2\pi\approx 15860.598\) Hz for resonator 2; they can be fine tuned by adjusting dc potential difference \(\Delta V_{\mathrm{R},i}=V_{i}^{\mathrm{top}}-V_{\mathrm{R},i}^{\mathrm{dc}}\) between the plate and the corresponding right electrode (Supplementary Note 1). The damping constants are \(\Gamma_{1}/2\pi\approx 0.064\) Hz and \(\Gamma_{2}/2\pi\approx 0.063\) Hz for resonators 1 and 2 respectively. The spring constants of the both resonators are modulated electrostatically together at frequency near \(2\omega_{1}\approx 2\omega_{2}\), leading to parametric excitation of the vibrations. We also inject broadband Gaussian voltage noise for each resonator that leads to occasional switching between the period-two vibrational states. As shown in Fig. 1a, the adjacent edges of the plates form interdigitated comb-shaped electrodes to allow the Figure 1: **Two coupled parametric torsional resonators.****a.** Scanning electron micrograph of two torsional resonators located side-by-side. The scale bar measures \(100\,\,\mu\mathrm{m}\). **b.** Schematic of the actuation scheme and measurement circuitry. Voltages applied to the left electrodes generate the parametric modulation at \(\omega_{p}\), the drive at \(\omega_{p}/2\) and the noise. 
Voltages \(V_{\mathrm{R},i}^{\mathrm{dc}}\) applied to the fixed electrodes allow fine tuning of the resonant frequencies. The dc voltage differences between each top plate and the underlying electrodes leads to an ac current flowing out of the top plate as it rotates. Capacitive coupling between the two plates is controlled by the voltage difference \(V_{1}^{\mathrm{top}}-V_{2}^{\mathrm{top}}\). **c.** Vibration amplitude of resonators 1 (red) and 2 (blue) subjected to identical parametric modulation as functions of \(\omega_{p}/2\). There is no coupling between the resonators and the eigenfrequencies are tuned to be almost equal. plates to couple electrostatically when there is a potential difference \(V_{\text{cpl}}=V_{1}^{\text{top}}-V_{2}^{\text{top}}\) between them. When \(V_{\text{cpl}}\) = 0 V, we verify that there is no coupling between the plates. We keep \(V_{\text{cpl}}\) small as we focus on the regime of the weak coupling that only weakly perturbs the dynamics in the absence of noise. All measurements are performed at room temperature at pressure below 10 \(\mu\)torr. The eigenfrequencies and the coupling between the resonators can be tuned independently (Supplementary Note 1), which is crucial for revealing the features of the asymmetric Ising model. The equations of motion of coupled parametric oscillators have the form \[\ddot{q}_{i}+2\Gamma_{i}\dot{q}_{i}+\omega_{i}^{2}q_{i}+\gamma_{i }q_{i}^{3}+M_{i}^{-1}\sum\limits_{j}^{\prime}V_{ij}q_{j}\] \[=(F_{p}/M_{i})q_{i}\cos\omega_{p}t+\xi_{i}(t). \tag{1}\] For our pair of torsional resonators, \(i=1,2\). The coordinate \(q_{i}\) is the rotation angle of the \(i\)th resonator, \(M_{i}\) is its moment of inertia, \(\gamma_{i}\) is the Duffing nonlinearity parameter, \(F_{p}\) and \(\omega_{p}\) are the amplitude and frequency of the parametric modulation, respectively, and \(\xi_{i}(t)\) is zero-mean Gaussian noise of controlled intensity \(4D_{i}\Gamma_{i}\), \(\langle\xi_{i}(t)\xi_{j}(t^{\prime})\rangle=4D_{i}\Gamma_{i}\delta_{ij}\delta( t-t^{\prime})\). Parameters \(V_{ij}\) are the controlled parameters of the oscillator coupling, with \(V_{ij}\) = \(V_{ji}\). In the experiments on the effect of the coupling, \(D_{i}\) determines the effective temperature of the noise. We set \(D_{1}=D_{2}\). We use resonant modulation, \(|\omega_{p}-2\omega_{i}|\ll\omega_{i}\), which allows us to parametrically excite vibrations even with small \(F_{p}\). In the absence of resonator coupling and noise the two stable vibrational states of \(i\)th resonator are \[q_{i}(\sigma_{i};t)=A_{i}\sigma_{i}\cos[(\omega_{p}/2)t+\varphi_{i}], \tag{2}\] where \(A_{i}\) and \(\varphi_{i}\) are the vibration amplitude and phase, and \(\sigma_{i}=\pm 1\). The values of \(A_{i},\varphi_{i}\) depend on the resonator parameters; for small damping \(|\varphi_{i}|\ll 1\). For brevity, and where it may not cause confusion, we use \(\uparrow\) and \(\downarrow\) for \(\sigma_{i}=1\) and \(\sigma_{i}=-1\), respectively. In what follows we associate the vibrational states (2) with spin states. This association is justified provided the change of these states because of coupling the oscillators to each other, i.e., the change of the amplitudes \(A_{i}\) and phases \(\varphi_{i}\), is small. The weakness of the coupling is thus a major condition of the mapping of the system of oscillators on the system of coupled spins. Classical and quantum noise causes transitions between the states \(\sigma_{i}=\pm 1\) of an isolated oscillator. 
By symmetry, the rates \(W_{i}(\sigma_{i})\) of transitions \(\sigma_{i}\rightarrow-\sigma_{i}\) of the \(i\)th oscillator are the same for the both states. For weak noise, the transitions are rare, \(W_{i}(\sigma_{i})\ll\Gamma_{i}\), and the dependence of the switching rate on the noise intensity is given by the activation law [17; 19]. For classical noise \[W_{i}(\sigma_{i})=C_{i}\exp[-R_{i}(\sigma_{i})/D_{i}] \tag{3}\] where \(R_{i}(\sigma_{i})=R_{i}(-\sigma_{i})\) is the effective activation energy and \(C_{i}\sim\Gamma_{i}\). Activated switching in single parametric oscillators has been measured in a number of systems [20; 18; 23]. In our experiment, the switching rate of each resonator is extracted from the Poisson distribution of the residence times (Appendix B). Due to slight difference in the damping constants, the switching rates for the two resonators are measured to be different. We verify that, when the coupling is zero, in each resonator the two coexisting states with opposite phases are equally occupied and the populations of all 4 states \(\sigma_{1,2}=\pm 1\) are equal (Supplementary Note 3). As seen from Eqs. (1) and (2), if the noise and the coupling of the oscillators are weak, to describe the effect of the coupling to the \(j\)th oscillator on the dynamics of the \(i\)th oscillator, one can replace the coordinate of the \(j\)th oscillator \(q_{j}(t)\) in the equation of motion of the \(i\)th oscillator (1) by \(q_{j}(\sigma_{j};t)\). In this approximation, the \(i\)th oscillator is driven by a force at frequency \(\omega_{p}/2\) exerted by the oscillators to which it is coupled. The force changes when the \(j\)th oscillator switches between its vibrational states. The effect of weak coupling can be understood if one considers the dynamics of an isolated parametric oscillator driven by a weak extra force \(F_{d}\cos[(\omega_{p}/2)t+\phi_{d}]\) that mimics the force from other oscillators [24]. Such force breaks the symmetry of the vibrational states \(\sigma_{i}=\pm 1\). A major consequence of the symmetry lifting for weak force is the change of the switching rates \(W(\sigma_{i})\). To leading order in \(F_{d}\) this change has been predicted [41] to be described by an increment of the activation energy that is linear in \(F_{d}\), \[R_{i}(\sigma_{i})=\bar{R}_{i}+\Delta R_{i}(\sigma_{i}),\quad\Delta R_{i}( \sigma_{i})=\chi_{i}\sigma_{i}F_{d}\cos(\phi_{d}+\delta_{i}) \tag{4}\] Here \(\bar{R}_{i}\) is the value of \(R(\sigma_{i})\) in the absence of the drive. The parameters \(\chi_{i}\) and \(\delta_{i}\) are the magnitude and phase of the _logarithmic susceptibility_, i.e., the susceptibility of the logarithm of the switching rate. They strongly depend on the parameters of the oscillator and the parametric modulation, but are independent of \(F_{d}\) and \(\phi_{d}\)[41]. As seen from Eqs. (3) and (4), for small noise intensity even a weak drive can significantly change the switching rates. It therefore can significantly change the stationary populations of the states \(w_{\text{st}}(\sigma_{i})\): from the balance equation for these populations \(\dot{w}(\sigma_{i})=-W(\sigma_{i})w(\sigma_{i})+W(-\sigma_{i})w(-\sigma_{i})\) we obtain \(w_{\text{st}}(\sigma_{i})=W(-\sigma_{i})/[W(\sigma_{i})+W(-\sigma_{i})]\). A strong population change that periodically depends on phase \(\phi_{d}\) was indeed seen in experiments [42]. 
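As a quick numerical illustration of Eqs. (3) and (4) (the numbers below are invented for illustration, not measured values): an increment \(\Delta R\) that is only a fraction of the noise intensity \(D\) already changes the two switching rates, and hence the stationary populations obtained from the balance equation, by a substantial factor.

```python
import numpy as np

C, R_bar = 1.0, 1.0        # prefactor and bare activation energy, arbitrary units
D = 0.05                   # noise intensity, small compared with R_bar
dR = 0.5 * D               # weak symmetry-breaking increment Delta R, Eq. (4)

# Eq. (3) with R(sigma) = R_bar + sigma*dR, Eq. (4)
W = {s: C * np.exp(-(R_bar + s * dR) / D) for s in (+1, -1)}
w_up = W[-1] / (W[+1] + W[-1])        # stationary population of sigma = +1

print("W(+1)/W(-1)        =", W[+1] / W[-1])        # exp(-2*dR/D) ~ 0.37
print("w_st(+1)/w_st(-1)  =", w_up / (1.0 - w_up))  # exp(+2*dR/D) ~ 2.7
```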
However, the general effect of the linear dependence of \(\log[W(\sigma_{i})]\) on the drive amplitude of a periodic force in bistable systems has not been demonstrated other than in simulations [43]. This effect may be responsible for the deviation of the escape rate from the expected quadratic dependence on the drive amplitude in Josephson junctions [44]. We measure the logarithmic susceptibility of each resonator in our two-resonator system. By setting \(V_{cpl}\) = 0 V we ensure there is no coupling between the two resonators. For each resonator, we apply a resonant drive \(F_{d}\cos[(\omega_{p}/2)t+\phi_{d}]\) on top of the parametric modulation at \(\omega_{p}\). The drive phase \(\phi_{d}\) is chosen to be \(3.3^{\circ}\) so that the results can be compared to the case of coupled oscillators when coupling is later re-introduced. Figure 2a shows the random switches of the phase of resonator 1 as a function of time at a constant \(F_{d}\) of \(1.04\times 10^{-17}\) Nm. The ratio of populations \(w_{\text{st}}(\sigma_{1}=+1)/w_{\text{st}}(\sigma_{1}=-1)\)\(\equiv w_{\text{st}}(\uparrow)/w_{\text{st}}(\downarrow)\) is obtained by measuring the residence time in the two states \(\sigma_{1}=\pm 1\). Figure 2b shows that this ratio deviates from 1 as \(F_{d}\) is increased. Next, the switching rates are measured by fitting to the Poisson distribution of the residence times (Appendix B). Figure 2c shows the effect of \(1/D\) (which mimics the inverse noise temperature) on the logarithm of the ratio of switching rates with the symmetry breaking drive turned on and off. The upper and lower branches represent decrease and increase of the activation energy respectively, corresponding to opposite signs of \(\sigma_{1}\) in Eq. (4). We obtain the increment \(|\Delta R_{1}|\) from the average of the magnitude of the slopes of the two linear fits through the origin. The linear dependence of \(\log[W_{1}(\sigma_{1})/\bar{W}_{1}]\) on \(1/D\) in Fig. 2c confirms that the effect of a weak symmetry-breaking drive is primarily a change \(\Delta R_{1}(\sigma_{1})\) of the activation energy of interstate switching. If \(D\) is small compared to \(|\Delta R_{1}|\), the change of the switching rate can be substantial. As shown in Fig. 2d, \(|\Delta R_{1}|\) is indeed linear in \(F_{d}\) for a weak drive. The factor \(\chi_{1}\cos(\phi_{d}+\delta_{1})\) for resonator 1 is given by the slope of the linear fit (solid red line). Measurements are then repeated for resonator 2 (Supplementary Note 4) to yield \(\chi_{2}\cos(\phi_{d}+\delta_{2})\). In Fig. 2d the measurements are compared with the results of simulations of the switching rate. There is excellent agreement between measurement and the general expressions (3) and (4). However, for stronger drive the dependence of \(\log[W_{i}(\sigma_{i})]\) on \(F_{d}\) becomes nonlinear (Supplementary Note 6). #### iii.1.1 Switching rates in the system of coupled oscillators The above results suggest that, if we now consider coupled oscillators, the rate of switching \(\sigma_{i}\rightarrow-\sigma_{i}\) of the \(i\)th oscillator depends on the states \(\{\sigma_{j}\}\) of the oscillators coupled to it. From Eqs. (2) - (4), for weak coupling it has the form \[W_{i}(\sigma_{i},\{\sigma_{j\neq i}\})=\bar{W}_{i}\exp[-\sum_{j \neq i}K_{ij}\sigma_{i}\sigma_{j}], \tag{5}\] \[K_{ij}=V_{ij}\chi_{i}A_{j}\cos(\phi_{j}+\delta_{i})/D_{i}, \tag{6}\] where \(\bar{W}_{i}=C_{i}\exp(-\bar{R}_{i}/D_{i})\) is the switching rate in the absence of coupling. 
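The extraction of \(\Delta R\) from data such as Fig. 2c amounts to a straight-line fit through the origin of \(\log[W(\sigma_{1})/\bar{W}_{1}]\) against \(1/D\). Below is a small self-contained sketch of that step with synthetic numbers (the values of \(\Delta R\) and \(1/D\) are invented for illustration only).

```python
import numpy as np

rng = np.random.default_rng(0)
dR_true = 3.0e-7                          # hypothetical activation-energy increment (same units as D)
invD = np.linspace(2.0e5, 1.0e6, 8)       # hypothetical values of 1/D from the noise sweep

# synthetic "measurements": log[W(sigma)/W_bar] = -sigma * dR / D, plus scatter
y_up = -dR_true * invD + rng.normal(0.0, 0.01, invD.size)   # sigma = +1 branch
y_dn = +dR_true * invD + rng.normal(0.0, 0.01, invD.size)   # sigma = -1 branch

def slope_through_origin(x, y):
    # least-squares slope of y = a*x with the intercept fixed at zero
    return np.sum(x * y) / np.sum(x * x)

dR_est = 0.5 * (abs(slope_through_origin(invD, y_up)) + abs(slope_through_origin(invD, y_dn)))
print(f"recovered |Delta R| = {dR_est:.2e}   (true value {dR_true:.1e})")
```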
The change of the activation energy \(\Delta R_{i}(\sigma_{i},\{\sigma_{j\neq i}\})\) is equal to \(\sum_{j\neq i}K_{ij}\sigma_{i}\sigma_{j}D_{i}\). Equation (5) has the form of the expression for the switching rates of coupled Ising spins. In the standard Ising model \(K_{ij}\) is given by the ratio of the coupling energy \(J_{ij}\) to \(k_{B}T\)[38]. Therefore \(K_{ij}=K_{ji}\). In our case, if all oscillators are identical, we also have \(K_{ij}=K_{ji}\), as seen from Eq. (6). Therefore the system of coupled identical parametric oscillators maps onto the standard Ising model of coupled spins. Figure 2: **Measurement of the logarithmic susceptibility of a single resonator** Coupling between the two resonators is turned off. We present results for resonator 1 and indicate the states \(\sigma_{1}=1\) and \(\sigma_{1}=-1\) by \(\uparrow\) and \(\downarrow\), respectively. **a.** In the presence of noise, resonator 1 randomly switches between two coexisting vibration states with opposite phase. The two light grey lines are thresholds for identifying phase switches. The dark grey lines represent another choice of threshold. A drive at half the modulation frequency with amplitude \(F_{d}=1.04\times 10^{-17}\) Nm breaks the symmetry and renders the residence times, and thus the stationary populations of the states \(\uparrow\) and \(\downarrow\) different. **b.** The ratio \(w_{\text{st}}(\uparrow)/w_{\text{st}}(\downarrow)\) increases as \(F_{d}\) increases. Circles are measured results for the chosen drive phase \(\phi_{d}=3.3^{\circ}\). The solid line represents theory calculated using the simulated logarithmic susceptibility. Inset: same data shown in semilog scale. **c.** Logarithm of the ratio of the switching rates from states \(\uparrow\) and \(\downarrow\) with the resonant drive turned on, \(W_{1}(\uparrow)\) and \(W_{1}(\downarrow)\) (up and down triangles, respectively), to the rate with no drive \(\bar{W}_{1}=C_{1}\exp(-\bar{R}_{1}/D)\), plotted as a function of \(1/D\). The switching rates are modified by different amounts for the two states according to Eq. (4) The increments of the effective activation energies \(\Delta R_{1}(\uparrow)\) and \(\Delta R_{1}(\downarrow)\) are obtained from the slopes of the linear fits through the origin. **d.** Increment \(|\Delta R_{1}|\) as a function of \(F_{d}\) for resonator 1. The slope of the linear fit through the origin yields \(\chi_{1}\cos(\phi_{d}+\delta_{1})\) defined in Eq. (4). Measurements are shown in red. Numerical simulations are shown in pink. If the oscillators are different, \(K_{ij}\neq K_{ji}\). As \(V_{ij}=V_{ji}\) in Eq. (6), the difference originates from both the vibration amplitudes and logarithmic susceptibilities. For \(K_{ij}\neq K_{ji}\), the system is mapped onto the _asymmetric Ising model_. As seen from the known expressions for the vibration amplitudes and phases as well as the logarithmic susceptibilities (cf. [41]), the difference between \(K_{ij}\) and \(K_{ji}\) can be already large if, for example, the oscillator eigenfrequencies are slightly different: \(|\omega_{i}-\omega_{j}|\ll\omega_{i}\), but the ratio \(|\omega_{i}-\omega_{j}|/\Gamma_{i}\) is not small and, most importantly, the noise intensity is small. The stationary probability distribution \(w_{\rm st}(\{\sigma_{n}\})\) is generally not known for the asymmetric Ising model. An important feature of the model is the lack of detailed balance (Appendix D). It leads to the onset of a probability current in the stationary state. 
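A direct way to see what the asymmetry \(K_{12}\neq K_{21}\) does is to simulate the two-spin kinetics with the rates of Eq. (5). The sketch below (with arbitrary illustrative values of \(\bar{W}_{i}\) and \(K_{ij}\)) runs a continuous-time (Gillespie) Monte Carlo simulation and tallies the time spent in each of the four states; the resulting occupations can be compared with the closed-form stationary distribution quoted in Appendix C, and the unequal forward and backward fluxes anticipate the probability current discussed next.

```python
import numpy as np

rng = np.random.default_rng(1)
Wbar = [1.0, 1.3]                 # bare switching rates W_bar_i, arbitrary units
K = [[0.0, -0.8],                 # K[i][j] = K_ij; K_12 != K_21 makes the model asymmetric
     [-0.4, 0.0]]

def rate(i, s):
    """Switching rate of spin i in configuration s = [s1, s2], Eq. (5)."""
    j = 1 - i
    return Wbar[i] * np.exp(-K[i][j] * s[i] * s[j])

s = [1, 1]
occupancy = {(1, 1): 0.0, (1, -1): 0.0, (-1, 1): 0.0, (-1, -1): 0.0}
t_total = 0.0
for _ in range(200_000):
    r = np.array([rate(0, s), rate(1, s)])
    dt = rng.exponential(1.0 / r.sum())       # waiting time until the next flip
    occupancy[tuple(s)] += dt
    t_total += dt
    i = rng.choice(2, p=r / r.sum())          # which spin flips
    s[i] *= -1

for state, t_state in occupancy.items():
    print(state, round(t_state / t_total, 4))
```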
An elementary transition is a flip of a single spin, with the rate that depends on other spins. The current associated with a flip of the \(i\)th spin for a given configuration of other spins \(\{\sigma_{j\neq i}\}\) is, \[I(\sigma_{i},\{\sigma_{j\neq i}\}\rightarrow-\sigma_{i},\{ \sigma_{j\neq i}\})\] \[=w_{\rm st}(\sigma_{i},\{\sigma_{j\neq i}\})\,W_{i}(\sigma_{i}, \{\sigma_{j\neq i}\})\] \[-w_{\rm st}(-\sigma_{i},\{\sigma_{j\neq i}\})\,W_{i}(-\sigma_{i}, \{\sigma_{j\neq i}\}) \tag{7}\] For symmetric coupling, \(K_{ij}=K_{ji}\), the current (7) is zero (Supplementary Note 7). #### iii.1.2 Measurement of asymmetric coupling constant and probability current We demonstrate the asymmetry in the coupling coefficients and the existence of a probability current using our system of two coupled parametric oscillators (\(i=1\), 2). Weak coupling between the two resonators is introduced by applying \(V_{\rm cpl}\) = 0.3 V. We adjust \(\Delta V_{\rm R,1}\) and \(\Delta V_{\rm R,2}\) to tune the resonant frequencies to be close but non-identical, with \(\omega_{1}-\omega_{2}=0.4\) Hz. The two resonators are subjected to parametric modulation of the same amplitude and the same frequency \(\omega_{p}\). As shown in Fig. 3a, when \(\omega_{p}\) is swept up, resonator 2 undergoes a subcritical bifurcation first, followed by resonator 1. The electrostatic coupling between the two plates favors the configuration where the phases of the resonators are opposite to each other. In the absence of injected noise, resonator 1 adopts a vibration phase opposite to resonator 2 as \(\omega_{p}\) is increased. Correlations in the phase were previously observed in two nanomechanical parametric resonators [16] undergoing supercritical bifurcations. Unlike Ref. [16] where the amplitude increases from zero in a continuous fashion, in our measurement the amplitudes jump sharply from zero in a subcritical bifurcation. Next, we fix \(\omega_{p}\) at 2\(\omega_{2}\) and increase the noise intensity while maintaining the same effective temperatures in the two resonators, \(D_{1}=D_{2}=D\). The noise induces switching of each of the two resonators at random times. We measure the time intervals during which the 4 states are occupied, and obtain the stationary probability distributions \(w_{\rm st}(\sigma_{1},\sigma_{2})\). For brevity we indicate the states \(\sigma=1\) and \(\sigma=-1\) by \(\uparrow\) and \(\downarrow\), respectively, as we also did in Fig. 2. Therefore the 4 states are \(\uparrow\uparrow\), \(\uparrow\downarrow\), \(\downarrow\uparrow\) and \(\downarrow\downarrow\). The areas of the circles in Fig. 3b are proportional to the measured stationary probability distribution. We find that \(w_{\rm st}(\uparrow\downarrow)\) and \(w_{\rm st}(\downarrow\uparrow)\) exceed \(w_{\rm st}(\uparrow\uparrow)\) and \(w_{\rm st}(\downarrow\downarrow)\), consistent with notion that the electrostatic coupling favors opposite vibration phases in the two resonators, so that \(K_{12},K_{21}<0\). The measured \(w_{\rm st}\) are in good agreement with Eq. (11). The change of the state populations comes from the Figure 3: **Asymmetric Ising model implemented with two coupled parametric oscillators****a**. Vibration amplitudes of resonators 1 (red) and 2 (blue), with \(\Delta\omega/2\pi\) = -0.4 Hz and \(V_{cpl}\) = 0.3 V, under identical parametric modulation with no noise added. The arrow marks \(\omega_{p}/2\) for measuring noise-induced switching for the rest of the figure. 
**b.** Switchings between the four states of the two-resonator system in the presence of noise. The areas of the circles are proportional to the measured stationary populations \(w_{\rm st}(\sigma_{1},\sigma_{2})\) (the first arrow from the left refers to \(\sigma_{1}\) and the second arrow refers to \(\sigma_{2}\)). The lengths of the straight arrows between the circles are proportional to the products of the measured switching rates \(W_{i}(\sigma_{1},\sigma_{2})\) and the corresponding populations \(w_{\rm st}(\sigma_{1},\sigma_{2})\). The purple arrow represents the net probability current. **c.** Logarithm of the measured changes of the switching rates due to coupling as a function of \(1/D\). The values of \(\Delta R_{i}(\sigma_{i},\sigma_{j})\equiv-D\log[W_{i}(\sigma_{i},\sigma_{j})/ \bar{W_{i}}]\) are determined by the slopes of the linear fits. The difference between \(|\Delta R_{1}(\sigma_{1},\sigma_{2})|\) and \(|\Delta R_{2}(\sigma_{2},\sigma_{1})|\) for the same pairs \((\sigma_{1},\sigma_{2})\) is identified from the different magnitudes of the slopes. This difference determines the asymmetry of the Ising model. **d.** Dependence of \(|\Delta R_{1}|\) (red) and \(|\Delta R_{2}|\) (blue) on \(V_{cpl}^{2}\) that is proportional to the coupling constant. The values of \(|\Delta R_{i}|\) are the average values of \(|\Delta R_{i}(\sigma_{i},\sigma_{j})|\) for \(\sigma_{i}=\sigma_{j}\) and \(\sigma_{i}=-\sigma_{j}\). The pink and light blue lines are obtained from theory based on the independently simulated logarithmic susceptibilities of individual uncoupled resonators. change of the switching rates. From Eq. (5) applied to two resonators, the rate of switching from the state \(\sigma_{i}\) of resonator \(i\) is changed by \(\exp(-K_{ij})\) if \(\sigma_{j}=\sigma_{i}\), i.e., the phases of the two resonators are almost equal, and by \(\exp(K_{ij})\) if the phases are opposite. For two coupled resonators, there are a total of 8 transitions, as illustrated in Fig. 3b. In the experiment, each of the 8 switching rates is individually measured, by fitting to the Poisson distribution of the residence times. Measurements are performed both before and after the coupling is turned on to give \(\bar{W}_{i}\) and \(W_{i}(\sigma_{i},\sigma_{j})\) respectively [for two resonators, we use the notation \(W_{i}(\sigma_{i},\sigma_{j})\) rather than \(W_{i}(\sigma_{i},\{\sigma_{j}\})\)]. The ratio \(W_{i}(\sigma_{i},\sigma_{j})/\bar{W}_{i}\) represents the modification of the switching rate of resonator \(i\) due to coupling. Figure 3c plots the logarithm of the ratio \(W_{i}(\sigma_{i},\sigma_{j})/\bar{W}_{i}\) for the two resonators as a function of \(1/D\), where red and blue results correspond to switching of resonator 1 and 2 respectively. For the upper branches where the phases are identical, the switching rates are increased due to coupling, and vice versa for the lower branches. The lines are linear fits through the origin from which the change of the activation barriers \(\Delta R_{i}\) can be obtained by taking the negative values of the slopes. We observe that, in agreement with Eq. (5), \[W_{i}(\sigma_{i},\sigma_{j})=W_{i}(-\sigma_{i},-\sigma_{j}) \tag{8}\] within the measurement uncertainty. Therefore in Fig. 3c we show the logarithm of the ratio of the average values of \(W_{i}(\sigma_{i},\sigma_{j})\) and \(W_{i}(-\sigma_{i},-\sigma_{j})\) to \(\bar{W}_{i}\). 
There is a clear difference between the measured values of \(|\Delta R_{1}|\) and \(|\Delta R_{2}|\) for the same sets \((\sigma_{1},\sigma_{2})\). For resonator 1, the slopes measured in Fig. 3c are \(3.6\times 10^{-7}\) N\({}^{2}\)kg\({}^{-2}\)Hz\({}^{-1}\) and \(-3.0\times 10^{-7}\) N\({}^{2}\)kg\({}^{-2}\)Hz\({}^{-1}\) for \(\sigma_{1}=\sigma_{2}\) and \(\sigma_{1}=-\sigma_{2}\), respectively, whereas those for resonator 2 are \(6.6\times 10^{-7}\) N\({}^{2}\)kg\({}^{-2}\)Hz\({}^{-1}\) and \(-5.8\times 10^{-7}\) N\({}^{2}\)kg\({}^{-2}\)Hz\({}^{-1}\) for \(\sigma_{1}=\sigma_{2}\) and \(\sigma_{1}=-\sigma_{2}\), respectively. Averaging the magnitude of the slopes \(\Delta R_{i}(\sigma_{i},\sigma_{j})\) for \(\sigma_{i}=\sigma_{j}\) and \(\sigma_{i}=-\sigma_{j}\) yields \(|\Delta R_{1}|\) exceeding \(|\Delta R_{2}|\) by a factor of 1.7. The difference between \(|\Delta R_{1}|\) and \(|\Delta R_{2}|\) demonstrates that our system of two parametric resonators with different resonant frequencies maps onto the asymmetric Ising model. Figure 3d shows that \(|\Delta R_{1}|\) and \(|\Delta R_{2}|\) are proportional to the square of potential difference \(V_{\rm cpl}\) between the two vibrating plates, with different proportionality constants for the two resonators. The measured values of \(|\Delta R_{i}|\) are compared in Fig. 3d with Eq. (6) evaluated with the numerically simulated values of the logarithmic susceptibility and the vibration amplitudes and phases \(A_{j},\phi_{j}\) independently calculated for each resonator in the absence of coupling. There is good agreement between the entirely independent measurements with the coupling (circles) and the simulations with no coupling [the lines based on Eq. (6)]. In turn, the simulations with no coupling are in excellent agreement with the measurement of the logarithmic susceptibility with no coupling, as seen from Fig. 2. The linear dependence of \(\log[W_{i}(\sigma_{i},\sigma_{j})/\bar{W}_{i}]\) on \(1/D\) in Fig. 3c confirms the proposed mechanism of the strong effect of even weak coupling, provided the noise is also weak. As discussed earlier, a difference between \(|\Delta R_{1}|\) and \(|\Delta R_{2}|\), and hence \(K_{12}\) and \(K_{21}\), implies that detailed balance is broken, giving rise to a net probability current. For two resonators, the stationary probability distribution can be calculated (Appendix C), and then Eq. (7) gives \[I(\uparrow\uparrow\rightarrow\uparrow\downarrow)=\frac{\bar{W}_{1}\bar{W}_{2} }{2}\frac{\sinh(K_{12}-K_{21})}{\bar{W}_{1}\cosh(K_{12})+\bar{W}_{2}\cosh(K_{2 1})}. \tag{9}\] In the experiment, the probability currents are obtained by taking the product of the measured stationary probability distribution and the measured switching rate out of the specific state. They are represented by block arrows in Fig. 3b. The lengths of the arrows are chosen to be proportional to the product of the measured stationary probability distribution and the measured switching rate. Our measurement demonstrates the lack of detailed balance, as evident from the difference in length of each pair of arrows. The magnitude of the net probability current for the four branches are identical within the measurement uncertainty (Supplementary Note 5). As denoted by the purple arrow in Fig. 3b, the net probability current flows in the clockwise direction for \(\omega_{2}-\omega_{1}\) = -0.4 Hz. 
In our system of two coupled resonators with nearly identical damping, the sign and magnitude of the probability current are largely determined by the frequency mismatch \(\Delta\omega=\omega_{2}-\omega_{1}\) if the coupling and the noise intensity are fixed. When \(\Delta\omega\) is changed to 0.4 Hz by adjusting \(V_{\rm R,2}\), we find that the sign of the net probability current is reversed. Figure 4a plots the net probability current averaged over the four branches as a function of \(\Delta\omega\). The line represents the probability current predicted by Eq. (9) with \(K_{12}\) and \(K_{21}\) given by the simulated value of the logarithmic susceptibility of a single resonator using Eq. (6). The difference between \(K_{12}\) and \(K_{21}\), and hence the probability current, can be tuned to zero by choosing \(\Delta\omega\). In our system, choosing \(\Delta\omega\) equal to zero makes the probability current vanish within measurement uncertainty. Detailed balance is restored. The two resonators therefore map onto the symmetric Ising model. The stationary distribution \(w_{\rm st}\) found in the experiment in this case coincides with the standard expression \(w_{\rm st}(\{\sigma_{i}\})\propto\exp(\sum K_{ij}\sigma_{i}\sigma_{j}/2)\) (Supplementary Note 8). We further show in the SM that while \(w_{\rm st}(\uparrow\downarrow)=w_{\rm st}(\downarrow\uparrow)\) exceed \(w_{\rm st}(\uparrow\uparrow)=w_{\rm st}(\downarrow\downarrow)\) due to the coupling, the switching rates, when substituted into Eq. (7), lead to vanishing of the net probability current. ## III Discussion Our results demonstrate that a system of slightly different parametric oscillators provides a long-sought inorganic implementation of an asymmetric Ising model. The parameters of the model are determined by the oscillator parameters, including the eigenfrequencies and the coupling, as well as the amplitude and frequency of the parametric modulation. These parameters can be controlled in a broad range. For oscillators based on micro- and nanomechanical resonators, this opens a way of creating asymmetric Ising networks with variable coupling strength and variable connectivity, a problem of interest for diverse disciplines, from biology to artificial intelligence. Besides these applications, such networks provide a conceptually simple setting for studying features of many-body dynamics away from thermal equilibrium. One of the major generic features is the lack of detailed balance, which leads to the onset of a probability current in the stationary state. We measured the stationary probability current in an asymmetric Ising system. The magnitude of the current depends exponentially strongly on the interrelation between the coupling of the oscillators and the intensity of the noise in the range where both are small. Our analysis and measurement are done in the regime where the coupling-induced change of the oscillator frequencies is much smaller than the frequencies themselves, and the noise-induced spread of the vibration amplitudes is much smaller than the amplitudes themselves. Yet the ratio of the properly scaled coupling and noise intensity can be arbitrary. We note that, for a parametrically excited oscillator, noise necessarily comes along with relaxation, so that it is present even in the quantum regime.
The experiment shows that the effect of weak coupling of parametric oscillators can be quantitatively described in terms of an entirely independent effect of an additional drive at half the modulation frequency applied to an individual oscillator. It is demonstrated that, in a broad range of the drive amplitudes, the drive leads to a change of the _logarithm_ of the rate of switching between the vibrational states of the oscillator, which is linear in the drive amplitude. The corresponding logarithmic susceptibility was measured and found to be in excellent agreement with simulations. The stationary state of an asymmetric Ising model is generally not known. This is not a consequence of disorder. A simple "ordered" system that maps onto an asymmetric Ising system is a chain of parametric oscillators where the coupling to the nearest neighbors for the oscillators on even and odd sites is different. The coupling parameters take on two values, \(K_{\mathrm{e}}=K_{2n\,2n\pm 1}\) and \(K_{\mathrm{o}}=K_{2n+1\,2n+1\pm 1}\). For small \(|K_{\mathrm{e,o}}|\) one can analyze the spin dynamics similarly to how it was done by Glauber [38] for a symmetric chain (Supplementary Note 9). In particular, we find that there are two spin-diffusion waves for a periodic chain; in an asymmetric model the wave frequencies, rather than being purely imaginary, can in general be complex. The probability current in the stationary state is \(\propto K_{\mathrm{o}}-K_{\mathrm{e}}\). This model immediately extends to a square lattice, which can address the question of the possibility of an Onsager-type transition for an asymmetric Ising model with nearest-neighbor coupling. We note that, for an asymmetric Ising model, the eigenvalues of the balance equation can be complex in the general case, in contrast to a symmetric Ising model.

Figure 4: **Probability current for two non-identical coupled parametric resonators.** Dependence of probability current on the frequency mismatch \(\Delta\omega\) between the two resonators at \(V_{\mathrm{cpl}}=0.3\) V. Purple circles are measurement. Calculations for two resonators based on the simulated logarithmic susceptibility of individual units are plotted in black. The straight line is a linear fit through the origin. Inset: For the considered weak coupling the frequency anticrossing as a function of \(\omega_{2}-\omega_{1}\) is undetectable. The color represents the sum of the amplitudes of forced vibrations of the two modes in nm; the \(x\)-axis is the bias \(V_{\mathrm{R,1}}\) (V) that controls \(\omega_{1}\), whereas the \(y\)-axis is the frequency of the resonant drive (Hz) applied to both resonators. Red squares and blue circles mark the values of \(\omega_{1}\) and \(\omega_{2}\) used in the main figure.

## IV Acknowledgement This work is supported by the Research Grants Council of Hong Kong SAR (Project No. 16304620) and partially supported by Project No. HKUST C6008-20E. MID was supported in part by the National Science Foundation through Grant No. DMR-2003815. ## Appendix A Excitation and Detection Schemes. For each resonator \(i\) (\(i=1,2\)) shown in Fig. 1b, the top plate is subjected to electrostatic torques exerted by the left and right electrodes underneath. If the potential difference between the two top plates \(V_{\mathrm{cpl}}=V_{1}^{\mathrm{top}}-V_{2}^{\mathrm{top}}\) is non-zero, there is also an electrostatic attraction between the two resonators. Each top plate is connected to the input of an amplifier that is a virtual ground for ac voltages.
On the left electrodes, the ac component \(V_{p,i}\cos(\omega_{p}t)\) controls the modulation of the spring constant via electrostatic springs softening. When a symmetry breaking torque is needed to measure the logarithmic susceptibility, a second ac component \(V_{d,i}\cos[(\omega_{p}/2)t+\phi_{i}]\) is added. The noise voltage \(V_{n,i}(t)\) generates the noise torque to induce transitions between the two states. Voltages on the right electrodes only contain dc components. They are adjusted for fine tuning of the resonant frequencies of the two resonators in order to maintain the desirable difference of the oscillator frequencies \(\Delta\omega\). Coupling between the two resonators is controlled by the potential difference between the top plates via \(V_{\mathrm{cpl}}\) (Supplementary Note 1). Vibrations in each resonator are detected by measuring the change of capacitance between the top plate and the two underlying electrodes. The dc voltages described above lead to build up of charges on the top plates. As each of the top plates rotates, the capacitances with the two underlying electrodes change. Charge flowing out of the two top plates are detected independently by two separate amplifiers. The outputs of each amplifier is fed into a lock-in amplifier referenced at \(\omega_{p}/2\). ## Appendix B: Measurement of switching rates. To measure the switching rate of an individual resonator, its oscillation phase \(\varphi\) is recorded as a function of time using a lockin amplifier. Figure 2a shows part of a record for resonator 1. If the resonator initially resides in the state \(\sigma=-1\) with \(\varphi\approx\pi\), we identify that it has switched to the \(\sigma=+1\) state with \(\varphi\approx 0\) when the phase goes over the threshold \(\varepsilon\), where \(\pi/4<\varepsilon<\pi/2\). In switching from the initial state \(\sigma=+1\) with \(\varphi\approx 0\) the phase with overwhelming probability jumps to \(-\pi\equiv\pi(\mathrm{mod}2\pi)\). In this case the threshold is \(-\pi+\varepsilon\). As the resonator switches back and forth between the two states, we record two sequences of residence times for the two states separately. The residence times in each state are plotted as a histogram. A typical histogram is shown in Fig. 5. The exponential decrease in the histogram confirms that the transitions are random and uncorrelated in time. An exponential fit to the histograms yields the switching rate. Fitting to a separate histogram gives the rate of switching from another state. We check that the measured switching rate does not depend on the choice of \(\varepsilon\). For example, in Fig. 2a, the dark and light lines indicates two difference choices of threshold \(\varepsilon\). They yield measured switching rates that are equal within the error bar of the fitting. For uncoupled oscillators in the absence of the symmetry breaking drive, the measured switching rates out of the two states of each resonator are identical to within experimental uncertainty. Their value gives \(\bar{W}_{i}\) for resonator \(i\). Moreover, the stationary probability distributions \(w_{\mathrm{st}}(\uparrow\downarrow)\), \(w_{\mathrm{st}}(\downarrow\uparrow)\), \(w_{\mathrm{st}}(\uparrow\uparrow)\) and \(w_{\mathrm{st}}(\downarrow\downarrow)\) are measured to be equal to within measurement uncertainty (Supplementary Note 3). To measure the logarithmic susceptibility of a single resonator, the switching rates are measured after the symmetry breaking drive is turned on. 
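The residence-time analysis described above is easy to mimic: for Poissonian switching the residence times are exponentially distributed, so the rate follows either from the mean residence time or, as in Fig. 5, from the slope of the logarithm of the histogram. The sketch below uses synthetic residence times with an arbitrary rate.

```python
import numpy as np

rng = np.random.default_rng(2)
W_true = 0.05                                      # hypothetical switching rate (1/s)
tau = rng.exponential(1.0 / W_true, size=2000)     # synthetic residence times in one state

# maximum-likelihood estimate: the rate is the inverse mean residence time
print("rate from mean residence time:", 1.0 / tau.mean())

# histogram method: an exponential distribution is a straight line on a semilog plot,
# and the slope of log(count) versus residence time is -W
counts, edges = np.histogram(tau, bins=30)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0
slope, _ = np.polyfit(centers[mask], np.log(counts[mask]), 1)
print("rate from histogram fit:      ", -slope)
```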
The fractional change of the switching rates for the two states are opposite in sign, as illustrated for resonator 1 in Fig. 2c. Logarithmic susceptibility can be calculated using the method of optimal fluctuation [39] or found from simulations [43]. The results have been established to be in excellent agreement. Therefore here we directly used simulations to find the magnitude \(\chi\) and the phase \(\delta\) of the logarithmic susceptibility. To do this we incorporated the drive \(F_{d}\cos(\omega_{p}t/2+\phi_{d})\) into the equation of motion (1) of resonator 1 and set the coupling parameters \(V_{ij}\) equal to zero. We then switched to the rotating frame and used the standard rotating wave approximation to reduce the problem to a set of equations for the quadratures of \(q_{1}(t)\). Forced vibrations at frequency \(\omega_{p}/2\) in the lab frame correspond to stable stationary solutions of the equations for the quadratures in the absence of noise. Noise causes switching between these states. The residence times are identified and used to calculate the switching rate in a manner similar to the measurement procedure described above. This allowed us to avoid simulating multiple (\(\gtrsim 10^{7}-10^{9}\) in our case) oscillations of the parametric oscillator in the lab frame. To measure the Ising model parameters \(K_{12}\) and \(K_{21}\), the switching rates are measured before and after the coupling is turned on. The fractional change of the switching rates for resonators 1 and 2 are plotted in red and blue respectively in Fig. 3c. Figure 5: **Histogram of the residence times.** The residence times are recorded for resonator 1 switching out of the \(\sigma_{1}=+1\) state at \(F_{d}=0\), \(D=3.01\times 10^{-6}\) N\({}^{2}\) kg\({}^{-2}\) Hz\({}^{-1}\) and \(\omega_{p}/2=\omega_{1}\). The slope of the linear fit gives the rate of switching out of this state. ## Appendix C The balance equation. The dynamics of the chain of coupled parametric oscillators is mapped on the dynamics of Ising spins by associating the stable vibrational states of the oscillators with spin-1/2 states. Fluctuations lead to random switching of the spins. The evolution of the distribution \(w(\sigma_{1},\sigma_{2},...)\equiv w(\{\sigma_{i}\})\) over the spin states is described by the balance equation, which can be written in the form \[\dot{w}(\{\sigma_{i}\})=-\sum_{i}\sigma_{i}\sum_{\sigma^{\prime}_{i}}\sigma^{ \prime}_{i}\left[W_{i}(\sigma^{\prime}_{i},\{\sigma_{j\neq i}\})w(\sigma^{ \prime}_{i},\{\sigma_{j\neq i}\})\right] \tag{10}\] with the switching rates given by Eq. (5). We note that, even if the rates \(\bar{W}_{i}\) are different for different spins (different parametric oscillators), but the model is symmetric, \(K_{ij}=K_{ji}\), Eq. (10) has the stationary solution \(w_{\rm st}(\{\sigma_{i}\})={\rm const}\times\exp[\frac{1}{2}\sum_{i,j}K_{ij} \sigma_{i}\sigma_{j}]\), which is just the thermal distribution of the conventional symmetric Ising model. For a system of \(N\) spins (oscillators) Eq. (10) is a system of \(2^{N}\) equations. For the case of 2 oscillators it can be solved (see Supplementary Note 7 for more details). The stationary probability distribution is \[w_{\rm st}(1,1)=w_{\rm st}(-1,-1) =\frac{1}{4}\frac{\bar{W}_{1}\exp(K_{12})+\bar{W}_{2}\exp(K_{21}) }{\bar{W}_{1}\cosh(K_{12})+\bar{W}_{2}\cosh(K_{21})},\] \[w_{\rm st}(1,-1)=w_{\rm st}(-1,1) =\frac{1}{4}\frac{\bar{W}_{1}\exp(-K_{12})+\bar{W}_{2}\exp(-K_{21 })}{\bar{W}_{1}\cosh(K_{12})+\bar{W}_{2}\cosh(K_{21})}. 
\tag{11}\] This expression was used to obtain Eq. (9) for the probability current and also in Fig. 3. For a symmetric system, \(K_{12}=K_{21}=K\), we have \(w_{\rm st}(1,1)/w_{\rm st}(1,-1)=\exp(-2K)\) independent of the values of \(\bar{W}_{1,2}\), whereas for an asymmetric system the populations depend on the interrelation between \(\bar{W}_{1}\) and \(\bar{W}_{2}\). ## Appendix D Breaking of the detailed balance. The lack of detailed balance, and thus the onset of the probability current in the asymmetric Ising model, can be shown without knowing the stationary distribution. One has to compare the ratios of the rates of flipping the \(i\)th spin back and forth either directly or with the \(k\)th spin flipped back and forth on the way. For a system with detailed balance the result should be the same. We now compare these ratios. To shorten the notation, we keep in the expressions for the rates only the spins \(\sigma_{i}\) and \(\sigma_{k}\) and explicitly indicate which of them is flipped; other spins are not flipped. The detailed balance condition reads \[\frac{W(\sigma_{i},\sigma_{k}\rightarrow-\sigma_{i},\sigma_{k})}{W(-\sigma_{i},\sigma_{k}\rightarrow\sigma_{i},\sigma_{k})}=\frac{W(\sigma_{i},\sigma_{k}\rightarrow\sigma_{i},-\sigma_{k})}{W(-\sigma_{i},\sigma_{k}\rightarrow-\sigma_{i},-\sigma_{k})}\] \[\times\frac{W(\sigma_{i},-\sigma_{k}\rightarrow-\sigma_{i},-\sigma_{k})}{W(-\sigma_{i},-\sigma_{k}\rightarrow\sigma_{i},-\sigma_{k})}\times\frac{W(-\sigma_{i},-\sigma_{k}\rightarrow-\sigma_{i},\sigma_{k})}{W(\sigma_{i},-\sigma_{k}\rightarrow\sigma_{i},\sigma_{k})} \tag{12}\] For the asymmetric Ising model the equality does not hold: the right-hand side has an **extra factor** \(\exp\left[4(K_{ik}-K_{ki})\sigma_{i}\sigma_{k}\right]\). We note that the result is independent of the switching rates \(\bar{W}_{i},\bar{W}_{k}\) in the absence of coupling.
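For two spins everything in Appendices C and D can be verified in a few lines: build the \(4\times 4\) generator of the balance equation (10) with the rates (5), extract its null vector, compare with the closed form (11), evaluate the current of Eq. (9), and check the factor \(\exp[4(K_{12}-K_{21})\sigma_{1}\sigma_{2}]\) by which Eq. (12) is violated. The numbers for \(\bar{W}_{i}\) and \(K_{ij}\) below are arbitrary illustrative values.

```python
import numpy as np

Wbar = [1.0, 1.3]                         # bare switching rates, arbitrary units
K12, K21 = -0.8, -0.4                     # asymmetric coupling constants
states = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def W(i, s):
    """Rate of flipping spin i (0 or 1) in state s = (s1, s2), Eq. (5)."""
    Kij = K12 if i == 0 else K21
    return Wbar[i] * np.exp(-Kij * s[0] * s[1])

# generator of the balance equation (10): dw/dt = L w
L = np.zeros((4, 4))
for a, s in enumerate(states):
    for i in (0, 1):
        flipped = list(s); flipped[i] *= -1
        b = states.index(tuple(flipped))
        L[a, a] -= W(i, s)                # probability leaves state s ...
        L[b, a] += W(i, s)                # ... and arrives in the flipped state

vals, vecs = np.linalg.eig(L)
w_st = np.real(vecs[:, np.argmin(np.abs(vals))])
w_st = w_st / w_st.sum()                  # stationary distribution (normalized null vector)

den = Wbar[0] * np.cosh(K12) + Wbar[1] * np.cosh(K21)
w_same = 0.25 * (Wbar[0] * np.exp(K12) + Wbar[1] * np.exp(K21)) / den    # Eq. (11), equal spins
w_diff = 0.25 * (Wbar[0] * np.exp(-K12) + Wbar[1] * np.exp(-K21)) / den  # Eq. (11), opposite spins
print("numeric w_st :", np.round(w_st, 4))
print("Eq. (11)     :", np.round([w_same, w_diff, w_diff, w_same], 4))

# current on the branch (up,up) -> (up,down), Eq. (7), versus the closed form Eq. (9)
I_eq7 = w_st[0] * W(1, (1, 1)) - w_st[1] * W(1, (1, -1))
I_eq9 = 0.5 * Wbar[0] * Wbar[1] * np.sinh(K12 - K21) / den
print("current: Eq. (7)", round(I_eq7, 5), "  Eq. (9)", round(I_eq9, 5))

# Kolmogorov-type check of Eq. (12) at sigma_1 = sigma_2 = +1
lhs = W(0, (1, 1)) / W(0, (-1, 1))
rhs = (W(1, (1, 1)) / W(1, (-1, 1))) * (W(0, (1, -1)) / W(0, (-1, -1))) * (W(1, (-1, -1)) / W(1, (1, -1)))
print("rhs/lhs =", round(rhs / lhs, 5), "  exp[4(K12-K21)] =", round(np.exp(4 * (K12 - K21)), 5))
```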
2310.00184
NASU -- Novel Actuating Screw Unit: Origami-inspired Screw-based Propulsion on Mobile Ground Robots
Screw-based locomotion is a robust method of locomotion across a wide range of media including water, sand, and gravel. A challenge with screws is their significant number of impactful design parameters that affect locomotion performance. One crucial parameter is the angle of attack (also called the lead angle), which has been shown to significantly impact the performance of screw propellers in terms of traveling velocity, force produced, degree of slip, and sinkage. As a result, the optimal design choice may vary significantly depending on application and mission objectives. In this work, we present the Novel Actuating Screw Unit (NASU). It is the first screw-based propulsion design that enables dynamic reconfiguration of the angle of attack for optimized locomotion across multiple media and use cases. The design is inspired by the kresling unit, a mechanism from origami robotics, and the angle of attack is adjusted with a linear actuator, while the entire unit is spun on its axis to generate propulsion. NASU is integrated into a mobile test bed and experiments are conducted in various media including gravel, grass, and sand. Our experiment results indicate a trade-off between locomotive efficiency and velocity exists in regards to angle of attack, and the proposed design is a promising direction for reconfigurable screws by allowing control to optimize for efficiency or velocity.
Calvin Joyce, Jason Lim, Roger Nguyen, Michael Owens, Sara Wickenhiser, Elizabeth Peiros, Florian Richter, Michael C. Yip
2023-09-29T23:15:01Z
http://arxiv.org/abs/2310.00184v3
# NASU - Novel Actuating Screw Unit: ###### Abstract Screw-based locomotion is a robust method of locomotion across a wide range of media including water, sand, and gravel. A challenge with screws is their significant number of impactful design parameters that affect locomotion performance in varying environments. One crucial parameter is the angle of attack, also referred to as the lead angle. The angle of attack has a significant impact on the screw's performance as it creates a trade-off between efficiency and forward velocity. This trend is consistent across various types of media. In this work, we present a Novel Actuating Screw Unit (NASU). It is the first screw-based propulsion design that enables the reconfiguration of the angle of attack dynamically for optimized locomotion across multiple media. The design is inspired by the kresling unit, which is a widespread mechanism in origami robotics, and the angle of attack is adjusted with a linear actuator, while the entire unit is spun on its axis as an archimedean screw. NASU is integrated onto a mobile test-bed and experiments are conducted in a large variety of media including gravel, grass, and sand. Our experiments show the proposed design is a promising direction for reconfigurable screws by allowing control to optimize for efficiency or velocity. ## I Introduction Archimedean screws were first used, in terms of locomotion mechanisms, as propellers for watercraft [1, 2], and later proposed for amphibious vehicles capable of traversing and transporting loads in both liquid and soil environments [3, 4]. Screw-propelled vehicles and rovers have since demonstrated success across a very wide range of environments including snow, ice, sand, and other granular media [5, 6]. These screw-based designs have great potential for exploratory robots, overcoming limitations faced by traditional wheeled rover designs and potentially avoiding situations like NASA's Spirit rover getting stuck in loose sand on Mars [7]. Most screw-based designs use a parallel configuration where two counter-rotating, opposite-handed screws allow for turning [8, 9]. Quad-screw designs have been proposed to take advantage of a partial screw-slippage case for omnidirectional drive [10, 11]. To maintain more points of contact and increase versatility, hyper-redundant snake-like designs have also been proposed. One such robot is ARCSnake [12, 13], which combines Archimedean screw-based propulsion and serpentine body reshaping, and is the evolutionary precursor to the NASA Extant Exobiology Life Surveyor (EELS) robot [14]. EELS is anticipated to serve as a science research vehicle for both earth science missions as well as for space exploration missions on Enceladus and Europa [14]. Deployments of the robot types mentioned above are often in resource-constrained environments where adaptability is crucial. ### _Contributions_ In this paper, we present the Novel Actuator Screw Unit (NASU) which is a screw propulsion unit design that enables an adjustable angle of attack through inspiration from the kresling unit, a popular mechanism in origami robotics [15, 16, 17, 18]. However, unlike origami robotics which are often made from compliant materials, NASU is designed to withstand the high loads in screw-based locomotion [19]. A linear actuator is used to control the kresling unit, providing a means for changing the angle of attack dynamically within a range of 10-35\({}^{\circ}\). 
NASU's mechanism for adjusting the angle of attack allows for on-the-fly re-configurations, thereby enabling intelligent control for either higher velocity or efficiency as required for different mission objectives (see Figure 1). Ultimately, NASU offers a novel, adjustable screw-based locomotion approach for mobile robots for multi-domain deployments. Fig. 1: NASU is a screw-based propulsion unit for robot ground mobility that combines variable angle-of-attack to its screw pitch. The use of NASU is to allow control optimizing for efficiency or velocity over a wide array of terrains. It allows for trading off higher traveling velocity and lower efficiency (left) and higher efficiency and lower velocity (right) for resource-constrained robotic deployments. ### _Related Works_ A challenge for screw-based locomotion designs is the significant number of design parameters that can affect performance. In the specific case of sand-like media, significant exploration has been done into the effect of varying certain parameters such as angle of attack (also referred to as lead angle, pitch angle, and helix angle in previous literature), blade height, and blade profile [20, 21, 22, 23]. These studies showed that performance is greatly affected by the angle of attack, estimating a maximum thrust achieved at an angle of around 22\({}^{\circ}\)[24], which was then used in previous screw-based designs [12, 13]. The ideal ratio between the screw diameter and length of the screw has also been found [10]. However, it has been shown that for a fixed set of parameters, performance varies significantly across different media and there is an inherent trade-off between traveling velocity and efficiency when picking the angle of attack [19]. Outside of screw-based locomotion, reconfiguration for aerial locomotion has been explored in nature to understand how changing parameters of bird wings affect flight performance [25, 26]. Similar abilities have been replicated in robotics to achieve more efficient flight performance from wing designs [27]. Reconfigurable propeller designs have also been proposed to create more efficient propulsion in different scenarios [28, 29]. In a similar fashion, we present NASU which can dynamically adjust the angle of attack to reconfigure the locomotion capabilities. ## II System Design The NASU design, as noted previously, takes inspiration from the origami kresling unit. However, modifications were made to ensure performance and durability in different and more challenging media. The new design could be described as a twist on a Stewart platform, and its operation is shown in Figure 3. In this current design, the kresling unit structure is actuated by augmenting and controlling the total length of the unit to facilitate the changing of the angle of the blades. In our design, we verified the ability to produce appropriate thrust force and withstand reaction forces based on previous literature that captured data on screw forces. Finally, we provided a means to integrate the NASU into a mobile testing platform which provided a means to gather on performance across media. ### _Mechanical Design_ An overview of the mechanical design can be seen in Figure 2 which depicts an isolated unit actuated with a linear ball screw and a Nema-17 stepper motor. The motor housing is fixed to the test bed [19] vertical rail and the linear ball screw carriage is connected to the back plate. 
The actuator drives the position of the back plate, which causes the length to increase or decrease depending on the direction of travel. When the distance between the plates is increased, the angle of attack also increases, as described in the following equations: \[d=d_{0}+\ell\sin\left(\theta\right) \tag{1}\] \[\theta=\arcsin\left(\frac{d-d_{0}}{\ell}\right) \tag{2}\] where the distance between the inner surfaces of the plates is denoted by \(d\), \(d_{0}\) is the sum of the offsets between the plates and the pivot points (the centers of the second pin joints of the u-joints), \(\ell\) is the length of the link that connects the pivot points, and \(\theta\) is the angle of attack. In our design, \(d_{0}=31\) mm and \(\ell=100\) mm. The diameter and strut length were chosen to ensure our required angle range. The NASU was built using 3D-printed and acrylic parts with a root radius of 192 mm and an outer radius of 272 mm. The length range is 48-88 mm and the adjustable angle of attack is between 10\({}^{\circ}\) and 35\({}^{\circ}\). The hexagonal base prescribes six thread starts. These parameters were chosen to match similar ratios with previous literature [10, 13]. The blade design in terms of height, shape, and profile was inspired by previous work [23]. Each unit has 3 key features: a u-joint connecting the front/back plate to the struts, replaceable blade attachment points along the struts, and blades. The u-joint consists of 2 pin joints perpendicular to each other, created from the strut and a U-bracket, as shown in Figure 4(a). The struts are designed with through holes to allow for the replacement of the blades to vary height, profile shape, and length, as shown in Figure 4(b). Finally, the blades themselves contain heat-set inserts to allow for easy swapping between blades with different parameters.

Fig. 2: Close-up of the NASU mechanism. The unit has two motion modes, involving a linear actuator at the top to dynamically change the angle of attack for the blades by compressing or decompressing the Kresling-inspired design, and an axially central rotary actuator that provides the Archimedean screw propulsion.

Finite Element Analysis (FEA) was done using a static Solidworks Simulation model. A static hold is considered the highest force-producing scenario as it indicates a stalled position in media giving the highest reaction force back against the blades. The NASU was set to its \(35^{\circ}\) configuration as it would also be the highest thrust-producing configuration, and have the highest resultant reaction force. The connections between NASU and the testbed were set as the fixture points. Pin connections were used to model the correct degrees of freedom of the u-joint. The external loads that were applied were taken from previous work [19]. The forces were collected from an earlier version of the testbed described in Section III. In that work, a load cell (6-DoF Force Torque Sensor (FTS), Axia80 (ATI Industrial Automation)) was used to collect data on the forces and torques loading the system. To ensure our design was strong enough to withstand maximum loading in all media and directions, we took the peak forces across media and trials in each principal direction and produced a maximal load based on all those combined. The maximum values are \(f=[4.71N,\ 20.74N,\ 8.41N]\), where +Z is forward, +X is down (aligned with gravity), and +Y points to the right, following the right-hand rule.
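Before continuing with the load case, here is a quick numeric check of the kinematics in Eqs. (1) and (2): with the stated \(d_{0}=31\) mm and \(\ell=100\) mm, the 10-35\({}^{\circ}\) angle-of-attack range reproduces the quoted 48-88 mm length range to within rounding. This is a sketch in Python; the function names are ours, not from any released code.

```python
import math

d0, ell = 31.0, 100.0   # mm, plate offsets and strut length from the design

def plate_distance(theta_deg):
    """Eq. (1): plate-to-plate distance d for a given angle of attack theta."""
    return d0 + ell * math.sin(math.radians(theta_deg))

def angle_of_attack(d_mm):
    """Eq. (2): angle of attack theta for a given plate-to-plate distance d."""
    return math.degrees(math.asin((d_mm - d0) / ell))

for theta in (10, 15, 20, 25, 30, 35):      # the angles tested in the experiments
    print(f"theta = {theta:2d} deg  ->  d = {plate_distance(theta):5.1f} mm")

print("d = 48 mm ->", round(angle_of_attack(48.0), 1), "deg;",
      "d = 88 mm ->", round(angle_of_attack(88.0), 1), "deg")
```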
In the previous works' experiments, gravity was removed by taring the sensor, so we added a representative gravity load back into the X component by taking the effective mass (1.1 kg) of NASU and multiplying it by the gravitational acceleration \(9.81\ m/s^{2}\). It was not necessary to account for the geometry change between the previous screw design and the NASU unit, as we are applying the same maximum forces produced by the material, which are unaffected by screw geometry. The applied forces for the simulation are \(f=[15.5N,\ 20.74N,\ 8.41N]\). Their magnitude is \(27.22N\). The resulting numbers were a minimum Factor of Safety (FOS) of 9.9, a max displacement of 0.75 mm, a max stress of \(4.825\times 10^{6}\) N/m\({}^{2}\), and a maximum strain of \(4.58\times 10^{-5}\) ESTRN. The visual results can be seen in Figure 5, which depicts the entire NASU unit under load with deflection measurements and highlights the u-joint under load with stress measurements reflected across the joint. ### _Mobile Test Bed: NASU Actuation_ We reuse the Mobile Test Bed from our previous work [19] to hold NASU and use it for experiments and results. The Mobile Test Bed was designed for experiments to be done "in the real world", thus providing stronger evidence of the viability and robustness of the design. Furthermore, it was built for easy swapping of different screw configurations, which we used to mount NASU. Through the mobile test bed, NASU is constrained to travel in a single, linear direction through linear rails. Traveling velocity is measured on the linear-travel rail with a passive RMD-L 7015 motor. Meanwhile, the height-adjusting linear rail helps compensate for potentially uneven surfaces being experimented on. Finally, a 6-DoF FTS, Axia80 (ATI Industrial Automation), is positioned near the center of mass to measure the screw's applied torque and resulting screw-locomotion forces.

Fig. 4: The passive u-joint mechanism is placed at the ends of struts to enable the change of the angle of attack. The struts hold the blades which will produce propulsion when NASU is rotated as a screw.

Fig. 5: Solidworks FEA done with the maximum amount of force we measured from our previous outdoor screw experiments [19]. Fixture points are set as connections from NASU to the testbed, which is where the system will be mounted for our experiments. NASU was set to its maximum angle (\(35^{\circ}\)) to simulate the most extreme possible situation and resulted in minimal deflection, stress, and strain.

Fig. 3: Left and right columns show NASU at the minimum and maximum angles of attack.

To actuate the NASU during testing, modifications were added to the test bed to ensure proper transitions between angles of attack for best results. The additions included: (1) a pulley system to add a counterweight, (2) an attachment and connecting mechanism for the same Nema-17 stepper motor, and (3) an additional bearing to ensure the rotational degree of freedom (twist) needed to allow for control over the angle of attack. An overview of our changes is shown in Figure 6. The vertical bar on the testbed and NASU together are quite massive and require some counterweighting to avoid continually digging into media rather than propelling on the surface. This enables proper measurement with the FTS. This feature can be seen in Figure 6(b). In the transition onto the test bed, it was necessary to design an actuation method that would not hinder or invalidate the data collection.
Therefore we designed a means to control the screw length, hence adjusting the angle of attack, through the linear actuator mentioned previously. The motor housing was secured to the vertical bar of the test bed as seen in Figure 6(c). Lastly, in order for the Kresling unit design to properly morph to new angles of attack it was necessary to maintain a torsional degree of freedom which was accomplished through the addition of a lazy susan bearing sandwiched between the back plate of the NASU unit and the constraining plate driven by the linear actuator. The bearing is shown in Figure 6(d). ## III Experiments and Results Experiments are conducted with the NASU on a mobile test bed in a wide range of media in both a lab setting and in real-world environments as shown in Figure 7. These experiments characterize the locomotion performance at varying angles of attack on each media to provide comprehensive efficiency results of NASU. ### _Experimental Setup_ The experiment is conducted on our previously developed Mobile Test Bed [19] and a similar experimental setup procedure is employed: 1. Flatten and level the media to achieve as uniform conditions as possible. 2. The height-adjustable linear rail is locked such that the NASU is free-hanging. A "free hanging" measurement from the FTS is taken to capture any potential drifts between trials. The angle of attack is set to the desired value. 3. The height-adjustable linear rail is unlocked and the NASU is set down on the media. A "set down" measurement from the FTS is taken to measure any pre-loading from the media on the NASU. 4. The FTS sensor is zeroed to measure differential measurements. From this point, the screw motor can be driven to begin the experiment. To process the raw data from the test bed for analysis, the data from the motors and FTS are passed through a low pass filter with a cutoff frequency of 5 Hz and a sampling frequency of 125 Hz. The data is clipped manually to only include the steady-state portion of the experiment. This experiment allows freedom of motion in both the axial and vertical directions so that the speed of travel can be measured and sink-age into the media is still allowed. In contrast, all other degrees of freedom are restricted by the test bed. The angles of attack being tested are 10\({}^{\circ}\), 15\({}^{\circ}\), 20\({}^{\circ}\), 25\({}^{\circ}\), 30\({}^{\circ}\), and 35\({}^{\circ}\). ### _Performance Evaluation_ Two extremely relevant metrics that characterize performance are linear traveling velocity, measured directly by the mobile testbed, and locomotive efficiency. Locomotive efficiency is defined as follows: \[\eta_{m}=\frac{F_{thrust}v}{\tau_{in}\omega}, \tag{3}\] where \(F_{thrust}\) is the force produced by the NASU along its longitudinal axis, \(v\) is the linear traveling velocity, and \(\tau_{in}\) and \(\omega\) are the average input torque and angular velocity, respectively. Because there is substantial noise in our measurements due to conducting experiments in real, outdoor environments, we use the average velocity over the entire trial and maximum force within the trial to compute the highest achievable efficiency our novel mechanism can produce on that media. Fig. 6: The mobile test bed was augmented to mount NASU for experimentation. The mobile test bed constrains the motion in a linear direction and measures the resultant propulsion forces and traveling velocity. ### _Results and Discussion_ The results are shown in Figure 8. 
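Before turning to the trends, here is a sketch of the data-processing step and the efficiency metric described above: signals sampled at 125 Hz are low-pass filtered at 5 Hz and the locomotive efficiency of Eq. (3) is formed from the mean velocity and the peak thrust. The signals below are synthetic placeholders, and the filter order and exact channel handling are our assumptions rather than details given in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, fc = 125.0, 5.0                        # sampling and cutoff frequencies (Hz), from the text
b, a = butter(2, fc / (fs / 2.0))          # low-pass Butterworth; the order (2) is an assumption

rng = np.random.default_rng(3)
t = np.arange(0.0, 20.0, 1.0 / fs)                 # a synthetic 20 s steady-state window
thrust = 8.0 + rng.normal(0.0, 2.0, t.size)        # placeholder axial thrust F_thrust (N)
vel = 0.05 + rng.normal(0.0, 0.02, t.size)         # placeholder traveling velocity v (m/s)
tau_in = 1.5 + rng.normal(0.0, 0.3, t.size)        # placeholder input torque tau_in (N*m)
omega = 2.0 * np.pi * 1.0                          # placeholder screw angular velocity (rad/s)

thrust_f, vel_f, tau_f = (filtfilt(b, a, x) for x in (thrust, vel, tau_in))

# Eq. (3): eta_m = F_thrust * v / (tau_in * omega), with mean velocity and maximum thrust
eta_m = thrust_f.max() * vel_f.mean() / (tau_f.mean() * omega)
print(f"locomotive efficiency eta_m = {eta_m:.3f}")
```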
As might be expected, there is a positive correlation between the angle of attack and forward velocity. If we consider our ability to produce thrust as directly related to our ability to move mass backward, this aligns with the theoretical models for screws and screw-style propulsion. Similarly, when the angle of attack is held at a lower value, each revolution of the screw pushes the mass of the material a shorter distance and therefore the screw moves forward less. In this experiment, we did not see the expected velocity drop-off as you would see in winged aircraft, in which an increase in angle of attack would produce a higher drag. However, a key finding of our experiments is that locomotive efficiency actually decreases as the angle of attack increases, implying a trade-off between speed and efficiency that is consistent across all of the media that we tested. This reinforces the fact that changes in the angle of attack affect the performance of the screw. We believe the trade-off between linear traveling velocity and efficiency arises because (1) for a given input torque, thrust force increases with decreasing angle of attack, compensating for the decrease in speed, and (2) as the angle of attack increases, the blades push more material to the sides and have more of an excavating effect, thus losing far more energy to the displacement of media. As seen in the experimental results, NASU's operating range for angle of attack sufficiently covers this trade-off. Figure 1 gives a simple diagram comparing the maximum and minimum angles of attack, as well as labeling the trade-off. We find that the relative performance across different media types compares well with previous research on screw-based locomotion performance [19], which suggested that shearing force and coefficient of friction are two main properties contributing to variance in performance across media. In this work, the best performance was obtained in mud, which was compact enough to provide a high shearing force, while still allowing the blades to sink in smoothly and gain traction well. Big gravel and small gravel exhibit the next best performance, respectively. These media also provide a high shearing force and have a lower coefficient of friction compared to other media, although the granularity of the gravel leads to more sporadic, bumpy movement and energy lost to vertical instead of horizontal motion. Meanwhile, we were unable to produce significant propulsion in sand due to NASU pushing the material to the side rather than propelling with it. An example from our experiments is shown in Figure 9.

Fig. 8: The left and right plots show the Traveling Linear Velocity and Efficiency, respectively, for different input angles in various media. Dashed lines were added to showcase trends. The trade-off between traveling velocity and efficiency as the angle of attack changes can be seen in our results.

Fig. 7: Representative images of the NASU experiments conducted in the following media (from left to right): small gravel, wet sand, big gravel, grass, mud, wood chips, and sand.

The most similar model that moves media in this fashion is the Archimedes screw, originally used to transport water to higher-potential-energy reservoirs. These designs involve an internal screw or auger with a pipe wall (tube) surrounding the mechanism. The mechanism works by constraining the motion of the fluid to a confined space that traverses up the tube.
This control volume of material can be achieved in screw-based robots when there is an internal chamber and the material shears easily creating an outer wall of static material. Given the lack of an internal shell our NASU unit would not be able to replicate this ideal model. Figure 9 implies that sand is the most severely affected by the lack of an internal shell. It cannot constrain the motion of the media to propel itself forward efficiently. Pressure and a physical bearer were required to ensure the constrained volumetric motion of the sand. One other note of difference in the designs was the length of the blades. On the NASU the blades are short and plenty. Most of the screw shell designs have a single or few threads that travel a longer path around the screw. This means the media has a continuous force applied rather than short bursts of force applied in a discontinuous fashion. In future designs, it would be advantageous to either make a longer single unit or stack NASU units to create a more continuous helical pattern across the screw surface. ## IV Conclusion This work demonstrates the first mechanism to reconfigure a screw parameter for mobility by changing the angle of attack of a screw actuator to allow for dynamic adjustment between locomotion efficiency and traveling velocity on various media. The experimental results support that we do capture the trade-off between traveling velocity and efficiency through our reconfiguration. Our intention with this mechanism is to enable future screw-based vehicles to control the angle of attack and make adjustments depending on their environment. For example, when traversing in gravel it would be beneficial to have low velocity but high efficiency to ensure the device does not dig itself in. Meanwhile, in water, higher velocity is more desirable. As discussed in the previous section there are new designs we wish to implement for elongating the blades and adding an internal shell. We have also given ourselves the opportunity to experiment with blade geometry with replaceable blade connections. In the future, the goal is to integrate this technology into ARCSnake [13] or other screw-like robots to dynamically augment screw parameters for optimal performance in all media. ## V Acknowledgements The authors would like to thank Mandy Cheung, Peter Gavrilov, Hoi Man (Kevin) Lam, Casey Price, Nikhil Uday Shinde, Anne-Marie Shui, and Mingwei Yeoh for their continued support of the project.
2305.19663
Beyond Regular Grids: Fourier-Based Neural Operators on Arbitrary Domains
The computational efficiency of many neural operators, widely used for learning solutions of PDEs, relies on the fast Fourier transform (FFT) for performing spectral computations. As the FFT is limited to equispaced (rectangular) grids, this limits the efficiency of such neural operators when applied to problems where the input and output functions need to be processed on general non-equispaced point distributions. Leveraging the observation that a limited set of Fourier (Spectral) modes suffice to provide the required expressivity of a neural operator, we propose a simple method, based on the efficient direct evaluation of the underlying spectral transformation, to extend neural operators to arbitrary domains. An efficient implementation* of such direct spectral evaluations is coupled with existing neural operator models to allow the processing of data on arbitrary non-equispaced distributions of points. With extensive empirical evaluation, we demonstrate that the proposed method allows us to extend neural operators to arbitrary point distributions with significant gains in training speed over baselines while retaining or improving the accuracy of Fourier neural operators (FNOs) and related neural operators.
Levi Lingsch, Mike Y. Michelis, Emmanuel de Bezenac, Sirani M. Perera, Robert K. Katzschmann, Siddhartha Mishra
2023-05-31T09:01:20Z
http://arxiv.org/abs/2305.19663v4
# Vandermonde neural operators ###### Abstract. Fourier Neural Operators (FNOs) have emerged as very popular machine learning architectures for learning operators, particularly those arising in PDEs. However, as FNOs rely on the fast Fourier transform for computational efficiency, the architecture can be limited to input data on equispaced Cartesian grids. Here, we generalize FNOs to handle input data on non-equispaced point distributions. Our proposed model, termed as Vandermonde Neural Operator (VNO), utilizes Vandermonde-structured matrices to efficiently compute forward and inverse Fourier transforms, even on arbitrarily distributed points. We present numerical experiments to demonstrate that VNOs can be significantly faster than FNOs, while retaining comparable accuracy, and improve upon accuracy of comparable non-equispaced methods such as the Geo-FNO. Code and data for the VNO experiments may be found here. ## 1. Introduction Partial Differential Equations (PDEs) are extensively used to mathematically model interesting phenomena in science and engineering [1]. As explicit solution formulas for PDEs are not available, traditional numerical methods such as finite difference, finite element, and spectral methods [2] are extensively used to simulate PDEs. Despite their tremendous success, the prohibitively high computational cost of these methods makes them infeasible for a variety of contexts in PDEs ranging from high-dimensional problems to the so-called _many query_ scenarios [3]. This high computational cost also provides the rationale for the development of alternative _data driven_ methods for the fast and accurate simulation of PDEs. Hence, a wide variety of machine learning algorithms have been proposed recently in this context. These include physics-informed neural networks (PINNs) [4], MLPs, and CNNs for simulating parametric PDEs [5, 6, 7, 8] as well as graph based algorithms [9, 10, 11, 12], to name a few. However, as solutions of PDEs are expressed in terms of the so-called _solution operators_, which map input functions (initial and boundary data, coefficients, source terms) to the PDE solution, _Operator learning_, i.e., learning the underlying operators from data, has emerged as a dominant framework for applying machine learning to PDEs. Existing operator learning algorithms include, but are not limited to, operator networks [13], DeepONets [14, 15, 16], attention based methods such as [17, 18, 19], and neural operators [20, 21, 22, 23]. Within this large class of operator learning algorithms, Fourier Neural Operators (FNO) [24] have gained much traction and are widely applied [25, 26]. Apart from favorable theoretical approximation properties [27, 28], FNOs are attractive due to their expressivity, simplicity and computational efficiency. A key element in the computational efficiency of FNO lies in the fact that its underlying convolution operation is efficiently carried out in Fourier space with the _fast Fourier transform (FFT) algorithm. It is well-known that FFT is only (log-)linear in computational complexity with respect to the number of points at which the underlying input functions are sampled. However, this computational efficiency comes at a cost as the recursive structure of FFT limits its applications to inputs sampled on the so-called _Regular_ or _equispaced Cartesian (Rectangular) grids_, see Figure 2 left for an illustration. This is a major limitation in practice. 
In real-world applications, where information on the input and output signals is measured by sensors, it is not always possible to place sensors only on an equispaced grid. Similarly, when data is obtained through numerical simulations, often it is essential to discretize PDEs on irregular grids, such as those adapted to be refined to capture relevant spatial features of the underlying PDE solution or on unstructured grids that fit the complex geometry of the underlying domain. See Figure 2 for examples of non-equispaced distributions of sample points. Several methods have recently been proposed in the literature to address this limitation of FNOs and modify/enhance it to handle data on non-equispaced points. For instance, geometry-aware FNO (Geo-FNO) [29] appends a neural network to the FNO to learn a deformation from the physical space to a regular grid. Then, the standard FFT can be applied to the latent space of equispaced grid points. This learned diffeomorphism corresponds to an adaptive moving mesh [30]. Factorized-FNO (F-FNO) builds upon the Geo-FNO, introducing an additional bias term in the Fourier layer and performing the Fourier transform over each dimension separately [31]. The non-equispaced Fourier PDE solver (NFS) uses a vision mixer [32] to interpolate from a non-equispaced signal onto a regular grid, again applying the standard FNO subsequently [33]. All these methods share the same design principle, i.e., given inputs on non-equispaced points, _interpolate_ or transform this data into a regular grid and then apply FNO. This observation leads to a natural question: _why not consider the complimentary approach and modify the Fourier Neural Operator itself to enable it to be directly applied to input data on non-equispaced points?_ Addressing this question is the main goal of this paper where we propose a _novel modification_ of FNO to enable its application on non-equispaced input data. More concretely, * We propose a new operator learning algorithm that we term as _Vandermonde Neural Operator_ or VNO which extends FNO to be applied to input data on non-equispaced points. To do this, we design an algorithm to efficiently compute discrete (inverse) Fourier transforms via Vandermonde structured matrices. * We present a novel yet simple construction of Vandermonde-structured matrices to compute the forward and backward (inverse) Fourier transformations within the VNO algorithm that allows it to handle inputs, sampled on arbitrary non-equispaced point distributions. * We present a suite of numerical experiments to demonstrate that VNO can train significantly faster than FNO for input data on non-equispaced points, particularly on lattices (tensor products of arbitrarily non-equispaced points in one-dimension, see Figure 2), while either retaining or improving on the test accuracy. Thus, we present a novel algorithm to efficiently and accurately learn operators, such as those arising in PDEs, with input functions being possibly sampled on arbitrary non-equispaced points. Consequently VNO expands the FNO architecture to learn operators on real-world problems where sensors may not be equispaced such as robotics, atmospheric sciences, and aerodynamics. ## 2. Methods Our goal in this section is to present VNO as a new neural operator in which the FFT algorithm within the Fourier layer in a FNO [24] is replaced with a Vandermonde structured matrix. To this end, we start with a short description of this matrix construction below. 
### On Vandermonde Structured Matrices A Vandermonde structured matrix can be defined via nodes \(x_{j}\in\mathbb{R}\) or \(\mathbb{C},j\in\{0,1,\ldots,m\}\), in a geometric progression along each column (or row), defined by \[\mathbf{V}_{j,k}=\left[x_{j}^{k}\right]_{j,k=0}^{m,n}. \tag{1}\] In the case that the nodes represent the primitive \(n^{th}\) roots of unity, this geometric progression of powers along the columns (or rows) yields the discrete Fourier transform (DFT) matrix. This matrix is symmetric, unitary, and periodic, hence it can be factorized as a product of sparse matrices. The resulting factorization can be used to obtain a radix-2 algorithm that can efficiently compute the 1D DFT and its inverse with the FFT algorithm of \(O(n\log n)\) complexity [34, 35, 36]. To motivate an alternative computational realization, we recall that the Fourier transform is an integral operator. Approximating it with a quadrature rule for discretization, we multiply each point value of the underlying function by the sinusoidal basis function for a given mode, and sum these terms to compute the magnitude of this sinusoidal term. This process is then repeated for each mode. In contrast to the recursive, butterfly FFT algorithm, this interpretation can be generalized to the Vandermonde-structured matrices of the form in (1), which in turn, can be efficiently implemented through _batched matrix multiplications_.
Figure 1. The proposed VNO operates directly on non-equispaced sample points. The input function \(v(x)\) may be sampled at arbitrary points to construct the Fourier representation directly via the Vandermonde Matrix \(\mathbf{V}\). Each element \(\mathbf{V}_{j_{0},j_{1},:}\) computes a convolution with the data for a given pair of harmonics \(j_{0},j_{1}\) over all points. Each element of \(v(x)\) is multiplied by the magnitude of the harmonic pair at that location, and these values are summed, yielding the signal in Fourier space. \(W\) represents the bias term, and \(R\) is a matrix of parameters which modify the Fourier coefficients, implemented as in the standard FNO.
**Forward Transformations.** The forward transformation computes the Fourier representation of a given function by using Vandermonde structured matrices. In one dimension, the corresponding Vandermonde-structured matrix is \[\mathbf{V}_{j,k}=\frac{1}{\sqrt{n}}\left[e^{-i(jp_{k})}\right]_{j,k=0}^{m-1,n-1}. \tag{2}\] Here, \(\mathbf{p}=[p_{0},p_{1},\ldots,p_{n-1}]^{T}\) is the vector of the positions of data points at which the underlying function is sampled. \(n\) is the number of data points, \(m\) the number of modes, and \(i=\sqrt{-1}\). As in the FFT, note that it is necessary to normalize the positions of data points to a range between \(0\) and \(2\pi\), as the Fourier basis functions are assumed to be periodic on the unit circle. The 2-dimensional Fourier transform is equivalent to one-dimensional Fourier transforms along each axis. Therefore, the transformation on any 2-dimensional lattice, i.e., the tensor product of one-dimensional point distributions along each axis (see Figure 2), can be performed by constructing two Vandermonde matrices, \(\mathbf{V}_{1}\) and \(\mathbf{V}_{2}\), corresponding to the positions of data points along each axis.
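To make the one-dimensional construction of Eq. (2) concrete, the following NumPy sketch (our own illustration, not the released VNO code) builds the forward matrix for arbitrary point positions and checks that, on an equispaced grid, its rows reproduce the first \(m\) FFT coefficients.

```python
import numpy as np

def vandermonde_1d(p, m):
    """Forward matrix of Eq. (2): V[j, k] = exp(-i * j * p_k) / sqrt(n).

    p : positions of the n sample points, normalized to [0, 2*pi)
    m : number of retained Fourier modes (m may be much smaller than n)
    """
    p = np.asarray(p, dtype=float)
    j = np.arange(m)[:, None]                              # modes, shape (m, 1)
    return np.exp(-1j * j * p[None, :]) / np.sqrt(len(p))  # shape (m, n)

# Sanity check on an equispaced grid: up to the 1/sqrt(n) normalization,
# V applied to the samples agrees with the first m FFT coefficients.
n, m = 64, 8
x = 2 * np.pi * np.arange(n) / n
f = np.sin(3 * x) + 0.5 * np.cos(5 * x)
V = vandermonde_1d(x, m)
assert np.allclose(np.sqrt(n) * V @ f, np.fft.fft(f)[:m])
```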
Given \(X\in\mathbb{R}^{n\times n}\) or \(\mathbb{C}^{n\times n}\) as the data matrix containing values of the underlying function, sampled on the non-equispaced lattice, the Vandermonde-structured matrix can be used as a basis transformation of this data to the Fourier space \(\mathcal{X}\in\mathbb{C}^{m\times m}\) given via \[\mathcal{X}=\mathbf{V}_{1}X\mathbf{V}_{2}^{T}. \tag{3}\] The transforms for non-equispaced one-dimensional and non-equispaced two-dimensional rectangular lattices are already present in the literature [37], yet generalizations to other distributions are not present. This related work is discussed in greater detail in Section 4.
Figure 2. Distributions discussed in this paper. The FNO is restricted to the regular grid. The VNO may be applied to the lattice distribution via (3), or the random distribution via (6).
**Extension to arbitrary non-equispaced point distributions.** Next, we would like to extend this construction beyond lattices to arbitrarily non-equispaced point distributions in two dimensions. To do this, we store again the positions of the sampling points as \(P=[\mathbf{p}_{0},\mathbf{p}_{1}]\in\mathbb{R}^{n\times 2}\) and normalize the points' values to a range between \([0,2\pi]\). The corresponding Vandermonde matrix is extended into a \(3^{rd}\) dimension as a tensor, _i.e._, it will have three indices. The first two indices correspond to the sinusoidal components along the first and second dimensions, respectively, while the third index corresponds to a specific point within the distribution. This results in the following _tensor_, \[\mathbf{V}_{j_{0},j_{1},k}=\sqrt{\frac{2}{n}}\left[e^{-i(j_{0}P_{k,0}+j_{1}P_{k,1})}\right]_{j_{0},j_{1},k=0}^{m-1,m-1,n-1}, \tag{4}\] Given \(X\in\mathbb{R}^{n\times n}\) or \(\mathbb{C}^{n\times n}\) as the data matrix containing values of the underlying function, sampled on any arbitrary distribution of points, the Vandermonde-structured matrix can be used as a basis transformation of this data to the Fourier space \(\mathcal{X}\in\mathbb{C}^{n\times n}\) given via, \[\mathcal{X}=\mathbf{V}X, \tag{5}\] with matrix \(\mathbf{V}\) below, \[\mathbf{V}=\sqrt{\frac{2}{n}}\left[\left[\begin{array}{c}e^{-i(0\mathbf{p_{0}}^{T}+0\mathbf{p_{1}}^{T})}\\ e^{-i(1\mathbf{p_{0}}^{T}+0\mathbf{p_{1}}^{T})}\\ \vdots\\ e^{-i((m-1)\mathbf{p_{0}}^{T}+0\mathbf{p_{1}}^{T})}\end{array}\right]\left[\begin{array}{c}e^{-i(0\mathbf{p_{0}}^{T}+1\mathbf{p_{1}}^{T})}\\ e^{-i(1\mathbf{p_{0}}^{T}+1\mathbf{p_{1}}^{T})}\\ \vdots\\ e^{-i((m-1)\mathbf{p_{0}}^{T}+1\mathbf{p_{1}}^{T})}\end{array}\right]\cdots\left[\begin{array}{c}e^{-i(0\mathbf{p_{0}}^{T}+(m-1)\mathbf{p_{1}}^{T})}\\ e^{-i(1\mathbf{p_{0}}^{T}+(m-1)\mathbf{p_{1}}^{T})}\\ \vdots\\ e^{-i((m-1)\mathbf{p_{0}}^{T}+(m-1)\mathbf{p_{1}}^{T})}\end{array}\right]\right] \tag{6}\] This construction can be readily generalized to a fully non-equispaced domain of \(N\in\mathbb{N}\) dimensions by the construction of a tensor with \(N+1\) indices, as in equation (7). Here, \(P=[\mathbf{p}_{0},\mathbf{p}_{1},\ldots,\mathbf{p}_{N-1}]\in\mathbb{R}^{n\times N}\) is a matrix whose columns are vectors of the positions of the data along each dimension. \[\mathbf{V}_{j_{0},\ldots,j_{N-1},k}=\sqrt{\frac{N}{n}}\left[e^{-i\left(\sum\limits_{i=0}^{N-1}j_{i}P_{k,i}\right)}\right]_{j_{0},\ldots,j_{N-1}=0,k=0}^{m-1,\ldots,m-1,n-1} \tag{7}\] **Backward Transformations.** The backward transformation computes the spatial representation of the function given its Fourier coefficients. This is realized in a straightforward manner via the adjoint of the Vandermonde-structured matrix in (6).
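A short NumPy sketch of the general tensor in Eq. (7) and of the forward transform of Eq. (5) on a fully non-equispaced set of points may help fix ideas; it is our own illustration (names and test values are not from the paper's code).

```python
import numpy as np

def vandermonde_nd(P, m):
    """Tensor of Eq. (7) for fully non-equispaced points in d dimensions.

    P : (n, d) array of point positions, normalized to [0, 2*pi)
    m : number of modes kept per dimension
    Returns V with shape (m, ..., m, n), i.e. d leading mode axes.
    """
    P = np.asarray(P, dtype=float)
    n, d = P.shape
    # All mode tuples (j_0, ..., j_{d-1}); shape (m, ..., m, d).
    modes = np.stack(np.meshgrid(*([np.arange(m)] * d), indexing="ij"), axis=-1)
    # phase[j_0, ..., j_{d-1}, k] = sum_i j_i * P[k, i]
    phase = np.tensordot(modes, P.T, axes=([-1], [0]))
    return np.sqrt(d / n) * np.exp(-1j * phase)

# Forward transform of Eq. (5): contract the point axis of V with the
# vector of function values sampled at the n scattered points.
rng = np.random.default_rng(0)
P = rng.uniform(0.0, 2 * np.pi, size=(500, 2))     # 500 random 2-D points
f = np.sin(P[:, 0]) * np.cos(2 * P[:, 1])
V = vandermonde_nd(P, m=6)
f_hat = V @ f                                       # Fourier coefficients, shape (6, 6)
```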
We define the adjoint as the conjugate-transpose of the Vandermonde-structured matrix, denoted \(\mathcal{V}_{j,k}^{*}\), where \(\mathcal{V}_{j,k}\) is the stacked version of (6) as defined in **Supplementary Material (SM)** (15). ### The Vandermonde Neural Operator Next, we propose a new neural operator for extending FNO to handle inputs on arbitrary point distributions. For clarity, we maintain consistency with the notation used by Li et al. [24] when presenting this procedure. Our neural operator is expressed as an iterative map \(v_{t}\mapsto v_{t+1}\)\(\forall t\in\{0,1,\ldots,T-1\}\). The subsequent solution, \(v_{t+1}\) is expressed as, \[v_{t+1}(x)=\sigma\left(Wv_{t}(x)+\left(\mathcal{K}(\phi)v_{t}\right)(x)\right), \tag{8}\] with nonlinear activation function \(\sigma:\mathbb{R}\mapsto\mathbb{R}\) applied elementwise, residual connection \(W:\mathbb{R}^{d_{v}}\mapsto\mathbb{R}^{d_{v}}\), and a bounded linear _kernel_ operator. We recall that for FNO [24], this _kernel operator_ realizes a convolution in Fourier space with kernel, \[(\mathcal{K}(\phi)v_{t})(x)=\mathcal{F}^{-1}\left(R_{\phi}\cdot\mathcal{F}(v _{t})\right)(x),\quad\forall x\in D \tag{9}\] Here, \(\mathcal{F},\mathcal{F}^{-1}\) are the Fourier and Inverse Fourier transforms, respectively, \(D\subset\mathbb{R}^{d}\), and \(R_{\phi}\in\mathbb{C}^{d_{v}\times d_{v}}\) is a matrix representing the Fourier transformation of a learned periodic kernel function parameterized by \(\phi\). In the discrete case, these Fourier and inverse Fourier transforms are performed with the FFT. However, this choice imposes the assumption that the input data is sampled on a regular equispaced grid. To deal with input data sampled at points with non-equispaced distributions, we replace the _Fourier layer_ (9) with the following _Vandermonde layer_, \[(\mathcal{K}(\phi)v_{t})(x)=\mathcal{V}^{*}\left(R_{\phi}\cdot\mathcal{V}(v_{t })\right)(x),\quad\forall x\in D. \tag{10}\] Here, \(\mathcal{V}\) denotes a transformation by the Vandermonde structured matrix, defined by \[(\mathcal{V}(f))_{\xi}(\mathbf{k})=\sum_{j=0}^{n-1}f_{\xi}(\mathbf{x}_{j})e^{ -2\pi i\langle\mathbf{x}_{j},\mathbf{k}\rangle},\qquad(\mathcal{V}^{*}(f))_{ \xi}(\mathbf{x})=\sum_{\mathbf{k}\in[\mathbb{Z}_{m}]^{d}}f_{\xi}(\mathbf{k})e ^{2\pi i\langle\mathbf{x},\mathbf{k}\rangle}, \tag{11}\] where \(i=\sqrt{-1}\), \(\xi=0,\ldots,d_{v}-1\), \(f:D\mapsto\mathbb{R}^{d_{v}}\), \(\mathbf{k}=(k_{0},\ldots,k_{d-1})\in[\mathbb{Z}_{m}]^{d}\), \(\mathbf{x}=(x_{0},\ldots,x_{d-1})\in D\), where \(D\) has been discretized by \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\), and \(\langle\mathbf{x},\mathbf{k}\rangle=x_{0}k_{0}+x_{1}k_{1}+\cdots+x_{d-1}k_{d-1}\). This is tantamount to a multiplication by the matrix in (4) for two dimensional data on a lattice, and (7) for the general case. The resulting neural operator of the form (8) with kernel given by (10), is termed as the _Vandermonde Neural operator_ or VNO. Figure 1 provides an illustration of this architecture. This generalization allows VNO to be applied beyond the FNO's restriction to equispaced grids. Furthermore, as the structure of the Vandermonde matrix makes no assumptions about the point distributions, it is possible to learn the solution operator to PDEs from random distributions of sampling points, even if each realization (input sample) has a different distribution. Additionally, as no additional transformations are to be learned like in [29], the training complexity remains the same as in FNO. 
Instead, the VNO can be applied directly to a non-equispaced distribution to compute the most accurate Fourier coefficients from the provided data. ### Computational Complexity of VNO The most notable feature of FFT is its computational efficiency. Calculating the Fourier coefficients of a 1-dimensional signal, sampled at \(n\) points, by using the brute force DFT, costs \(O(n^{2})\). In contrast, the FFT algorithm computes these coefficients with \(O(n\log n)\) complexity. Hence, it is natural to wonder why one should reconsider matrix multiplication techniques in our setting. In this context, we observe that the maximum performance gain with FFT occurs when the FFT computes all the Fourier coefficients, or modes, of an underlying signal. Furthermore, peak efficiency is reached for points on a dyadic interval. While the number of modes to compute may be truncated, the interconnected nature of the self-recursive radix-2 FFT algorithm makes it difficult in practice to attain peak efficiency. We refer the reader to **SM** Figure 4 for a visual representation. Therefore, reported performance gains by new FFT algorithms are often optimistic. Thus, in the case of truncated modes, matrix multiplications techniques should not be ruled out. Moreover, for neural operators such as FNO and VNO, only a small subset of nonzero modes are required to approximate the operator [24]. This implies that for a one-dimensional problem, the Vandermonde matrix has a fixed number of rows, while the number of columns grows with the problem size. Therefore, the computational complexity of the proposed transformations by a Vandermonde matrix cost \(O(n)\) as the Vandermonde-structure can be fully determined using \(O(n)\) as opposed to \(O(n^{2})\)[38, 39, 40], and hence the number of points is independent of the number of modes. ## 3. Experimental Results In this section, our aim is to investigate the performance of the proposed VNO architecture on a challenging suite of diverse PDE learning tasks. Implementation, Training Details and Baselines. A key contribution of this paper is a new implementation of the Vandermonde structured matrix multiplications in _PyTorch_, which enables us to efficiently compute Fourier and Inverse Fourier transforms. Within a neural network, an efficient \(O(n)\) algorithm must also be parallelizable to handle batches, as this massively speeds up the training process. Batches of data with the same or different point distributions are easily handled by the _torch.matmul()_ and _torch.bmm()_ functions, respectively. In all experiments, we use the same hyperparameters for training purposes; ADAM optimizer with a learning rate of 0.005, scheduler step 10, gamma decay of 0.97, and trained for 500 epochs. We also use the L1-loss function, which produced both a lower L1-error and L2-error than the L2-loss. The test error was measured as the relative L1 error. As baselines, we use FNO in all the experiments and its variant, Geo-FNO [29] in experiments where the underlying domain has a complicated, non-equispaced geometry. For comparisons with the FNO or Geo-FNO, we choose the number of modes in each layer as well as the width of each layer to be the same across all architectures in an experiment. Given that the Fourier layer is fundamentally the same for VNO, FNO, and Geo-FNO, albeit using different methods to compute the transform, the relative performance differences between methods are consistent as the number of modes and width are varied. 
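For readers who prefer code to formulas, the following is a condensed PyTorch sketch of the Vandermonde layer of Eq. (10) in one dimension, written with the tensor contractions discussed above; it is our own illustration under stated assumptions (class and variable names, the weight initialization, and the precomputation of \(\mathbf{V}\) for a shared point distribution are not taken from the released implementation). For per-sample point distributions one would instead store a batch of matrices and use _torch.bmm_, as noted above.

```python
import torch

class VandermondeLayer1d(torch.nn.Module):
    """Minimal 1-D sketch of Eq. (10): V* ( R . V(v) ), applied channel-wise."""

    def __init__(self, width, modes, positions):
        super().__init__()
        n = positions.numel()
        j = torch.arange(modes, dtype=torch.float32)
        phase = torch.outer(j, positions.float())                    # (modes, n)
        # Forward Vandermonde matrix of Eq. (2), precomputed once.
        self.register_buffer("V", torch.exp(-1j * phase) / n ** 0.5)
        scale = 1.0 / (width * width)
        # Learned mode-wise channel mixing, analogous to R_phi in the FNO.
        self.R = torch.nn.Parameter(
            scale * torch.randn(width, width, modes, dtype=torch.cfloat))

    def forward(self, v):                       # v: (batch, width, n), real-valued
        v_hat = torch.einsum("mk,bck->bcm", self.V, v.to(torch.cfloat))   # V(v)
        v_hat = torch.einsum("dcm,bcm->bdm", self.R, v_hat)               # R . V(v)
        out = torch.einsum("mk,bdm->bdk", self.V.conj(), v_hat)           # adjoint V*
        return out.real
```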
We see this as a fair comparison, as all model parameters and the model size remains consistent within an experiment. All experiments are performed on the Nvidia GeForce RTX 3090 with 24GB memory. Benchmark 1: Burgers' Equation. The one-dimensional viscous Burgers' equation is a widely considered model problem for fluid flow given by \[\partial_{t}u(x,t)+\partial_{x}(\frac{1}{2}u^{2}(x,t)) =\nu\partial_{xx}u(x,t)\qquad x\in(0,1)\quad t\in(0,1]\] \[u(x,0) =u_{0}(x)\qquad x\in(0,1) \tag{12}\] where \(u\) denotes the fluid velocity and \(\nu\) the viscosity. We follow [24] in fixing \(\nu=0.1\) and considering the operator that maps the initial data \(u_{0}\) to the solution \(u(\cdot,T)\) at final time \(T=1\). The training and test data, presented in [24] for this problem, is used. Points are sub-sampled to create non-equispaced distributions. We test the VNO and FNO on this benchmark on three different point distributions, shown in **SM** Figure 5, namely equispaced point distribution, contracting-expanding distribution, and fully non-equispaced randomly chosen set of points. The training time (per epoch) and test errors are shown in Table 1. We observe from this table that for equispaced points, both FNO and VNO have low test errors, with FNO being marginally more accurate than VNO. On the other hand, VNO has slightly lower training time per epoch showing that batched matrix multiplications, which form the basis of VNO, are as computationally efficient as the FFT algorithm that underpins the FNO. However, FNO cannot be directly applied to the two non-equispaced point distributions that we consider here (contracting-expanding and random). Hence, we have to interpolate the input point values of the underlying function to an equispaced grid and subsequently apply FNO. To this end, we use a cubic-spline interpolation procedure and report the test results of this model in Table 1. We observe that for the contracting-expanding distribution, VNO is more accurate than FNO-interpolation while being faster in training by a factor of almost 4. For the random point distribution, FNO-interpolation is clearly more accurate but has a significantly higher training time (almost a factor of 6) vis a vis VNO. Thus, this experiment already indicates that not only is VNO able to handle input data on arbitrary non-equispaced grids, its accuracy is still comparable to FNO. When FNO is augmented with interpolation procedures to deal with non-equispaced sample points, VNO is considerably faster to train while being comparable in accuracy. \begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & **model size** & **training time** & **testing error** \\ \hline **Burgers’ Equation** & & & \\ _Equispaced Distribution:_ & & & \\ VNO & 549569 & **0.86**s & 0.095\% \\ FNO & 549569 & 0.97s & **0.071**\% \\ _Contracting-Expanding Distribution:_ & & & \\ VNO & 549569 & **0.35**s & **0.86**\% \\ FNO–interpolation & 549569 & 1.24s & 1.00\% \\ _Random Distribution:_ & & & \\ VNO & 549569 & **0.25**s & 1.67\% \\ FNO–interpolation & 549569 & 1.41s & **1.11**\% \\ \hline **Shear Layer** & & & \\ VNO & 6571010 & **44**s & **5.89**\% \\ FNO & 6571010 & 189s & 6.16\% \\ \hline **Surface-level Specific Humidity** & & & \\ VNO & 16786657 & **3.6**s & **4.37**\% \\ FNO & 16786657 & 38s & 5.25\% \\ \hline **Flow past Airfoil** & & & \\ VNO & 2368225 & 7.38s & **0.49**\% \\ Geo-FNO & 3020963 & 7.42s & 1.14\% \\ \hline \hline \end{tabular} \end{table} Table 1. 
Performance results for the experiments on shear flow, surface-level specific humidity, and flow past airfoils. The FNO is applied to a dense, equispaced, rectangular grid for the first three problems, while the VNO is applied to a lattice engineered to balance resolution and efficiency. Benchmark 2: Shear Layer. We follow a recent work on convolutional neural operators [23] in considering the incompressible Navier-Stokes equations \[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}+\nabla p=\nu\Delta\mathbf{u},\quad\nabla\cdot\mathbf{u}=0. \tag{13}\] Here, \(\mathbf{u}\in\mathbb{R}^{2}\) is the fluid velocity and \(p\) is the pressure. The underlying domain is the unit square with periodic boundary conditions and the viscosity \(\nu=4\times 10^{-4}\), only applied to high-enough Fourier modes (those with amplitude \(\geq 12\)) to model fluid flow at _high Reynolds-number_. The solution operator maps the initial conditions \(\mathbf{u}(t=0)\) to the solution at final time \(T=1\). We consider initial conditions representing the well-known _thin shear layer_ problem [41, 42] (see [23] for details), where the shear layer evolves via vortex shedding to a complex distribution of vortices (see Figure 3(a) for an example of the flow). The training and test samples are generated, with a spectral viscosity method [42] of a fine resolution of \(1024^{2}\) points, from an initial sinusoidal perturbation of the shear layer [42], with layer thickness of \(0.1\) and \(10\) perturbation modes, the amplitude of each sampled uniformly from \([-1,1]\) as suggested in [23]. As seen from Figure 3(a), the flow shows interesting behavior with sharp gradients in two mixing regions, which are in the vicinity of the initial interfaces. On the other hand, the flow is nearly constant further away from this mixing region. Hence, we will consider VNO with input functions being sampled on a lattice shown in **SM** Figure 6 which is adapted to resolve large gradient features of the flow. On the other hand, FNO is tested on the equispaced point distribution. From Table 1, we observe that VNO is marginally more accurate than FNO while being \(4\) times faster per training epoch, demonstrating a significant computational advantage over FNO on this benchmark. Benchmark 3: Surface-Level Specific Humidity. Next, we focus on a _real world_ data set and learning task where the objective is to predict the surface-level specific humidity over South America at a later time (\(6\) hours into the future), given inputs such as wind speeds, precipitation, evaporation, and heat exchange at a particular time. The exact list of inputs is given in **SM** Table 2. The physics of this problem are intriguingly complex, necessitating a _data-driven approach_ to learn the underlying operator. To this end, we use data provided by the Modern-Era Retrospective analysis for Research and Application v2 (MERRA-2) satellite data to forecast the surface-level specific humidity [43]. Moreover, we are interested in a more accurate regional prediction, namely over the Amazon rainforest. Hence, for the VNO model, we will sample data on points on a lattice that is more dense over this rainforest, while being sparse (with smooth transitions) over the rest of the globe, see **SM** Figure 7 for visualization of this lattice. In contrast to VNO, FNO samples input on the equispaced global grid. Test error is calculated over the region, shown in Figure 3(b).
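As an aside, one simple way to generate a tensor-product lattice that is denser over a region of interest, in the spirit of the lattices of SM Figures 6 and 7 (which were designed by the authors; the construction below is only an illustrative sketch of ours), is to invert a cumulative density along each axis:

```python
import numpy as np

def graded_axis(n, center=0.3, width=0.1, boost=4.0):
    """n points in [0, 1], smoothly concentrated near `center` (illustrative only)."""
    xs = np.linspace(0.0, 1.0, 2001)
    density = 1.0 + boost * np.exp(-0.5 * ((xs - center) / width) ** 2)
    cdf = np.cumsum(density)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])              # normalized CDF on [0, 1]
    return np.interp(np.linspace(0.0, 1.0, n), cdf, xs)    # invert the CDF

# Tensor-product lattice, refined near (0.35, 0.55); usable with the lattice transform (3).
px = graded_axis(64, center=0.35)
py = graded_axis(64, center=0.55)
lattice = np.stack(np.meshgrid(px, py, indexing="ij"), axis=-1)   # shape (64, 64, 2)
```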
The results, presented in Table 1, show that VNO is not only more accurate than FNO, but also one order of magnitude faster to train. The greater accuracy of VNO over FNO, on a regional scale, is also clearly observed in Figure (b)b, where we observe that VNO is able to capture elements such as the formation of vortices visible in the lower right hand corner and the mixing of airstreams over the Pacific Ocean. These elements are smoothed out by the FNO. Benchmark 4: Flow Past Airfoil. We also investigate transonic flow over an airfoil, as governed by the compressible Euler equations, \[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{u})=0\qquad\quad\frac{ \partial\rho\mathbf{u}}{\partial t}+\nabla\cdot(\rho\mathbf{u}\times\mathbf{u}+p \mathbb{I})=0\qquad\quad\frac{\partial E}{\partial t}+\nabla\cdot((E+p)\mathbf{ u})=0. \tag{14}\] Here, \(\rho\) is the fluid density, \(t\) time, \(\mathbf{u}\) the velocity vector, \(p\) the pressure, and \(E\) the total energy, related to each other by an ideal gas equation of state. The data for this experiment has been taken from [29], where the authors have chosen farfield conditions \(\rho_{\infty}=1\left[\mathrm{kg/m^{3}}\right]\), \(p_{\infty}=1\left[\mathrm{atm}\right]\), Mach number \(M_{\infty}=0.8\), and angle of attack \(AoA=0\). The underlying operator maps the airfoil shape to the pressure field. In this case, the underlying distribution of sample points changes between each input (airfoil shape). VNO can readily handle this situation. As a baseline, we use Geo-FNO as proposed in [29]. To have a fair comparison with VNO, we allow Geo-FNO to learn the diffeomorphism online just as VNO does. The test errors, presented in Table 1 show that VNO is significantly more accurate than Geo-FNO for this problem, while being comparable in training time. In particular, as observed from Figure 2(c), VNO captures the trailing shock much better than Geo-FNO. Figure 3. These figures display examples of the ground truth, the target which the VNO, FNO, or Geo-FNO attempt to match. Left: Ground Truth. Center: VNO Right: FNO for (a) and (b) and Geo-FNO for (c). ## 4. Discussion **Summary.** FNO has emerged as a widely used architecture for operator learning, particularly in the context of PDEs. However, as the FNO relies on the FFT to efficiently carry out convolutions in Fourier space, its application is restricted to input data sampled on equispaced Cartesian grids. In practice, input data is often sampled on non-equispaced point distributions. Hence, extending FNO to handle such scenarios is of great interest. In contrast to the common strategy of interpolating or transforming data from non-equispaced to equispaced point distributions, we aimed to propose an alternative framework that directly replaced FFT within FNO to allow for input functions to be sampled on more general point distributions. To this end, we leveraged the Vandermonde-structured matrices that arise quite naturally in computing the Fourier and Inverse Fourier transforms. By coming up with an efficient implementation of the resulting _batched matrix multiplications_, we propose a new operator learning framework called the Vandermonde Neural operator (VNO) with the ability to handle inputs on arbitrary non-equispaced point distributions. 
We compare VNO with FNO and recently proposed extensions of FNO such as Geo-FNO on a suite of experiments that range from the simple one-dimensional viscous Burgers' equation to more complicated incompressible and compressible flow equations and also a realistic Earth science data set. We find that the VNO can be significantly faster at training, while being comparable in accuracy or significantly more accurate than FNO (and its extensions) on the considered problems. Hence, we demonstrate that the VNO can serve as an efficient and accurate neural operator which can be broadly applied in various scenarios. **Related Work.** We start with a succinct summary of the extensive literature on Vandermonde-structured matrices and related constructions. In this context, the delay Vandermonde matrix (DVM) is a superclass of the DFT matrix. The DVM structure is utilized to analyze TTD wideband multibeam beamforming while solving the longstanding beam squint problem [44, 45, 46, 47, 48]. Although the Vandermonde matrices can be ill-conditioned [49, 50, 51, 52, 53, 39, 54], the proposed VNO does not encounter ill-conditioning, as nodes are placed on the unit disk [54, 55, 48], and we do not compute the explicit inverse of the Vandermonde-structured matrix because it is too expensive and numerically less accurate [51]. Nonequispaced FFT (NFFT) has already been developed [56, 57, 58]. These algorithms rely on a mixture of interpolation, windowing techniques, and strategic applications of FFT to maintain \(O(n\log n)\) cost. However, the inverse NFFT cannot be calculated in a comparably direct manner [59, 60]. The nonequispaced discrete Fourier transform is also presented as a summation [56, 57, 58], which can be arranged into a Vandermonde-structured matrix, but this is not presented in the literature. Excluding fast transforms, the structure of the Vandermonde matrix has been employed to perform the nonequispaced, or nonuniform discrete Fourier transform (NUDFT) [37] in one and two dimensions; however, the two-dimensional distributions, in this case, are limited to the lattice and nonuniform parallel lines. The methods we propose in this paper extend the use of Vandermonde-structured matrices to irregular point distributions in two or more dimensions. The NUDFT is rarely used, as many applications of Fourier transforms require all Fourier coefficients, resulting in \(O(n^{2})\) cost [37]. This is not the case for the FNO, and thus the proposed VNO avoids the rapidly growing computational costs associated with NUDFT. **Limitations and Future Work.** The elements of the Vandermonde-structured matrix are directly related to the positions of the data points for a given problem. If all samples are using an identical point distribution, the Vandermonde-structured matrix may be constructed once, at the beginning of the run time. However, this cannot be done if point distributions vary among samples. In this case, we must either precompute Vandermonde-structured matrices corresponding to each point distribution and load them with the corresponding data, or we must construct the matrices at run-time. Precomputing the matrices can offer performance advantages, but for problems with many data points, a large number of samples, or a large number of modes, the size of the Vandermonde-structured matrices in memory can grow quite large, even exceeding the size of the original data set. Furthermore, the number of rows grows as the power of the number of spatial dimensions even though the operation complexity. 
This may limit this method for data in 3 or more spatial dimensions--even though the operation complexity is approximately equivalent to the FFT. Performance gains from precomputing the matrices are diminished as such large matrices can not be loaded from memory as quickly. Constructing the matrices at run time, _i.e._, during training, also hinders performance. In the future, it would be worth investigating how the run-time matrix construction might be sped up by using compile time languages [61]. Another direction to investigate is the use of different basis functions. The Fourier basis functions assume periodicity along the torus, but it is possible to extend and modify the Vandermonde-structured approach to handle spherical basis functions as well, expanding the use of VNO techniques to new fields [62, 63, 64, 65].
2306.00104
Teaching Linear Algebra in a Mechanized Mathematical Environment
This paper outlines our ideas on how to teach linear algebra in a mechanized mathematical environment, and discusses some of our reasons for thinking that this is a better way to teach linear algebra than the ``old fashioned way''. We discuss some technological tools such as Maple, Matlab, Python, and Jupyter Notebooks, and some choices of topics that are especially suited to teaching with these tools. The discussion is informed by our experience over the past thirty or more years teaching at various levels, especially at the University of Western Ontario.
Robert M. Corless, David J. Jeffrey, Azar Shakoori
2023-05-31T18:24:57Z
http://arxiv.org/abs/2306.00104v1
# Teaching Linear Algebra ###### Abstract This paper outlines our ideas on how to teach linear algebra in a mechanized mathematical environment, and discusses some of our reasons for thinking that this is a better way to teach linear algebra than the "old fashioned way". We discuss some technological tools such as Maple, Matlab, Python, and Jupyter Notebooks, and some choices of topics that are especially suited to teaching with these tools. The discussion is informed by our experience over the past thirty or more years teaching at various levels, especially at the University of Western Ontario. Keywords:mechanization linear algebra teaching. ## 1 Overview "Linear algebra is the first course where the student encounters algebra, analysis, and geometry all together at once." --William (Velvel) Kahan, to RMC at the 4th SIAM Linear Algebra Conference in Minneapolis 1991 This paper describes the current state of our ongoing practice of teaching linear algebra in mechanized environments. We report our thoughts, arrived at after several decades of history in differing technological and administrative support structures. Some of our teaching philosophy is laid out in [2] and the references therein (especially for active teaching), but to keep this paper self-contained we will give a precis of our approach in section 1.1. We believe that this paper will be of interest for this conference both for its use of various computational environments (Jupyter notebooks, Maple, Matlab, and historically the HP48 series of calculators) and for its recommendations of what is needed for future environments for mechanized mathematics. Linear algebra as a mathematical subject is second only to Calculus in terms of overall teaching effort at secondary institutions, accounting for many millions of dollars spent every year. There are those who believe that we should devote even more money and effort to it, because linear algebra is foundational for so many applications: optimization (linear programming), scientific computing, and analysis of data, for examples. We take as fundamental that the vast majority of people taking these enormous numbers of courses are not going to choose careers as pure mathematicians. Rather, they are going to become engineers, biologists, chemists, physicists, economists, computer scientists, or something else1. They will likely need probability, and methods to solve linear equations, and the understanding of what an eigenvalue is (and perhaps what a singular value is). By and large they will not need to reason their way out of tricky artificial problems. They will need graph theory, and how to solve algebraic equations. They will need to learn how to use computers to help with the drudgery of the computations involved, so that they can be free to think about what the answers mean, instead of how they are arrived at. They will need to learn when they can rely on computers to help, and when they should be suspicious. Footnote 1: The diversity of where our students go afterwards makes it tricky to choose motivating applications. Network flow problems will appeal to a subset of people; electrical circuits might appeal to another subset. Markov chains are fun for some. Very few applications are interesting to everybody. Our favourite introductory textbook--out of the myriad possible choices--arose from an NSF-funded educational project, namely [6]. The book is [14]. Yet this choice is not uncontroversial, and the book is not an especially good match for a mechanized environment. 
We see a need for a specialized textbook to support active learning of linear algebra in a mechanized environment. ### Active learning in a mechanized environment Within the mathematics mechanization community, it is uncontroversial to assert that the tools available and being developed will make the learning and practice of mathematics better. In theory, this is obvious. In practice, there are devils in the details. For one thing, students (and researchers in industrial environments) must be trained in the use of the new tools, and the time spent learning these tools cannot also be spent on learning the mathematical topics. For this reason, we advocate at least some "re-use" of tools, namely that teaching of mechanized mathematics should use tools that will also be used for something else in the student's or researcher's career. Nowadays this largely means Jupyter notebooks and Python, which are both very popular in data science and neuroscience. In a few years this might mean a replacement for Jupyter together with Julia (perhaps). The one thing that we can say about the software environment for mathematics is that it is changing as rapidly now as it ever has been. However, it will not be surprising to the attendees of this conference that there are lessons to be learned from attempts to use mechanized mathematics in teaching in the past. Indeed the "deep structure" of Python is not so different from that of Maple, and many aspects of programming in the one language transfer readily to the other (for instance, dictionaries in Python are analogous to tables in Maple). More to the point, learning to program in any language exercises some of the same mental muscles that writing a mathematical proof does. The analogy between recursion and mathematical induction is very close, indeed. So, at least some of the material that has been developed with older technology can be given some syntactic re-sugaring and used in much the same way. We will give examples. The most important use of technology, however, is to increase the activity level of the student. One needs to engage the student's attention, and get them to do more than just passively read a text, attend a lecture, watch a video, or regurgitate on an exam. In some ways, fashion helps with this. The students are more likely to want to learn Python than (say) C. ### How to teach with technology There are many papers, and indeed books, written on how to teach with technology. We mention the influential paper [3], which introduced the "White Box" / "Black Box" model, which we have used with some success. The idea there is that when teaching a particular technique (for instance, what a determinant is) the student is not allowed to use the Determinant command; but after they have understood that topic, whenever they are using determinants in a future topic (say, Cramer's Rule) they are allowed to use it. The psychological and pedagogical point is that people need a certain amount of human action with a concept before it is internalized. We tend to say that at that point, the concept has become an answer to the student instead of a question. At that point, the students can use the technology with assurance, and the feeling that they know what is going on. This rule can be used in other ways, and even backwards: use a tool as a mysterious Black Box for a while, probing its output by giving it various inputs until some sense of what is going on arises. 
We have used this reverse strategy with some success, as well, most commonly with the Singular Value Decomposition (SVD). See [2] for more strategies for teaching with technology that have been tested in practice. ### What to teach, when technology is involved A much more interesting question arises when one considers that the curriculum must be continually curated as new tools come available. New topics may be added (for instance, the SVD), and old topics dropped (for instance, condensation, or perhaps Gauss-Seidel iteration). Indeed a certain amount of room must be made in the course for instruction in the responsible use of the new tools. This is by no means easy, and the students will resist such instruction if they are not also assessed on the use of the tools. The fact that they will be expected to use these tools later in life as a matter of course is sometimes not enough to encourage the students to learn them now. However, society appears to expect that we as instructors will be teaching the students the best way to actually use the material we teach, and (as a matter of course) this means that we must be teaching the students to use the tools of modern mechanized mathematics. Those of us who are actually in the classroom know that sometimes compromises are necessary. ### Outline of the paper In section 2, we discuss some of the tools that are available. In section 3 we mention a few necessary topics that work well with these tools (we do not give a full syllabus, because of space limitations). In section 4 we discuss methods of assessment. In section 5 we discuss some reactions from colleagues and students to these changes from a traditional syllabus, and then conclude. ## 2 Tools The members of this community will have their own preferred computational tools, which may not be the same as ours. We will not fully justify our choices here, but instead sketch only some of the reasons for our choices. ### Proprietary Tools We do use some proprietary tools, namely Maple and Matlab. Our Universities have site licences for these, and we have a significant body of experience with using these tools both for research and for teaching. Many engineering students will graduate into work environments that have Matlab, and by the usual feedback mechanism from other students and other professors, most engineering students are well-motivated to learn Matlab. Matlab has some especially nice tools for sparse matrices, and its live scripts are quite usable. Maple is less well-used in industry, but in some countries it does have a presence; nonetheless it is a harder "sell" to students, and if the course does not explicitly give marks for knowing how to use Maple, students are sometimes reluctant to spend time learning it. But it is powerful enough that students do appreciate it, once they have made the effort. There are other proprietary products which also could be used. Maple Learn is a new one, for instance; but we do not yet have experience with it. Other places will use Mathematica instead of Maple, but the concerns and affordances are similar. ### Free software Within the free software ecosystem, Python and Jupyter stand out as tools of choice for a lot of scientists and engineers. For linear algebra, Matlab and Maple are both superior in terms of capability and in terms of ease of use (in our opinion), especially for sparse matrices, but there is no doubt whatever that Python and Jupyter are more popular. 
Python is remarkable for its support for long integer arithmetic (although its quiet casting of types behind the scenes can cause problems, especially when things unexpectedly contain 32 bit unsigned integers instead of the expected long integers). Learning to program in Python is perhaps easier in the beginning than is learning any other language (we are aware that opinions differ in this regard, but surely the statement "the easy parts of Python are easy to learn" would be uncontroversial). Julia is newer, more exciting, and extremely impressive for its speed as well as its ease of use. We anticipate that use of Julia will eclipse that of Python. ### Visualization Linear algebra might not seem to need visualization tools as much as Calculus does, but there are several instances where we have found dynamic visualizations to be extremely helpful. One is exemplified by the old Matlab command eigshow (which, curiously, has been deprecated and moved into a relatively obscure location inside the Matlab environment) which is extremely effective in giving students "aha!" moments about both eigenvalues and singular values. One of the keys to that tool's effectiveness is (was) the kinesthetic use of the mouse, by the student, to move the input vectors around. The immediate visual feedback of where the output eigenvectors (and singular vectors) move to in response is, in our experience, much more effective than simple animations (or static pictures). More simply, getting the students to plot eigenvalue distributions, or to plot eigenvector components, is valuable as an action. An opportunity, neglected in most courses and textbooks, is the making of a connection between equation solving and linear transformations. Typically, a course or book opens with an algebraic account of equation solving. The question of how many solutions an equation has is answered by row reduction and the defining of column space. When transformations are introduced, equation solving is not reconsidered. The equation \(Ax=b\) is a transformation of the unknown \(x\), in the domain of \(A\), to the range, containing \(b\). The reverse journey is equation solving, and can be the subject of visualization. In 2-D, everything is rather trivial1, so software allowing 3-D interactive plotting is much better. Transforming a cube using a singular matrix, we observe that the cube is squashed flat. An equation, or the reverse transform, is solvable only if \(b\) lies in the plane. See figure 1. Footnote 1: We resisted the temptation to call it “2” trivial. ### Programming One of the most venerable introductory programming tasks is to write code for LU factoring. One can then add partial pivoting, complete pivoting, or rook pivoting. The topic is accessible, but difficult enough that students will really feel a sense of accomplishment when they have succeeded. The hard part is to get them actually to do it and not to copy someone else's code. This is especially true in engineering classes, where the students are so heavily pressured that they feel that they must cut corners wherever they can. One needs to be creative, here, in finding ways to encourage them not to cheat themselves. One method that we have found effective is to allow them to work in small groups, and to allow them to use code that they find on the internet or copy from other groups provided that they give proper credit and cite where they found it. 
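To fix ideas, a minimal NumPy sketch of the LU-with-partial-pivoting exercise mentioned at the start of this subsection might look as follows; this is our own illustration of the target, not a model solution to be handed to students.

```python
import numpy as np

def lu_partial_pivoting(A):
    """Factor P A = L U with partial pivoting (teaching-style sketch)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    P, L, U = np.eye(n), np.eye(n), A.copy()
    for k in range(n - 1):
        # Choose the largest pivot in column k (rows k..n-1) and swap rows.
        p = k + np.argmax(np.abs(U[k:, k]))
        U[[k, p], :] = U[[p, k], :]
        P[[k, p], :] = P[[p, k], :]
        L[[k, p], :k] = L[[p, k], :k]
        # Eliminate below the pivot.
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]
        U[k + 1:, :] -= np.outer(L[k + 1:, k], U[k, :])
    return P, L, U

A = np.random.default_rng(1).normal(size=(4, 4))
P, L, U = lu_partial_pivoting(A)
assert np.allclose(P @ A, L @ U)
```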
Students are frequently surprised that their instructors know about Stack Overflow or Chegg as well; but then, in a work environment, any and all tools will be allowed. With some creativity in problem assignment, enough novel features can be used so that the online resources will only help, not solve the complete problem for them. That's unless they use the outright cheating resources where the students post the problems and pay other people to give them the solutions, of course. To combat that, you have to encourage a culture of honesty by being honest yourself and by actually punishing people caught cheating in that way, so that the honest students feel that they can benefit more by remaining honest. However, that's a very hard problem to deal with. It is however something that people in the mathematics mechanization community need to be aware of. For some decades now, some fully automatic servers have been giving step-by-step solutions to math homework problems. This is only going to get harder for educators to deal with. The statement "if anything can be automated, it should be automated" ignores the need for the "White Box" part of education. Some concepts need human manual work to be internalized. Remark 1: Many students are only comfortable using computers where they simply enter the data into prescribed fields, and push buttons to achieve pre-programmed aims. One of the things that we want them to do is to get their "keyboards dirty" and engage with a programming language. Doing this at the same time as teaching them the concepts of linear algebra is a stretch. One should expect only minimal success with getting them to write programs, and then only if you assess them (give them marks) on their ability to do so. Time Figure 1: Transformation of the cube with a singular matrix. The three images are an attempt to show in a static medium a student rotating the plot to see that the cube is now flat. \(Ax=b\) has no solution because \(b\) is not in the plane. We show, however, a projection of \(b\) onto the plane, if least-squares is part of the course. spent on that is time that cannot be spent on linear algebra topics. The topics that we discuss below are chosen in part for their aptness to programming. ## 3 Topics In this section we sketch some of the topics that we feel should be encountered in a modern, mechanized, first course in linear algebra, together with how we think that some of the described tools can help with the concepts. ### The language of matrices There is a nontrivial transition from systems of equations such as \[3x+4y =7\] \[2x-8y =1 \tag{1}\] to the equivalent matrix equations, and most mechanized systems do not have features to help with this transition. Matlab, for instance, expects the user to enter the matrices. We spend some time on this transition, and the conventions that lead to the natural rules for matrix-vector multiplication and thence to matrix-matrix multiplication. The use of elementary matrices to encode operations on equations (especially elimination of a variable) is a crucial feature. With beginning students, this takes time. Hand manipulation is best for this at the beginning, but after experiencing a certain amount of tedium, the students begin to appreciate the ability to construct and manipulate equations through the algebraic rules of matrix multiplication5. 
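In a Jupyter/NumPy setting, the transition from the system (1) to its matrix form, and the use of an elementary matrix to encode the elimination step, can be demonstrated in a few lines; the snippet below is a sketch of the kind of in-class demonstration we have in mind, not a prescribed syllabus item.

```python
import numpy as np

# The system (1) in matrix form:  A x = b
A = np.array([[3.0, 4.0],
              [2.0, -8.0]])
b = np.array([7.0, 1.0])

x = np.linalg.solve(A, b)          # the vector [x, y]
E = np.array([[1.0, 0.0],
              [-2.0 / 3.0, 1.0]])  # elementary matrix: eliminate x from the second equation
print(E @ A)                       # upper triangular: the row operation, encoded as a product
assert np.allclose(A @ x, b)
```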
The simple syntax of Matlab is likely the most appreciated: A*b for matrix-vector multiplication is close to \(\mathbf{A}\cdot\mathbf{b}\), a common human notation; omitting the \(\cdot\) seems natural. Maple's A.b is somewhat less natural. Footnote 5: They quite like Maple’s GenerateMatrix command, which transforms linear equations with variables into matrix-vector equations. We try to be careful to introduce this only after the students have some experience in doing the transformation by hand. Python's notation is similar, except for one thing. The issue is transpose. Some linear algebra approaches are very snobbish, and insist that there is no such thing as a row vector or column vector, only abstract vectors. Python is like this. This can be very confusing for students. We have found it best to be explicit and consistent about dimensions in our teaching, and to treat vectors normally as column vectors and to treat these as basically indistinguishable from \(n\times 1\) matrices (even that convention needs to be taught: one of our colleagues memorably put it as "you row with columns (oars) when you row a boat"). The "four ways" of interpreting matrix-matrix multiplication is something we explicitly teach. For instance, in one of these four ways, the matrix-matrix product \(\mathbf{AB}\) can be usefully thought of by first thinking of \(\mathbf{B}=[\mathbf{b}_{1},\mathbf{b}_{2},\ldots,\mathbf{b}_{n}]\) as a collection of columns, and then \(\mathbf{AB}=[\mathbf{Ab}_{1},\mathbf{Ab}_{2},\ldots,\mathbf{Ab}_{n}]\) is then a collection of the column vectors \(\mathbf{Ab}_{k}\). Technological support for this can be as simple as asking the students to construct the matrix on the right hand side explicitly, and verifying that the internal matrix multiplication routine produces the same result. An advanced question is to consider parallelism in matrix-matrix multiplication using this partition. We also begin with complex numbers. They will be needed, so we introduce them first thing. Without technological support, students hate complex numbers. With technological support, complex numbers become routine. ### Parametric Linear Algebra One important feature of our course is that it is not purely numerical. Mathematical modelling frequently involves unknown parameters. One wants the solution in terms of those parameters (if possible) to make it possible to identify those parameters by comparing to experimental data. There is also the pedagogical value of strengthening student's understanding of formulas, when the answers are not numbers but instead are formulas. As is well-known in the computer algebra community, this can make computations much more costly and indeed some problems are known to have exponential cost or, worse, combinatorial cost. There is significant literature on the topic, starting with [19]. Recent work includes [10, 4, 8] and [11]. We will address this issue as it comes up in the various topics. The paper [11] raises the important point that for many practical problems with only a few parameters, perhaps only one or two, and for problems with structure or low dimension or both, solutions are perfectly feasible using modern computers and infrastructure. 
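To give the flavour of "answers that are formulas", here is a small parametric example in Python using SymPy (the matrix and the parameter are invented purely for illustration):

```python
from sympy import symbols, Matrix, simplify

a = symbols('a')                 # an unknown model parameter
A = Matrix([[1, a], [a, 1]])
b = Matrix([1, 2])

x = simplify(A.LUsolve(b))       # the solution as a formula in a
print(x)                         # entries are rational functions of a
print(A.det())                   # 1 - a**2: the special cases a = 1 and a = -1 are visible
```

Even this tiny example shows why the symbolic setting needs care: the formula is only valid away from \(a=\pm 1\), a special case that a purely numerical computation would never flag.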
### Factoring Matrices Factoring matrices, whether it is the Turing factoring \(\mathbf{PA}=\mathbf{LDUR}\) which gives the reduced row echelon form [9], or \(\mathbf{A}=\mathbf{QR}\) into an orthogonal factor \(\mathbf{Q}\) and upper (right) triangular factor \(\mathbf{R}\), or any of several other factorings, is fundamental for modern linear algebra. There is the Schur factoring \(\mathbf{A}=\mathbf{QTQ}^{H}\) which gives the eigenvalues in a numerically stable way. We teach the notion of factoring matrices as a method of solving linear systems of equations (and of eigenvalue problems). This represents a conceptual advance over Gaussian Elimination, and has several important consequences in a symbolic context [13, 9]. The most important feature in a symbolic context is that a factoring preserves special cases. Students can factor matrices by hand (and in the beginning, they should). This gives them something useful to do. Elementary matrices encoding row operations, column operations, and row exchanges are all useful to teach because they consolidate students' knowledge into a modern framework of understanding of linear algebra, and they do so in a way that allows the student to be active. Then one can introduce block matrix manipulation and block factoring, with noncommuting elements. This gives the Schur complement and the Schur determinantal formula. Interestingly, Maple has recently begun to support matrices over noncommuting variables via the Physics package by Edgardo Cheb-Terrab. This allows students to manipulate block matrices with technology, although they still have to think about dimensions. This is apparently also possible in SageMath. Here is an example, showing the Schur complement, in Maple. > _with_(_Physics_): > _Setup_(_mathematicalnotation_ = _true_): > _Setup_(_noncommutativeprefix_ = {\(B\)}): > _with_(_LinearAlgebra_): > _A_\(\coloneqq\)_Matrix_([[B[1,1],B[1,2]],[B[2,1],B[2,2]]]\(\,\)) \[A\coloneqq\left[\begin{smallmatrix}B_{1,1}&B_{1,2}\\ B_{2,1}&B_{2,2}\end{smallmatrix}\right] \tag{2}\] > _L_\(\coloneqq\)_Matrix_([[1,0],[B[2,1]\cdot B[1,1]^{-1},1]]) \[L\coloneqq\left[\begin{smallmatrix}1&0\\ B_{2,1}{B_{1,1}}^{-1}&1\end{smallmatrix}\right] \tag{3}\] > _U_\(\coloneqq\)_Matrix_([[B[1,1],B[1,2]],[0,B[2,2]-B[2,1]\cdot B[1,1]^{-1}\cdot B[1,2]]]\,\)) \[U\coloneqq\left[\begin{smallmatrix}B_{1,1}&B_{1,2}\\ 0&B_{2,2}-B_{2,1}{B_{1,1}}^{-1}{B_{1,2}}\end{smallmatrix}\right] \tag{4}\] > _L_\(\cdot\)_U_ \[\left[\begin{smallmatrix}B_{1,1}&B_{1,2}\\ B_{2,1}&B_{2,2}\end{smallmatrix}\right] \tag{5}\] This illustrative usage of simple noncommuting scalar variables to represent blocks inside matrices, where 1 represents an appropriately-sized identity matrix and 0 represents a zero block, might disconcert people intent on formalizing the computations involved. One of the things that would be necessary to properly formalize this would be a notion of dimension of each block; in practice one would want the dimensions to be symbolic but to match appropriately. We are not aware of any widely-available system at present that can deal properly with this, although there has been research in the area, such as [17, 18]. Making a package widely available that could do such computations correctly would be very welcome. ### Determinant Approaching linear algebra via the determinant is a historically valid approach. It is pedagogically valid, also, because the students are happier (and better off) with having something to do, not just think about. 
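Returning for a moment to the block-matrix computation above: SymPy's MatrixSymbol and BlockMatrix provide a freely available, if partial, way to repeat it, including symbolic (matching) dimensions. The following sketch is ours, with invented block sizes \(n\) and \(m\); it is not a full solution to the formalization problem just described.

```python
from sympy import symbols, MatrixSymbol, BlockMatrix, block_collapse, Identity, ZeroMatrix

n, m = symbols('n m', integer=True, positive=True)
B11 = MatrixSymbol('B11', n, n)
B12 = MatrixSymbol('B12', n, m)
B21 = MatrixSymbol('B21', m, n)
B22 = MatrixSymbol('B22', m, m)

L = BlockMatrix([[Identity(n), ZeroMatrix(n, m)],
                 [B21 * B11**-1, Identity(m)]])
U = BlockMatrix([[B11, B12],
                 [ZeroMatrix(m, n), B22 - B21 * B11**-1 * B12]])

# Blockwise product: the (1,1), (1,2) and (2,1) blocks are B11, B12 and B21; the (2,2) block
# is B21*B11**(-1)*B12 + (B22 - B21*B11**(-1)*B12), which is B22 after cancellation.
print(block_collapse(L * U))
```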
We feel that it is "fair game" that the students be required to memorize the formulas for the determinant and the inverse of a \(2\times 2\) matrix (and in fact this memorization is surprisingly useful for them, later). Laplace expansion (determinant by minors) can be costly and numerically dubious but is extremely useful for sparse symbolic matrices. More, it is crucial in the one "gem" proof that we include in the course simply because it is so pretty, namely the proof of Cramer's Rule6 which we learned from [5]. Footnote 6: One of us teaches Cramer’s Rule only because of this beautiful proof. Cramer’s Rule itself is not particularly useful computationally nowadays, except in very special situations. But that proof is so beautiful. The students seem to like it, too. Asking them to memorize a formula for a three-by-three determinant serves no useful purpose, in our opinion, and letting them use technology for computation of third or higher-order determinants seems perfectly justified. We also demonstrate combinatorial growth by showing the determinant of fully symbolic matrices, for a few small dimensions. Asking them to program Laplace expansion recursively is also useful for this. One can also ask them to program the recursive computation of determinant by the Schur determinantal formula \(\det\mathbf{A}=\det\mathbf{B}_{11}\det(\mathbf{B}_{22}-\mathbf{B}_{21}\mathbf{B}_{11}^{-1}\mathbf{B}_{12})\). Explicit computation of the inverse of \(\mathbf{B}_{11}\) should be avoided, and can be, by using a suitable factoring. The end result can be significantly more efficient than Laplace expansion. We spend time on the geometry of determinant and its relationship to how area transforms under linear transformations; this is needed in calculus, and can be motivating for the students as well because it makes a connection to something that they already know. Computer visualizations help, here. The ones freely available on YouTube, especially the very professionally produced ones by 3Blue1Brown such as [https://youtu.be/Ip3X9LOh2dk](https://youtu.be/Ip3X9LOh2dk), are hard to compete with. So, we do not compete, and instead share our favourites (such as that one) with the student. With determinant in hand, the students have a worthwhile test for linear dependence. We extend this using the SVD because in the context of data error (which our clientele will surely encounter), the notion of exact singularity or dependence is less useful than that of ill-conditioning or near-dependence. _Least squares._ Matlab will silently return a least-squares solution to overdetermined problems. Or, even, inconsistent problems. Therefore it is incumbent on us as instructors to teach least squares solutions, in order that the user may understand and appreciate what the system has done. ### Eigenvalues and floating-point We teach eigenvalues more by the "Black Box" / "White Box" approach, because computing eigenvalues by first computing the determinant of \(\lambda\mathbf{I}-\mathbf{A}\) and then solving the polynomial is a pretty brutal hand computation for anything more than \(2\times 2\) matrices. We show them what eigenvalues and eigenvectors are by the use of eigshow or similar, and then set them to compute eigenvalues by the technology. For instance in Figure 2 we see how to do this using Maple (from inside a Jupyter notebook). This requires a discussion of floating-point arithmetic and backward error analysis, which we do not shy away from.
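The same computation is a few lines in Python/NumPy (our sketch; the matrix is chosen so that the exact eigenvalues, 1, 2 and 4, are known), and it hands us the floating-point discussion on a plate: the eigenvalue routine and the "characteristic polynomial, then roots" route agree only up to rounding error.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # exact eigenvalues: 1, 2, 4

evals, evecs = np.linalg.eig(A)
print(np.sort(evals))

coeffs = np.poly(A)               # coefficients of det(lambda*I - A)
print(np.sort(np.roots(coeffs)))  # the "brutal hand computation", done by machine

for lam, v in zip(evals, evecs.T):
    print(np.linalg.norm(A @ v - lam * v))   # residuals: small, but not zero
```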
Again, our clientele will encounter data error and they must learn tools such as the condition number (which is really just the derivative) to deal with it; putting numerical error on the same footing as data error gives them the tools to deal with that, as well. The computation of eigenvalues of small matrices (say, of dimension less than 1000) is a solved problem nowadays. Indeed we view eigenvalues as answers nowadays because the algorithms are so good in practice (and have recently been shown to be globally convergent in theory, as well [1]). We have had units (in some of our courses) where we talk about companion matrices of various kinds, as tools for solving polynomial equations and systems of polynomial equations. We discuss this in section 3.6. Eigenvalues of parametric matrices are important, for instance in dynamical systems, and their study leads directly to bifurcation theory. We do not include many such problems, but we have used one in particular, namely a perturbation of Matlab's gallery(3) matrix to examine the sensitivity of its eigenvalues to perturbations. This is an advanced topic, however, and occurs only toward the end of the first course (and much more frequently in the second or later course). ### Special matrices There are countless kinds of special matrices. Likely the most important in practice are symmetric (Hermitian) positive definite matrices; others include orthogonal (unitary) matrices, triangular matrices, banded matrices, circulant matrices, Toeplitz matrices, Hankel matrices, and totally positive matrices. Getting the students to write programs that generate some of these, or factor some of these in special ways, is quite interesting. The Cayley transform is quite important nowadays (see e.g. [15]) in control theory and in some kinds of scientific computing, and getting students to parameterize orthogonal matrices using symmetric matrices and the Cayley transform may teach several lessons. While this course should include some of the most common and useful kinds of special matrices, we feel it is also important to let the students invent some Figure 2: Using Maple from a Jupyter notebook of their own kinds of matrices. Examples of student-generated matrices include "checkerboard" matrices which alternate nonzero entries with zero entries and "anti-tridiagonal" matrices. We have found it fun to let the students play, as they program. Sometimes even their bugs give rise to interesting developments. _Symmetric Positive Definite matrices_ "Symmetric positive definiteness is one of the highest accolades to which a matrix can aspire." --Nicholas J. Higham, in [12, p. 196] Symmetric Positive Definite (SPD) matrices arise very often in practice. For an enlightening discussion of just why this is so, see [20]. The inductive proof of unicity of the Cholesky factoring for SPD matrices (see e.g. [12, p. 196]) can be turned into a recursive program for its computation, and this is a useful programming exercise for the students. The many applications of SPD matrices can be motivating for students, but having the technology to solve them is clearly essential. _Companion matrices_ "What does this all have to do with matrices? The connection is through the companion matrix." --Cleve Moler, in [16]. Another thing technology really makes possible is the use of companion matrices and resultants in the solution of polynomial equations. The topic is surprisingly rich, not just useful. 
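A first concrete taste, as a sketch in Python/NumPy (the polynomial is our choice for illustration): build a companion matrix and let the eigenvalue routine find the roots.

```python
import numpy as np

def companion(p):
    """Frobenius companion matrix of a monic polynomial given by its coefficients,
    highest degree first: p(z) = z**n + p[1]*z**(n-1) + ... + p[n]."""
    n = len(p) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)        # ones on the subdiagonal
    C[:, -1] = -np.array(p[:0:-1])    # last column: negated coefficients, constant term first
    return C

p = [1.0, 0.0, -2.0, -5.0]            # p(z) = z**3 - 2z - 5, Newton's old example
C = companion(p)
print(np.linalg.eigvals(C))           # one real root near 2.0946 and a complex pair
print(np.roots(p))                    # numpy's own roots() works exactly this way
```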
Algebraically, companion matrices for a monic polynomial \(p(z)\) are matrix representations of multiplication by \(z\) in the ideal generated by \(p(z)\). Companion matrices are not unique, and indeed there are open problems as to which is the "best" companion for a given polynomial \(p(z)\), as we will discuss. Extending the idea to non-monic polynomials leads to generalized eigenvalue problems \(p(z)=\det(\lambda\mathbf{B}-\mathbf{A})\) where now \(\mathbf{B}\) is not necessarily the identity matrix (or of full rank). Using other polynomial bases (e.g. Chebyshev, Bernstein, or Lagrange interpolational bases) leads again to surprisingly deep waters. Given a (monic) polynomial over the integers, one can ask which companion matrix over the integers has minimal height? The "height" of a matrix is the infinity norm of the matrix made into a vector; that is, the largest absolute value of any entry. No good algorithms for this problem are known [7]. In the case of Mandelbrot polynomials \(p_{0}=0\) and \(p_{n+1}(z)=zp_{n}^{2}(z)+1\) there are companions of height 1, while the maximum coefficient of \(p_{n}(z)\) is exponential in the degree of \(p_{n}(z)\) (and therefore doubly exponential in \(n\)). Smaller height matrices seem to be easier to compute eigenvalues for. ### Proof and formal methods "I have absolutely no interest in proving things that I know are true." --the American physicist Henry Abarbanel, at a conference in 1994 Entering students in North America have long since been deprived of an introductory course on proof (which was, classically, Euclidean geometry). Typically the first course in which they encounter "proof" nowadays is their first linear algebra class. For the clientele described previously, we feel it is more important to motivate proof at this stage. Students who are asked to listen to a proof of something they consider obvious (or for which they would be happy to take the professor's word, such as \(\det{\bf A}{\bf B}=\det{\bf A}\cdot\det{\bf B}\)) do not learn much. Ed Barbeau put it thus: "there should be no proof without doubt" (on the part of the student). Asking students to write programs is, we believe, a useful intermediate step. In addition to developing the necessary habit of precise thinking, writing programs makes students receptive to the idea of proving their programs correct (after they have witnessed a few failures, which are somehow always surprising to beginning programmers). ## 4 Assessment Assessment is critical for the success of a course. Students want bribes (marks) in order to spend time on any particular topic. If a topic is not assessed, then it can be safely skipped and the student can rationally spend their effort on topics that actually will be assessed. The recent introduction of chat AIs that generate plausible-sounding answers has thrown a further monkey wrench into assessment of courses by project, a method that we have heretofore favoured. It is even the case that these chat AIs can, perhaps by plagiarising GitHub and other software sources, provide readable (and sometimes even working) software to students. We may have to go back to individual exams with direct supervision: essentially, oral examinations. This is so labour intensive that it seems impractical for the very large linear algebra classes that our Universities want us to teach, however. There are several strategies for written exams that still may be of interest, however, and we give some of them here. The first is the venerable multiple-choice exam. 
For computations, one can remove the "reverse engineering" method by asking not for the exact answer, but rather asking for the closest answer not larger than the true answer. For instance, supposing that the true answer was \(\sqrt{2}\), one could list decimal answers (a) 1.2 (b) 1.3 (c) 1.5 and (d) 1.8. The desired answer would be (b), 1.3. This tool is surprisingly effective, although many students view it as being "unfair." A second assessment strategy is to use computer-generated individual questions, where the student is expected to work at their computer (or at a locked-down lab computer) and provide full notes on their work. These kinds of exams are very stressful for students, however. They are even more stressful if intrusively-monitoring software is involved (and there may be human rights abuses committed by those pieces of software which the instructor or administration will be responsible for). For the purpose of discussion, we will assume that no intrusive monitoring software is used, and that measures are taken to alleviate student stress: for instance, one can give out "practice" exams ahead of time. Since we want to include the use of mechanized tools into the assessment, testing in a computational environment is quite natural. If the students know that they will be tested on their competence in (say) Python, then they will spend some effort to learn it. Incorporating personalized questions into such exams then becomes both feasible and informative. ## 5 Promoting agreement on syllabus change Some of our colleagues and administrative structures have been very supportive of innovation along these lines. Others have been, well, reactionary. Using technology is more labour intensive than is re-using the same old linear algebra textbooks, problem sets, and exam questions. Using technology also requires continual re-training because the technologies keep changing. Some people resent being told that they have to change in order to do their jobs well in a changing environment. We give an example here of a suboptimal linear algebra exam question, taken from last year's multi-section course at Western7, taught both by progressive and regressive colleagues. The exam took place without notes, books, calculators, or computers. Students are allowed by law (in some parts of the world) to have access to their phones, but many universities will attempt to restrict that, too. The exams at Western typically have quite alarming language on the cover sheet saying that students caught with a cell phone will be given a zero. We feel that this is a lamentable state. Footnote 7: A simple web search for “Math 1600 Western” brings the entire exam up, if you wish to see the entire context. The question was: Find the inverse of the matrix \[\mathbf{A}=\left[\begin{array}{rr}2&1&0\\ 1&0&-1\\ 0&1&1\end{array}\right]\,. \tag{6}\] This question does have a few virtues. For one, it is something the students can do. It was worth three marks, which the students could grind out. But it also has some serious flaws. Probably the most serious is that it does not test anything that the students will really need in their future use of linear algebra. There were calculators thirty years ago that could solve this problem in under a second. No one is going to invert \(3\times 3\) matrices by hand any more, unless there is something special about it. [There is something special about this matrix; it is unimodular, so that the elements of the inverse are all integers. 
That didn't happen by chance, so we suspect the examiners chose the question so as not to strain the student's arithmetic overmuch.] More, not only will students not need to invert by hand, they usually will not need to invert at all. The inversion of matrices is really only of very specialized concern nowadays. There are statistical applications where the elements of the inverse are what is wanted; but for the most part, "Anything that you can do with the matrix inverse can be done without it." Matrix factorings are much more important. Students are rational creatures. If this is the kind of question that they have to answer in order to pass, then they will spend their time trying to find strategies to give good answers to this kind of question. They will do that at the expense of time spent learning to program (for instance). This represents a significant lost opportunity for the student and for this University. Indeed, the absolute explosion in on-line courses (for instance, at brilliant.org, where they claim that interactive learning is six times more effective than lectures) is a direct response to the failure of many universities to adapt their courses. Students resent having to pay twice to get the knowledge they actually want and need. The next few years are going to be "interesting." One way to repair that particular question might be to ask if the matrix factors into a lower triangular and upper triangular factor, without pivoting. The matrix is tridiagonal, so this variation has fewer computations, although this time involving fractions (just 1/2 though). This is something that could be asked even if the student has access to technology during the exam. The details of the computation are not that important--it is just arithmetic--but the question of whether or not the factoring can be done without pivoting would require some understanding of the process involved. ## 6 Concluding Remarks The state of the art for learning linear algebra is, to our minds, unsatisfactory, though getting better. Technological platforms are split: some are proprietary, while some others are unsupported at the level needed for reliable use. Methods and syntax are not standardized (or, rather, there are too many standards). The textbooks largely do not integrate mechanized mathematical tools into the learning process. [A very notable exception is [21], which uses Matlab extensively.] Yet failing to use a mechanized approach does a true disservice to students who will go on to practice linear algebra in some kind of mechanized environment. The role of technology, including formal methods, is therefore multiplex. We believe that people must be trained in its use. In particular, people must be trained to want proof, and to want formal methods. We feel that having students write their own programs plays a motivating role in that training as well as a developmental role. The first linear algebra course is important not only because its tools and concepts are critical for science, but also as a venue for teaching the responsible use of mathematical technology.
2305.00454
Few-shot Classification via Ensemble Learning with Multi-Order Statistics
Transfer learning has been widely adopted for few-shot classification. Recent studies reveal that obtaining good generalization representation of images on novel classes is the key to improving the few-shot classification accuracy. To address this need, we prove theoretically that leveraging ensemble learning on the base classes can correspondingly reduce the true error in the novel classes. Following this principle, a novel method named Ensemble Learning with Multi-Order Statistics (ELMOS) is proposed in this paper. In this method, after the backbone network, we use multiple branches to create the individual learners in the ensemble learning, with the goal to reduce the storage cost. We then introduce different order statistics pooling in each branch to increase the diversity of the individual learners. The learners are optimized with supervised losses during the pre-training phase. After pre-training, features from different branches are concatenated for classifier evaluation. Extensive experiments demonstrate that each branch can complement the others and our method can produce a state-of-the-art performance on multiple few-shot classification benchmark datasets.
Sai Yang, Fan Liu, Delong Chen, Jun Zhou
2023-04-30T11:41:01Z
http://arxiv.org/abs/2305.00454v1
# Few-shot Classification via Ensemble Learning with Multi-Order Statistics ###### Abstract Transfer learning has been widely adopted for few-shot classification. Recent studies reveal that obtaining good generalization representation of images on novel classes is the key to improving the few-shot classification accuracy. To address this need, we prove theoretically that leveraging ensemble learning on the base classes can correspondingly reduce the true error in the novel classes. Following this principle, a novel method named Ensemble Learning with Multi-Order Statistics (ELMOS) is proposed in this paper. In this method, after the backbone network, we use multiple branches to create the individual learners in the ensemble learning, with the goal to reduce the storage cost. We then introduce different order statistics pooling in each branch to increase the diversity of the individual learners. The learners are optimized with supervised losses during the pre-training phase. After pre-training, features from different branches are concatenated for classifier evaluation. Extensive experiments demonstrate that each branch can complement the others and our method can produce a state-of-the-art performance on multiple few-shot classification benchmark datasets. ## 1 Introduction Few-shot Classification (FSC) is a promising direction in alleviating the labeling cost and bridging the gap between human intelligence and machine models. It aims to accurately differentiate novel classes with only a few labeled training samples. Due to limited supervision from novel classes, an extra base set with abundant labeled samples is often used to improve the classification performance. According to the adopted training paradigms, FSC methods can be roughly divided into meta-learning-based [14, 15] and transfer-learning-based [14, 13, 20]. The first type takes the form of episodic training, in which subsets of data are sampled from the base set to imitate the meta-test setting. Since sampling does not cover all combinations, this paradigm cannot fully utilize the information provided by the base set. In contrast, the transfer-learning takes the base set as a whole, so it avoids the drawback of meta-learning and achieves better performance. Many effective regularization techniques have been exploited in transfer-learning, for example, manifold mixup [17], self-distillation [15], and self-supervised learning [16], which leads to significant improvement on the generalization of image representations and the FSC performance. Ensemble learning combines multiple learners to solve the same problem and exhibits better generalization performance than any individual learners [16]. When combining ensemble learning with deep Convolutional Neural Networks (CNN), the new paradigm usually requires large-scale training data for classification tasks [11, 1], making it challenging to be adopted for FSC. Recently, two notable studies [13, 1] employed an ensemble of deep neural networks for FSC tasks under either a meta-learning or a transfer-learning setting. They demonstrated that ensemble learning is also applicable to FSC. Figure 1: (a) The traditional methods often use different backbone networks as individuals, which significantly increases the computation and storage costs. (b) Our method takes the same backbone and equips different branches with multi-order statistics as learning individuals. They are parameter-free and trained jointly, and do not require extra model size and computation time.
Yet, these works are still preliminary and lack a theoretical analysis to explain the underlying reason behind the promising performance. To address this challenge, we provide an FSC ensemble learning theorem for the transfer-learning regime. Its core idea is a tighter expected error bound on the novel classes, in which the expected error on the novel classes can be reduced by implementing ensemble learning on the base classes, given the base classes-novel classes domain divergence. The generalization ability of ensemble learning is strongly dependent on generating diverse individuals. As shown in Figure 1 (a), traditional methods often use different backbone networks as individuals, which significantly increases the computation and storage costs. Our work finds that different-order statistics of the CNN features are complementary to each other, and integrating them can better model the whole feature distribution. Based on this observation, we develop a parameter-free ensemble method, which takes the same backbone and equips different branches with multi-order statistics as learning individuals. We name this method Ensemble Learning with Multi-Order Statistics (ELMOS), as shown in Figure 1 (b). The main contributions of this paper are summarized as follows: * To our knowledge, this is the first theoretical analysis to guide ensemble learning in FSC. The derived theorem proves a tighter expected error bound is available on novel classes. * We propose an ensemble learning method by adding multiple branches at the end of the backbone networks, which can significantly reduce the computation time of the training stage for FSC. * This is the first time that multi-order statistics is introduced to generate different individuals in ensemble learning. * We conduct extensive experiments to validate the effectiveness of our method on multiple FSC benchmarks. ## 2 Related Work In this section, we review the related work to the proposed method. ### Few-shot Classification According to how the base set is used, FSC methods can be roughly categorized into two groups, meta-learning-based [15] and transfer-learning-based [1, 16]. Meta-learning creates a set of episodes to simulate the real FSC test scenarios and simultaneously accumulate meta-knowledge for fast adaptation. Typical meta-knowledge includes optimization factors such as initialization parameters [14] and task-agnostic comparing ingredients of feature embedding and metric [17, 2]. Recent literature on transfer learning [13, 15] questioned the efficiency of the episodic training in meta-learning, and alternatively used all base samples to learn an off-the-shelf feature extractor and rebuilt a classifier for novel classes. Feature representations play an important role in this regime [13]. To this end, regularization techniques such as negative-margin softmax loss and manifold mixup [16, 17] have been adopted to enhance the generalization ability of cross-entropy loss. Moreover, self-supervised [15, 14] and self-distillation [18, 19] methods have also shown promising performance in transfer-learning. To this end, supervised learning tasks can be assisted by several self-supervised proxy tasks such as rotation prediction and instance discrimination [15], or by adding an auxiliary task of generating features during the pre-training [20]. When knowledge distillation is adopted, a high-quality backbone network can be evolved through multiple generations by a born-again strategy [14]. 
All these methods suggest the importance of obtaining generalization representations, and we will leverage ensemble learning to achieve this goal. ### Ensemble Learning Ensemble learning builds several different individual learners based on the same training data and then combines them to improve the generalization ability of the learning system over any single learner. This learning scheme has shown promising performance on traditional classification tasks with deep learning on large-scale labeled datasets. Recently, ensemble learning for FSC methods has been presented. For example, [13] combined an ensemble of prototypical networks through deep mutual learning under a meta-learning setting. [1] reduced the capacity of each backbone in the ensemble and pre-trained them one by one with the same routine. However, the size of the ensemble learner increased for inference in the former work, while the latter required extra time to pre-train many learning individuals. Therefore, it still lacks efficient designs for learning individuals in FSC ensemble learning. Moreover, these works did not involve any theoretical analysis of the underlying mechanism of ensemble learning in FSC. In this paper, we investigate why ensemble learning works well in FSC under the transfer-learning setting. Based on the analysis, we propose an efficient learning method using a shared backbone network with multiple branches to generate learning individuals. ### Pooling Convolutional neural network models progressively learn high-level features through multiple convolution layers. A pooling layer is often added at the end of the network to output the final feature representation. To this end, Global Average Pooling (GAP) is the most popular option, however, it cannot fully exploit the merits of convolutional features because it only calculates the \(1^{st}\)-order feature statistics. Global Covariance Pooling (GCP) such as DeepO\({}^{2}\)P explores the \(2^{nd}\)-order statistic by normalizing the covariance matrix of the convolutional features, which has achieved impressive performance gains over the classical GAP in various computer vision tasks. Further research shows that using richer statistics may lead to further possible improvement. For example, Kernel Pooling [14] generates high-order feature representations in a compact form. However, a certain order statistic can only describe partial characteristics of the feature vector from the view of the characteristic function of random variables. For example, the first- and second-order statistics can completely represent their statistical characteristic only for the Gaussian distribution. Therefore, higher-order statistics are still needed for the non-Gaussian distributions, which are more ubiquitous in many real-world applications. This motivates us to calculate multi-order statistics to retain more information on features. ## 3 The Proposed Method Here we present the proposed method. We start with a formal definition of FSC, and then present a theorem on FSC ensemble learning. This theorem leads to the development of an ensemble learning approach with multi-order statistics. ### Theory Foundation Under the standard setting of few-shot classification, three sets of data with disjoint labels are available, i.e., the base set \(S_{b}\), the validation set \(S_{val}\) and the novel set \(S_{n}\). In the context of transfer-learning, \(S_{b}\) is used for pre-training a model to well classify the novel classes in \(S_{n}\), with the hyper-parameters tuned on \(S_{val}\). 
Let \(S_{b}=\{(x_{i},y_{i})\}_{i=1}^{N_{b}}\) denotes the source domain with \(N_{b}\) labelled samples and \(S_{n}\) denotes the target domain labelled with \(K\) samples in each episode, where \(N_{b}>>K\). Let the label function of \(S_{b}\) and \(S_{n}\) be \(f_{b}\) and \(f_{n}\), respectively. During the pre-training, a learner \(h\) is obtained to approximate the optimal mapping function \(h^{*}\) based on all \(N_{b}\) training samples in \(S_{b}\) from all possible hypotheses \(\mathcal{H}\). When ensemble learning is introduced into the pre-training, several learners denoted as \(\{h_{o}\}_{o=1}^{O}\) can be obtained. With the ensemble technique of weighted averaging, the final learner \(\overline{h}\) is produced as: \[\overline{h}=\sum_{o=1}^{O}\alpha_{o}h_{o}, \tag{1}\] where \(\alpha_{o}\) is the weight parameter. There is a domain shift between the base and novel classes [13], and we use the \(L_{1}\) distance [12] to measure the domain divergence between \(S_{b}\) and \(S_{n}\): \[\mathcal{D}(S_{b},S_{n})=\int\left|\eta_{b}(x)-\eta_{n}(x)\right|\left| \overline{h}(x)-f_{n}(x)\right|dx, \tag{2}\] where \(\eta_{b}(x)\) and \(\eta_{n}(x)\) is the density functions of \(S_{b}\) and \(S_{n}\) respectively. **Theorem 1** (FSC Ensemble Learning): _Let \(\mathcal{H}\) be a hypothesis space, for any \(h\in\{h_{o}\}_{o=1}^{O}\in\mathcal{H}\) is learned from \(S_{b}\), and \(\overline{h}=\sum_{o=1}^{O}\alpha_{o}h_{o}\in\mathcal{H}\), the expected error on \(S_{n}\) respectively with \(\overline{h}\) and \(h\) holds the following relationship:_ \[e_{n}(\overline{h})\leq e_{b}(\overline{h})+\underbrace{ \mathcal{D}(S_{b},S_{n})}_{(S_{b}\cdot S_{n})\text{ divergence}}+\lambda\] \[\leq e_{b}(h)+\underbrace{\mathcal{D}(S_{b},S_{n})}_{(S_{b}\cdot S _{n})\text{ divergence}}+\lambda,\] _where \(\lambda=E_{X\in S_{b}}\left|f_{n}(x)-f_{b}(x)\right|\) is a constant, \(e_{n}(\overline{h})\) is the expected error on \(S_{n}\) with \(\overline{h}\), \(e_{b}(h)\) is the expected error on \(S_{b}\) with \(h\), \(e_{b}(\overline{h})\) is the expected error on \(S_{b}\) with \(\overline{h}\)._ The proof is provided in the Supplementary Material. **Remark 1**: _The core idea of Theorem 1 is to define a tighter expected error bound on the novel classes with the learned mapping function in the form of ensemble learning during the pre-training. Theorem 1 tells that the true error on the novel classes can be reduced by implementing ensemble learning on the base classes, given the domain divergence between the novel class and base class. This can well explain the effectiveness of ensemble learning in few-shot classification, in which multiple learners are assembled to enhance the generalization on the base set, resulting in better performance in novel classes._ ### FSC via Ensemble Learning with Multi-order Statistics **Overview** Our method employs the transfer-learning paradigm in a two-phase manner. In the first phase, a good feature extractor is pre-trained on the base set. In the second phase, FSC evaluation is done on the novel set with the pre-trained feature extractor. Following Theorem 1, we introduce ensemble learning in the first phase to improve the FSC performance. The key to this phase is to effectively train multiple diverse individuals. Different from the previous works [15, 1] that use many different networks as individuals, we add multiple branches after the backbone network to create individuals for reducing training costs. 
Each branch calculates different-order statistics for pooling to highlight the discrepancy between the individuals. This step is optimized by supervised losses. After pre-training, features from different branches are concatenated for FSC evaluation. We name this method as Ensemble Learning with multi-Order Statistics (ELMOS) for FSC. An overview of ELMOS is shown in Figure 2, and a flow description of ELMOS is given in Algorithm 1. **Pre-training via Multi-order Statistics** The proposed model architecture mainly consists of the following four components: an image processing module, the backbone network, a multi-order statistics module, and a supervised classifier module. The image processing module is denoted as \(M\left(\cdot\right)\), which performs transformation of multi-scale rotation to augment the original base set and their label space. The backbone network is denoted as \(B_{\theta}\left(\cdot\right)\) and parameterized by \(\theta\), which converts each image into a tensor of size \(H\times W\times d\). The multi-order statistics module module is denoted as \(S\left(\cdot\right)\), which maps the tensor from the backbone into multiple feature representations to generate individual learners for ensemble learning. The supervised classifier module is composed of softmax classifiers \(L_{W}\left(\cdot\right)\) and the projectors \(L_{U}\left(\cdot\right)\) with parameter matrices \(W\) and \(U\), respectively, which are used to build the supervised losses for pre-training. Given \(L\) samples be randomly sampled from \(S_{b}\) with \(C_{b}\) classes, in which an image and its corresponding label are denoted as \((x_{i},y_{i})\), \(y_{i}\in\{1,2,...C_{b}\}\). \(M\left(\cdot\right)\) scales the images with the aspect-ratio of 2:3 and rotates the images with \(\{0^{\circ},90^{\circ},180^{\circ},270^{\circ}\}\) under both the new and the original scales, resulting in eight times expansion of training samples. Feed \(x_{i}\) into \(B_{\theta}\) to produce a tensor feature of \(T_{i}=B_{\theta}(x_{i})\in\mathscr{R}^{H\times W\times d}\). Next, we reshape the tensor \(T_{i}\) into the matrix \(T_{i}\in\mathscr{R}^{HW\times d}\), and view each row vector in the matrix \(t_{j}\in\mathscr{R}^{d}\) as an observation of the random variable of \(t\in\mathscr{R}^{d}\). When \(d=1\), the first characteristic function of variable \(t\) in the Laplace operator is given by: \[\phi(s)=\int_{-\infty}^{+\infty}f(t)e^{st}dt=\int_{-\infty}^{+\infty}e^{st}dF( t), \tag{3}\] where \(f(t)\) and \(F(t)\) are the density function and distribution function of \(t\), respectively. Let \(\psi(s)=ln\phi(s)\) be the second characteristic function of the random variable \(t\). **Theorem 2** (The Inversion Formula for Distributions): _Let \(t\) be a random variable with distribution function \(F(t)\) and characteristic function \(\phi(s)\). For \(a,b\in C(F)\) and \(a<b\),_ \[F(b)-F(a)=\lim_{c\to\infty}\frac{1}{2\pi}\int_{-c}^{c}\frac{e^{-sa}-e^{-sb}}{s }\phi(s)ds.\] **Corollary 1** (Uniqueness): _If two distributions of \(F_{1}(t)\) and \(F_{2}(t)\) are identical, then the corresponding characteristic functions \(\psi_{1}(s)\) and \(\psi_{2}(s)\) are identical._ See proof of Theorem 2 and Corollary 1 in [20]. From Theorem 2 and Corollary 1, we can see that there is a one-to-one correspondence between the characteristic function and the probability density function such that the characteristic function can completely describe a random variable. 
The \(o^{th}\)-order cumulant of the random variable \(t\) is defined as the \(o^{th}\) derivative of function \(\psi(s)\) at the origin, which is: \[c_{o}=\frac{d^{o}\psi(s)}{ds^{o}}\bigg{|}_{s=0}. \tag{4}\] Then the Taylor series expansion of function \(\psi(s)\) at the origin with respect to \(s\) yields: \[\psi(s)=c_{1}s+\frac{1}{2}c_{2}s^{2}+...+\frac{1}{o!}c_{p}s^{o}+R_{s}(s^{o}), \tag{5}\] where \(R_{s}(s^{o})\) is the remainder term. It can be seen from Equation (5) that the \(o^{th}\)-order cumulant of \(t\) is the coefficient of the term \(s^{o}\) in Equation (5). **Proposition 1**: _Consider a Gaussion distribution \(f(t)\) with mean \(\mu\) and variance \(\Sigma^{2}\) for the random variable \(t\), its second characteristic function is:_ \[\psi(s)=\mu s+\frac{1}{2}\Sigma^{2}s^{2}.\] _Consequently, the cumulant of the random variable \(t\) are:_ \[c_{1}=\mu,c_{2}=\Sigma^{2},c_{o}=0\quad(o=3,4,...).\] The proof is provided in the Supplementary Material. **Remark 2**: _Proposition 1 implies that for Gaussian signals only, the cumulants are identically zero when the order is greater than 2. Please note this conclusion can be naturally extended to the scenario of multivariate variables when \(d>1\). For the random variables with Gaussian distribution, the first and second-order statistics can completely represent their statistical characteristics. However, the non-Gaussian signals are more common in real-world applications. In this case, higher-order statistics also contain a lot of useful information. Therefore, we propose a multi-order statistics module consisting of multiple branches, each equipped with different order statistics of the tensor feature \(T_{i}\)._ In particular, we employ three branches in the multi-order statistics module, which respectively calculate three orders cumulants of the variable \(t\) with the observations in \(T_{j}\). The specific formulation of the \(1^{st}\)-order, \(2^{nd}\)-order and \(3^{rd}\)-order cumulants of \(t\) are expressed as: \[c_{i1} =\frac{1}{H\times W}\sum_{j=1}^{H\times W}t_{j}\quad c_{i1}\in \mathscr{R}^{d},\] \[c_{i2} =\frac{1}{H\times W}\sum_{j=1}^{H\times W}(t_{j}-c_{i1})(t_{j}-c_{ i1})^{T}\quad c_{i2}\in\mathscr{R}^{d\times d},\] \[c_{i3} =\frac{1}{H\times W}\sum_{j=1}^{H\times W}\frac{(t_{j}-c_{i1})^{2 }(t_{j}-c_{i1})^{T}}{c_{i2}^{2}c_{i2}^{T}}\quad c_{i3}\in\mathscr{R}^{d\times d}. \tag{6}\] Figure 2: An overview of our framework. The images from \(S_{b}\) are augmented by the image processing module and fed into the backbone for feature extraction. The CNN features from the backbone are then reshaped into the matrix, which is used to calculate multi-order statistics to equip different branches. Ensemble learning is implemented by the linear combination of multiple branches during the pre-training phase. As \(c_{i2}\) and \(c_{i3}\) are \(d\times d\) matrices, we flatten them into \(d^{2}\)-dimensional vectors and finally get the feature representations of \(z_{i1}\), \(z_{i2}\) and \(z_{i3}\). We use these three features as individuals in ensemble learning, which respectively pass through their corresponding softmax classifier \(L_{W}(\cdot)\) and projectors \(L_{U}(\cdot)\). 
So the \(o\)-th (\(o=1,2,3\)) outputs are: \[\begin{split} P_{ij}^{o}&=L_{Wo}\left(z_{io} \right)=\frac{exp({z_{i0}}^{T}w_{oj})}{\sum_{j=1}^{8C_{b}}exp({z_{i0}}^{T}w_{oj })},\\ u_{io}&=\left\|L_{Uo}(z_{io})\right\|=\left\|{{z_{i0 }}^{T}U_{o}}\right\|,\end{split} \tag{7}\] where \(L_{Wo}(\cdot)\) is the \(o\)-th softmax classifier with the parameter matrix of \(W_{o}\), \(w_{oj}\) is the \(j\)-th component of \(W_{o}\). \(L_{Uo}(\cdot)\) is the \(o\)-th projector with the parameter matrix \(U_{o}\). \(P_{ij}^{o}\) is the \(j\)-th component of the output probability from the \(o\)-th softmax classifier. \(u_{io}\) is the output vector from the \(o\)-th projector. We simultaneously employ Classification-Based (CB) loss of cross-entropy and Similarity-Based (SB) loss of supervised contrastive in supervised learning for each individual [21]. These two losses are formulated as: \[L_{CB}^{o}\left(\theta,W_{o}\right)=-\sum_{i=1}^{8L}\sum_{j=1}^{8C_{b}}y_{ij} logP_{ij}^{o},\] \[L_{SB}^{o}(\theta,U_{o})=-\sum_{i=1}^{8L}log\sum_{q\in Q(u_{io})}\frac{exp(u_ {io}\cdot u_{qo}/\tau)}{\sum_{a=1}^{8L}exp(u_{ao}\cdot u_{qo}/\tau)}, \tag{8}\] where \(y_{ij}\) is the \(j\)-th component of label \(y_{i}\), \(\tau\) is a scalar temperature parameter. \(Q(u_{io})\) is the positive sample set, in which each sample has the same label as \(u_{io}\). \(u_{qo}\) is the \(q\)-th sample in \(Q(u_{io})\). Then the learning objective function for the \(o\)-th individual is: \[L_{o}(\theta,W_{o},U_{o})=L_{CB}^{o}\left(\theta,W_{o}\right)+L_{SB}^{o}( \theta,U_{o}). \tag{9}\] The overall loss function with ensemble learning is: \[L_{overall}=\sum_{o=1}^{O}\alpha_{o}L_{o}(\theta,W_{o},U_{o}), \tag{10}\] where \(\alpha_{o}\) is a weight controlling the contribution of each individual in the ensemble learning. The pre-training adopts the gradient descent method to optimize the above loss function. ### Few-shot Evaluation The phase of few-shot evaluation still needs to construct a set of \(N\)-way \(K\)-shot FSC tasks, with a support set and a query set in each task. The support set randomly selects \(K\) samples from each of the \(N\) classes that are sampled from \(S_{n}\), which is denoted as \(S_{p}=\{x_{s},y_{s}\}_{s=1}^{NK}\), where \((x_{s},y_{s})\) is the \(s\)-th images and its corresponding label. The query set consists of the remaining images in these \(N\) classes, which is denoted as \(S_{q}=\{x_{q}\}_{q=1}^{Q}\) with any image of \(x_{q}\). After pre-training, we get rid of the softmax classifier \(L_{W}(\cdot)\) and projectors \(L_{U}(\cdot)\) and fix the backbone network \(B_{\theta}(\cdot)\) and the multi-order statistics module module \(S(\cdot)\). The support set \(S_{p}\) is input into \(B_{\theta}(\cdot)\) and \(S(\cdot)\) to produce the output features: \[z_{so}=B_{\theta}\circ S(x_{s})\quad(o=1,2,3), \tag{11}\] where \(\circ\) is the stack operator. The features \(z_{s1},z_{s2},z_{s3}\) are concatenated into a final expression of \(x_{s}\): \[z_{s}=con(z_{s1},z_{s2},z_{s3}), \tag{12}\] where \(con(\cdot)\) is the concatenated operator. A logistic regression classifier \(g_{\xi}\left(\cdot\right)\) parameterized by \(\xi\) is then trained with \(z_{s}\) and its corresponding label \(y_{s}\). The query image \(x_{q}\) is finally classified as: \[\hat{y}_{q}=g_{\xi}(z_{q}), \tag{13}\] where \(\hat{y}_{q}\) is the inference label value of \(x_{q}\). 
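As an illustrative sketch (not the released implementation), the three branch statistics for one image and their concatenation as in Equation (12) can be computed in NumPy as follows; the normalization of the third-order term in Equation (6) is omitted here for simplicity.

```python
import numpy as np

def branch_features(T):
    """T: an H x W x d backbone feature map for one image.
    Returns the concatenated 1st-, 2nd- and 3rd-order branch features."""
    H, W, d = T.shape
    t = T.reshape(H * W, d)                      # HW observations of a d-dimensional variable
    c1 = t.mean(axis=0)                          # 1st-order statistic, shape (d,)
    centered = t - c1
    c2 = centered.T @ centered / (H * W)         # 2nd-order statistic, shape (d, d)
    c3 = (centered ** 2).T @ centered / (H * W)  # un-normalized 3rd-order statistic, shape (d, d)
    return np.concatenate([c1, c2.ravel(), c3.ravel()])   # d + d**2 + d**2 features

z = branch_features(np.random.default_rng(0).standard_normal((5, 5, 8)))
print(z.shape)    # (8 + 64 + 64,) = (136,)
```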
```
Input: Base set \(S_{b}\), support set \(S_{p}\), query set \(S_{q}\); augmentation module \(M\left(\cdot\right)\), backbone network \(B_{\theta}\left(\cdot\right)\), multi-order statistics module \(S(\cdot)\), softmax classifier \(L_{Wo}\), projector \(L_{Uo}\) and logistic regression \(g_{\xi}\left(\cdot\right)\); temperature parameter \(\tau\), weight \(\alpha_{o}\) (\(o=1,2,3\)).
Output: Final prediction of the query samples
Stage 1: Pre-training with ensemble learning
for number of training epochs do
    Sample a mini-batch with any image of \(\{x_{i},y_{i}\}\);
    Feed \(x_{i}\) into \(M\left(\cdot\right)\) and \(B_{\theta}\left(\cdot\right)\) to obtain the feature map \(T_{i}\in\mathscr{R}^{H\times W\times d}\);
    Pass \(T_{i}\) through \(S(\cdot)\) to output features \(z_{io}\) (\(o=1,2,3\));
    Pass \(z_{io}\) through \(L_{Wo}\) and \(L_{Uo}\) to get the output probability and projection feature;
    Calculate the optimization loss for each individual via Equation (9);
    Calculate the overall loss for pre-training via Equation (10);
    Update the parameters of \(\theta\), \(W_{o}\), \(U_{o}\) using SGD;
end
Stage 2: Few-shot evaluation
for all iteration = 1, 2, ..., MaxIteration do
    Feed \(x_{s}\in S_{p}\) into \(B_{\theta}(\cdot)\) and \(S(\cdot)\) to output feature \(z_{so}\) (\(o=1,2,3\));
    Concatenate \(z_{so}\) into the feature \(z_{s}\) to train the classifier \(g_{\xi}\left(\cdot\right)\);
end
Classify the query samples according to Equation (13).
```
**Algorithm 1** Ensemble Learning with Multi-Order Statistics (ELMOS) for FSC ## 4 Experiments ### Datasets **miniImageNet** contains 100 classes with 600 images per class; the classes are divided into 64, 16 and 20 respectively for base, validation and novel sets. **tiredImageNet** consists of 779,165 images belonging to 608 classes, which are further grouped into 34 higher-level categories with 10 to 30 classes per category. These categories are partitioned into 20 categories (351 classes), 6 categories (97 classes) and 8 categories (160 classes) respectively for base, validation and novel sets. **CIFAR-FS** is derived from CIFAR100 and consists of 100 classes with 600 images per class. The total classes are split into 64, 16 and 20 for base, validation and novel sets. **Caltech-UCSD Birds-200-2011 (CUB)** has a total number of 11,788 images over 200 bird species. These species are divided into 100, 50, and 50 for the base, validation and novel sets, respectively. ### Implementation Details In the experiments, we primarily used the ResNet12 architecture with 4 residual blocks. Each block had 3 convolutional layers with 3x3 kernels. The number of kernels for the 4 blocks was 64, 160, 320, and 640, respectively. A max-pooling layer was added at the end of the first three blocks. The last block was branched with three pooling layers, which respectively modeled different statistical representations of the images. We opted for the SGD optimizer with a momentum of 0.9 and a weight decay of 5e-4. The learning rate was initialized to be 0.025. We trained the network for 130 epochs with a batch size of 32 in all the experiments. For miniImageNet, tiredImageNet and CIFAR-FS, the learning rate was reduced by a factor of 0.2 at the 70th and 100th epochs. For CUB, the learning rate was reduced by a factor of 0.2 every 15 epochs after the 75th epoch. We randomly sampled 2,000 episodes from \(S_{n}\) with 15 query samples per class for both 5-way 1-shot and 5-shot evaluations, to produce the mean classification accuracy as well as the 95% confidence interval.
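The reported numbers are therefore simple aggregates over the 2,000 sampled episodes; a sketch of that bookkeeping (standard practice, shown here only for illustration):

```python
import numpy as np

def summarize(episode_accuracies):
    """Mean accuracy and 95% confidence half-interval over the sampled episodes."""
    acc = np.asarray(episode_accuracies, dtype=float)
    mean = acc.mean()
    half_width = 1.96 * acc.std(ddof=1) / np.sqrt(len(acc))   # normal approximation
    return mean, half_width

# e.g. 2,000 simulated episode accuracies around 70%
rng = np.random.default_rng(0)
mean, ci = summarize(rng.normal(0.70, 0.10, size=2000).clip(0.0, 1.0))
print(f"{100 * mean:.2f} +/- {100 * ci:.2f}")
```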
### Ablation Studies The effectiveness of our method is attributed to the ensemble of different branches equipped with multi-order statistics. In this section, we conducted ablation studies to analyze the effect of the \(1^{st}\)-order, \(2^{nd}\)-order and, \(3^{rd}\)-order statistical pooling and their combination on the miniImageNet, CIFAR-FS and CUB datasets. Above methods are respectively denoted as B_1, B_2, B_3,and ELMOS. Their accuracies under 5-way 1-shot and 5-shot tasks on three datasets are shown in Table 1. From the results, we can see that: (1) On all three datasets, the test accuracy of \(B_{1}\) and \(B_{3}\) is higher than \(B_{2}\) under the 1-shot task, but the test accuracy of \(B_{2}\) is higher than \(B_{1}\) and \(B_{3}\) under the 5-shot task. The above phenomenon shows that different order statistics provide different information about the images. (2) The test accuracy of ELMOS is higher than \(B_{1}\), \(B_{2}\) and \(B_{3}\) under both 1-shot and 5-shot tasks, which illustrates that different order statistics complement each other. Combing them can bring more useful information for classification, resulting in higher classification performance. For each individual in the ensemble learning, the optimization is cooperatively accomplished by the Classification-Based (CB) loss and Similarity-Based (SB) loss [20]. Hence, we conducted ablation experiments to analyze the contribution of each loss on three benchmark datasets: miniImageNet, CIFAR-FS and CUB. Subsequently, we pre-trained the model respectively with CB and SB loss alone and their combination, resulting in three methods denoted as CB, SB and CB&SB. The test accuracies under different methods are shown in Figure 3. The test results show that the accuracy of CB&SB is higher than CB and SB, which implies that both classification-based and similarity-based losses play important roles in our method. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c}{miniImageNet} & \multicolumn{2}{c}{CIFAR-FS} & \multicolumn{2}{c}{CUB} \\ \cline{3-7} & & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot \\ \hline B\_1 & ResNet12 & 69.06\(\pm\)0.44 & 83.61\(\pm\)0.29 & 77.09\(\pm\)0.46 & 88.46\(\pm\)0.34 & 81.46\(\pm\)0.39 & 92.55\(\pm\)0.18 \\ B\_2 & ResNet12 & 66.42\(\pm\)0.42 & 85.76\(\pm\)0.26 & 71.53\(\pm\)0.48 & 88.83 \(\pm\)0.27 & 77.79\(\pm\)0.39 & 94.44\(\pm\)0.17 \\ B\_3 & ResNet12 & 67.68\(\pm\)0.43 & 82.81\(\pm\)0.29 & 72.83\(\pm\)0.46 & 86.34\(\pm\)0.34 & 83.89\(\pm\)0.38 & 91.20\(\pm\)0.17 \\ ELMOS & ResNet12 & 70.30\(\pm\)0.45 & 86.17\(\pm\)0.26 & 78.18\(\pm\)0.41 & 89.87\(\pm\)0.31 & 85.21\(\pm\)0.38 & 95.02\(\pm\)0.16 \\ \hline \hline \end{tabular} \end{table} Table 1: Test accuracy (%) of each branch and their ensemble under 5-way 1-shot and 5-shot tasks on three datasets. 
\begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{CUB} \\ \cline{2-3} & 1-shot & 5-shot \\ \hline **Meta-learning** & & & \\ Relational [21] & 55.00\(\pm\)1.00 & 69.30\(\pm\)0.80 \\ DeepEMD [18] & 75.65\(\pm\)0.83 & 88.69\(\pm\)0.50 \\ BML [19] & 76.21\(\pm\)0.63 & 90.45\(\pm\)0.36 \\ RENet [21] & 79.49\(\pm\)0.44 & 91.11\(\pm\) 0.24 \\ FPN[17] & 83.55\(\pm\)0.19 & 92.92\(\pm\)0.10 \\ IEPT [18] & 69.97\(\pm\)0.49 & 84.33\(\pm\)0.33 \\ APP2S [19] & 77.64\(\pm\)0.19 & 90.43\(\pm\)0.18 \\ MFS [1] & 79.60\(\pm\)0.80 & 90.48\(\pm\)0.44 \\ DeepBDC [17] & 84.01\(\pm\)0.42 & 94.02\(\pm\)0.24 \\ HGNN [21] & 78.58\(\pm\)0.20 & 90.02\(\pm\)0.12 \\ INSTA[20] & 75.26 \(\pm\) 0.31 & 88.12 \(\pm\) 0.54 \\ **Transfer-learning** & & & \\ Baseline++ [1] & 60.53\(\pm\)0.83 & 79.34\(\pm\)0.61 \\ Neg-Cosine [19] & 72.66\(\pm\)0.85 & 89.40\(\pm\)0.43 \\ S2M2 [18] & 80.68\(\pm\)0.81 & 90.85\(\pm\)0.44 \\ DC-LR[21] & 79.56\(\pm\)0.87 & 90.67\(\pm\)0.35 \\ CCF [21] & 81.85\(\pm\)0.42 & 91.58\(\pm\)0.32 \\ ELMOS (ours) & & 85.21\(\pm\)0.38 & 95.02\(\pm\)0.16 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of results against state-of-the-art methods on CUB dataset.The top three results are marked in red, blue and green. Figure 3: Test accuracy (%) of the classification-based (CB) loss, similarity-based (SB) loss and their combination (CB&SB) under 5-way 1-shot and 5-way 5-shot tasks on three datasets. ### Comparison with the Most Related Method Our method is most related to EASY [1], which is also a FSC ensemble learning method in context of transfer learning. The comparison of results between them on CIFAR-FS and CUB datasets is shown in Table 4. From the results, we can see that our method beats EASY by a very large margin under both 1-shot and 5-shot tasks. Please note that our method is more efficient that EASY, because EASY needs to pre-train multiple individual networks, which spends much more pre-training time than our method. ### Comparison with State-of-the-Art Methods We compare the performance of our method with several state-of-the-art methods. These methods are either meta-learning based or transfer-learning based. The comparison of results is shown in Table 2 and Table 3. From Table 2, we can see the performance of our method ranks at the top under both 1-shot and 5-shot tasks on CUB. Specifically, our method exceeds the second-best model DeepBDC by 1.2% and 1.0% respectively in 1-shot and 5-shot settings. From Table 3, we can see that our method beats state-of-the-art methods under both 5-way 1-shot and 5-way 5-shot tasks on the dataset of miniImageNet, tiredImageNet, and CIFAR-FS. Specifically, on miniImageNet, PAL and IE behave the second best respectively in 1-shot and 5-shot settings. Our method beats them by 0.93% and 1.39%. On tiredImageNet, our method outperforms the second-best MFS by 0.21% and 0.39% respectively in 1-shot and 5-shot settings. On CIFAR-FS, our method achieves 0.31% and 0.13% improvement over IE for 1-shot and 5-shot respectively. In brief, our method consistently outperforms the state-of-the-art FSC methods under both 1-shot and 5-shot tasks on multiple datasets. The promising results are achieved because of the generalization representation obtained by ensemble learning with multi-order on the base set. ## 5 Conclusion This paper analyzes the underlying work mechanism of ensemble learning in few-shot classification. 
A theorem is provided to illustrate that the true error on the novel classes can be reduced with ensemble learning on the base set, given the domain divergence between the base and the novel classes. Multi-order statistics on image features are further introduced to produce learning individuals to get an effective ensemble learning design. Comprehensive experiments on multiple benchmarks have illustrated that different-order statistics can generate diverse learning individuals due to their complementarity. The promising FSC performance with ensemble learning on the base set has validated the proposed theorem. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{CIFAR-FS} & \multicolumn{2}{c}{CUB} \\ \cline{2-5} & 1-shot & 5-shot & 1-shot & 5-shot \\ \hline EASY & 75.24\(\pm\)0.20 & 88.38\(\pm\)0.14 & 77.97\(\pm\)0.20 & 91.59\(\pm\)0.10 \\ ELMOS & 78.18\(\pm\)0.41 & 89.87\(\pm\)0.31 & 85.21\(\pm\)0.38 & 95.02\(\pm\)0.16 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of results with the most related method under 5-way 1-shot and 5-shot tasks on CIFAR-FS and CUB. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multirow{2}{*}{Venue} & \multicolumn{2}{c}{miniImageNet} & \multicolumn{2}{c}{tiredImageNet} & \multicolumn{2}{c}{CIFAR-FS} \\ \cline{3-10} & & & 1-shot & 5-shot & 1-shot & 5-shot \\ \hline **Meta-learning** & & & & & & & & \\ DeepEMD [15] & ResNet12 & CVPR’20 & 65.91\(\pm\)0.82 & 82.41\(\pm\) 0.56 & 71.16\(\pm\)0.87 & 86.03\(\pm\)0.58 & - & - \\ CC+rot [1] & ResNet12 & CVPR’20 & 62.93\(\pm\)0.45 & 79.87\(\pm\)0.33 & 70.53\(\pm\)0.51 & 84.98\(\pm\)0.36 & 76.09\(\pm\)0.30 & 87.83\(\pm\)0.21 \\ BMI [12] & ResNet12 & ICCV’21 & 67.04\(\pm\)0.63 & 83.63\(\pm\)0.29 & 68.99\(\pm\)0.50 & 85.49\(\pm\)0.34 & 73.45\(\pm\)0.47 & 88.04\(\pm\)0.33 \\ RENet [16] & ResNet12 & ICCV’21 & 67.60\(\pm\)0.44 & 82.58\(\pm\)0.30 & 71.61\(\pm\)0.51 & 85.28\(\pm\)0.35 & 74.51\(\pm\)0.46 & 86.60\(\pm\)0.32 \\ MeTAL[1] & ResNet12 & CVPR’21 & 66.61\(\pm\)0.28 & 81.43\(\pm\)0.25 & 70.29\(\pm\)0.40 & 86.17\(\pm\)0.35 & - & - \\ DAN [21] & ResNet12 & CVPR’21 & 67.76\(\pm\)0.46 & 82.71\(\pm\)0.31 & 71.89\(\pm\)0.52 & 85.96\(\pm\)0.35 & - & - \\ IEFI [15] & ResNet12 & ICLR’21 & 67.05\(\pm\)0.44 & 82.90\(\pm\)0.30 & 72.24\(\pm\)0.50 & 86.73\(\pm\)0.34 & - & - \\ APP2S [15] & ResNet12 & AAA’22 & 66.25\(\pm\)0.20 & 83.42\(\pm\)0.15 & 72.00\(\pm\)0.22 & 86.23\(\pm\)0.15 & 73.12\(\pm\)0.22 & 85.69\(\pm\)0.16 \\ DeepBDC [21] & ResNet12 & CVPR’22 & 67.34\(\pm\)0.43 & 84.46\(\pm\)0.28 & 72.34\(\pm\)0.49 & 87.31\(\pm\)0.32 & - & - \\ MFS [1] & ResNet12 & CVPR’22 & 68.32\(\pm\)0.62 & 82.71\(\pm\)0.46 & 73.63\(\pm\)0.88 & 87.59\(\pm\)0.57 & - & - \\ TPMN[14] & ResNet12 & CVPR’22 & 67.64\(\pm\)0.63 & 83.44\(\pm\)0.43 & 72.24\(\pm\)0.70 & 86.55 \(\pm\) 0.63 & - & - \\ HGNN [16] & ResNet12 & AAAI’22 & 67.02\(\pm\)0.20 & 83.00\(\pm\)0.13 & 72.05\(\pm\)0.23 & 86.49\(\pm\)0.15 & - & - \\ DST[15] & ResNet12 & ECCV’22 & 61.27\(\pm\)0.71 & 80.13\(\pm\)0.17 & 65.46\(\pm\) 0.70 & 82.41\(\pm\)0.53 & - & - \\ MTR[17] & ResNet12 & ECCV’22 & 62.69\(\pm\) 0.20 & 80.95\(\pm\)0.14 & 68.44 \(\pm\)0.23 & 84.20 \(\pm\)0.16 & - & - \\ **Transfer-learning** & & & & & & & & \\ Baseline++ [16] & ResNet12 & ICLR’19 & 48.24\(\pm\)0.75 & 66.43\(\pm\)0.63 & - & - & - & \\ Neg-Cosine [15] & WRN28 & ECCV’20 & 61.72\(\pm\)0.81 & 81.79\(\pm\)0.55 & - & - & - & \\ RFS [15] & WRN28 & ECCV’20 & 64.82\(\pm\)0.60 & 82.14\(\pm\)0.43 & 
71.52\(\pm\)0.69 & 86.03\(\pm\)0.49 & - & - \\ CBRM [16] & ResNet12 & MM’20 & 64.77\(\pm\)0.46 & 80.50\(\pm\)0.33 & 71.27\(\pm\)0.50 & 85.81\(\pm\)0.34 & - & - \\ SKD [15] & ResNet12 & Arxiv’21 & 67.04\(\pm\)0.85 & 83.54\(\pm\)0.54 & 72.03\(\pm\)0.91 & 86.50\(\pm\)0.58 & 76.9\(\pm\)0.9 & 88.9\(\pm\)0.6 \\ IE[15] & ResNet12 & CVPR’21 & 67.28\(\pm\)0.80 & 84.78\(\pm\)0.33 & 72.21\(\pm\)0.90 & 87.08\(\pm\)0.58 & 77.87\(\pm\)0.85 & 89.74\(\pm\)0.57 \\ PAL [15] & ResNet12 & ICCV’21 & 69.37\(\pm\)0.64 & 84.40\(\pm\)0.44 & 72.25\(\pm\)0.72 & 86.95\(\pm\)0.47 & 77.1\(\pm\)0.7 & 88.0\(\pm\)0.5 \\ CCF[16] & ResNet12 & CVPR’22 & 68.88\(\pm\)0.43 & 84.59\(\pm\)0.30 & - & - & - & - \\ ELMOS (ours) & ResNet12 & - & 70.30\(\pm\)0.45 & 86.17\(\pm\)0.26 & 73.84\(\pm\)0.49 & 87.98\(\pm
2301.13861
Bounding first-order quantum phase transitions in adiabatic quantum computing
In the context of adiabatic quantum computation (AQC), it has been argued that first-order quantum phase transitions (QPTs) due to localisation phenomena cause AQC to fail by exponentially decreasing the minimal spectral gap of the Hamiltonian along the annealing path as a function of the qubit number. The vanishing of the spectral gap is often linked to the localisation of the ground state in a local minimum, requiring the system to tunnel into the global minimum at a later stage of the annealing. Recent methods have been proposed to avoid this phenomenon by carefully designing the involved Hamiltonians. However, it remains a challenge to formulate a comprehensive theory of the effect of the various parameters and the conditions under which QPTs make the AQC algorithm fail. Equipped with concepts from graph theory, in this work we link graph quantities associated to the Hamiltonians along the annealing path with the occurrence of QPTs. These links allow us to derive bounds on the location of the minimal spectral gap along the annealing path, augmenting the toolbox for the analysis of strategies to improve the runtime of AQC algorithms.
Matthias Werner, Artur García-Sáez, Marta P. Estarellas
2023-01-31T18:56:28Z
http://arxiv.org/abs/2301.13861v2
# Bounding first-order quantum phase transitions in adiabatic quantum computing

###### Abstract

In the context of adiabatic quantum computation (AQC), it has been argued that first-order quantum phase transitions (QPTs) due to localisation phenomena cause AQC to fail by exponentially decreasing the minimal spectral gap of the Hamiltonian along the annealing path. The vanishing of the spectral gap is often linked to the localisation of the ground state in a local minimum, requiring the system to tunnel into the global minimum at a later stage of the annealing. Recent methods have been proposed to avoid this phenomenon by carefully designing the involved Hamiltonians. However, it remains a challenge to formulate a comprehensive theory of the effect of the various parameters and the conditions under which QPTs make the AQC algorithm fail. Equipped with concepts from graph theory, in this work we link graph quantities associated to the Hamiltonians along the anneal path with the occurrence of QPTs. These links allow us to derive bounds on the location of the minimal spectral gap along the anneal path, augmenting the toolbox for the design of strategies to improve the runtime of AQC algorithms.

## I Introduction

One of the central goals of quantum computing is the prospect of being able to efficiently solve classically hard computational problems. Adiabatic Quantum Computation (AQC), proposed by Farhi et al. [1], is a model of quantum computation particularly well suited to tackle optimization tasks that fall into this category. Roland et al. [2] showed that a quadratic speed-up of Grover's search algorithm [3] can be obtained not only through a gate-based quantum circuit but also by AQC. This, together with the proofs of equivalence between AQC and the gate-based model [4], indicates that a universal AQC device would provide quantum advantage. In AQC, a quantum system is prepared in the ground state of a relatively simple initial Hamiltonian, also called the driver Hamiltonian. The Hamiltonian of the system is then _slowly_ interpolated to a target Hamiltonian whose ground state encodes the solution of the target problem. By _slowly_ we mean that the rate of change of the Hamiltonian adheres to the adiabatic condition as stated by the adiabatic theorem [5]. As a consequence, the runtime of the algorithm is inversely related to the width of the spectral gap of the instantaneous Hamiltonian, and a rapidly closing spectral gap therefore dramatically increases the runtime, making the algorithm infeasible. A major cause of these exploding runtimes is first-order quantum phase transitions (QPTs) [6; 7] due to Anderson localisation, which result in (avoided) level-crossings that lead to an exponential closing of the spectral gap and, consequently, to an exponential runtime. Altshuler et al. [8] considered this to be a proof of the failure of AQC. However, this proposition has been contested, as methods are known to avoid the exponential closing of the gap [9; 10; 11; 12; 13], suggesting that there are specific conditions under which localisation phenomena can be avoided by careful design of the initial Hamiltonian. In the context of AQC, first-order QPTs can occur when an initially delocalized state transitions into a localized state that is supported in a local minimum, while only having negligible amplitudes in the global minimum.
As a consequence, as the annealing continues, the ground state transitions to the global minimum, resulting in a rapidly closing spectral gap as well as a discontinuity in the solution fidelity. The latter transition from the local to the global minimum constitutes a first-order QPT. In the spectrum of the interpolated Hamiltonians the first-order QPTs correspond to (avoided) level crossings of the ground and first excited state. However, ideally these transitions are avoided and the delocalized state transitions directly into the global minimum, which results in a smoother fidelity profile. The two scenarios are depicted in Figure 1. Such a qualitative difference raises the question of what are the distinguishing properties of the local minima that make the ground state localize there first. Amin and Choi linked the occurrence of first-order QPTs to the presence of a large number of low-energy states which are connected by a small number of bit flips [14]. This notion, however, is rather broad. In this work we push towards answering this question from a graph-theoretical perspective. By applying degenerate perturbation theory we show how particular graph theoretic quantities obtained from the initial Hamiltonian can be related to the spectral gap along the annealing process. These quantities are well understood in spectral graph theory, linking them to the spectral gap of adjacency matrices [15; 16; 17; 18]. Importantly, these links allow us to give conditions for the occurrence or absence of first-order QPTs and derive bounds on its location, shedding some light into this particular error mechanism common in AQC algorithms with the prospect of finding strategies to mitigate it. The use of graph theory has proven to be a useful tool in the understanding of many-body systems [19; 20]. This work is structured as follows: first we review the basics of the AQC model. In the following section we give the necessary definitions and show how degenerate perturbation theory allows for the introduction of certain graph theoretical concepts, specifically the conductance of a subset of nodes \(V\) and the maximum degree of the respective induced sub-graph \(G(V)\). Using these concepts, we derive bounds on the energy of states localized in local minima of the energy landscape, which further allows us to derive bounds on the location of the minimal spectral gap along the annealing path. In order to numerically investigate the validity of the derived bounds, we then exactly solve artificially generated toy model instances, as well as an instance of an NP-complete problem and compare the observed location of the minimal spectral gap with the predictions of our bounds. We conclude this work by discussing the tightness and interpretation of the derived bounds. ## II Adiabatic quantum computation AQC works by interpolating between a driver Hamiltonian \(H_{D}\), also called initial Hamiltonian, and the target Hamiltonian \(H_{T}\). The ground state of \(H_{D}\) needs to be simple to prepare, while \(H_{T}\) has been carefully designed such that the ground state encodes the solution of the problem at hand. In the case of optimization problems, this is often done by formulating the problem to a quadratic unconstrained binary optimization (QUBO) [21]. 
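As a concrete illustration of the interpolation described above, the following sketch builds a small transverse-field driver and a random diagonal target, scans the interpolated Hamiltonian, and records the spectral gap whose minimum governs the runtime. The system size, the random target energies and the sampling grid are arbitrary illustrative choices, not instances taken from the paper.

```python
import numpy as np

def sigma_x_sum(n_qubits):
    """Sum of Pauli-x operators over all qubits (the transverse-field driver, up to sign)."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    dim = 2 ** n_qubits
    h = np.zeros((dim, dim))
    for i in range(n_qubits):
        op = np.array([[1.0]])
        for j in range(n_qubits):
            op = np.kron(op, sx if j == i else np.eye(2))
        h += op
    return h

n = 6
rng = np.random.default_rng(1)
H_D = -sigma_x_sum(n)                          # driver Hamiltonian
H_T = np.diag(rng.uniform(-1.0, 0.0, 2 ** n))  # diagonal target energies (toy choice)

s_grid = np.linspace(0.0, 1.0, 101)
gaps = []
for s in s_grid:
    evals = np.linalg.eigvalsh((1 - s) * H_D + s * H_T)   # H(s) = (1-s) H_D + s H_T
    gaps.append(evals[1] - evals[0])

g_min = min(gaps)
s_min = s_grid[int(np.argmin(gaps))]
print(f"minimal gap {g_min:.4f} at s = {s_min:.2f}")
```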
The full Hamiltonian as a function of the interpolation variable \(s=s(t)\in[0,1]\) is given by \[H(s)=(1-s)H_{D}+sH_{T} \tag{1}\] The instantaneous eigenstates of \(H(s)\) are denoted by \(|\Psi_{n}(s)\rangle\) with respective eigenvalues \(E_{n}=E_{n}(s)\) for \(n=0,...,N-1\) such that \(E_{0}\leq E_{1}\leq...\leq E_{N-1}\) with \(N\) the dimension of the Hilbert space. Consider \[|\Psi(s)\rangle=\sum_{n}a_{n}(s)|\Psi_{n}(s)\rangle \tag{2}\] the state of the quantum system. At \(s=0\) we prepare the system such that \(|a_{0}(s=0)|^{2}=1\). In order to ensure \(|a_{0}(s=1)|^{2}\approx 1\), the rate of change has to obey the adiabatic theorem [5] \[\frac{|\langle\Psi_{1}(s)|\frac{dH}{ds}|\Psi_{0}(s)\rangle|}{g_{min}^{2}}\leq\epsilon \tag{3}\] for \(\epsilon<<1\) and where \[g_{min}=\min_{s}\left(E_{1}(s)-E_{0}(s)\right) \tag{4}\] is the minimal spectral gap between ground and first excited state. The matrix element in the numerator of Eq. (3) can typically be assumed to be bounded by a constant. Therefore, the runtime of an AQC algorithm is determined by the minimal gap \(g_{min}\). In this work we will assume the target Hamiltonians \(H_{T}\) to be diagonal in the computational basis, i.e. \[H_{T}=\text{diag}(E_{0}^{T},E_{1}^{T},...,E_{N-1}^{T}) \tag{5}\] with eigenstates \(|z\rangle\) for each eigenvalue \(E_{z}^{T}\). A common choice for \(H_{D}\) is \[H_{D}=-\sum_{i=0}^{N_{Q}-1}\sigma_{i}^{x} \tag{6}\] where \(\sigma_{i}^{x}\) is the Pauli-x operator applied on qubit \(i\). Various works [22; 23] investigated the impact of \(H_{D}\) on the spectral gap and hence on the runtime. In this work we investigate the impact of \(H_{D}\) on the runtime as well, however we will focus on the underlying graph of the Hamiltonian and its role in creating first-order QPTs. ## III Graph theory and QPTs ### Basic definitions We consider driver Hamiltonians \(H_{D}\) that can be associated with a graph \(G\) in the Hilbert space. A graph \(G=(\mathcal{V},\mathcal{E})\) is defined by a set of nodes \(\mathcal{V}\), as well as a set of edges \[\mathcal{E}:=\{(i,j):i,j\in\mathcal{V}\text{ connected in }G\} \tag{7}\] Figure 2 shows an example of a graph. For each graph \(G\) one can define the adjacency matrix \(A_{G}\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{V}|}\) with \[(A_{G})_{ij}=\begin{cases}1\text{ if }(i,j)\in\mathcal{E}\\ 0\text{ else}\end{cases} \tag{8}\] Figure 1: Ground state localisation with (red arrows and solid graph profiles) and without (green arrow and dashed graph profiles) tunneling to the global minimum; the inset shows the spectral gaps and solution fidelities over the interpolation parameter, or annealing schedule, \(s\). As it is the case with Eq. (6), we assume the elements of \(H_{D}\) to be negative. Then we can write more generally \[H_{D}=\frac{-1}{d}A_{G} \tag{9}\] with the adjacency matrix of a \(d\)-regular simple graph \(G=(\mathcal{V},\mathcal{E})\) where the nodes \(\mathcal{V}\) are the computational basis states, denoted by \(|z\rangle\), and the edges \(\mathcal{E}\) are the non-zero matrix elements of \(H_{D}\) as depicted in Figure 2. For simplicity, we will limit the analysis to \(d\)-regular simple graphs, as many commonly used \(H_{D}\) such as Eq. (6) fall into this category. The scaling by \(\frac{1}{d}\) is introduced to normalize the ground state energies of the investigated \(H_{D}\) to -1. Further we will make use of the following concepts. **Definition 1**: _(Induced subgraph) Let \(G=(\mathcal{V}\), \(\mathcal{E})\) a graph and \(V\subseteq\mathcal{V}\). 
The induced subgraph \(G(V)\subseteq G\) is defined as the graph_ \[G(V)=(V,E) \tag{10}\] _with_ \[E=\{(i,j):(i,j)\in\mathcal{E}\wedge i,j\in V\} \tag{11}\] **Definition 2**: _(Edge boundary) Let G = (\(\mathcal{V}\), \(\mathcal{E}\)) a graph and \(V\subseteq\mathcal{V}\). The edge boundary \(\partial V\subseteq\mathcal{E}\) of \(V\) is defined as_ \[\partial V=\{(i,j):i\in V\wedge j\in\mathcal{V}\backslash V\} \tag{12}\] **Definition 3**: _(Conductance) Let G = (\(\mathcal{V}\), \(\mathcal{E}\)) a graph and \(V\subseteq\mathcal{V}\). The conductance \(\phi(V)\) of \(V\) is defined as_ \[\phi(V)=\frac{|\partial V|}{|V|} \tag{13}\] In a slight abuse of notation we will use the symbol \(V\) for both the subset of nodes in \(G\), as well as the subspace of the Hilbert space spanned by (nearly) degenerate eigenstates of \(H_{T}\). ### From degenerate perturbation theory to spectral graph theory We will make use of degenerate perturbation theory. To this end let us define the set \(V\) as the set of (nearly) degenerate eigenstates of \(H_{T}\) with \(E_{z}^{T}\approx E_{V}^{T}\), with \(E_{V}^{T}\) being the energy of the local minimum \(V\). Considering \(sH_{T}\) the unperturbed Hamiltonian and \((1-s)H_{D}\) the perturbation, we have to diagonalize \(H(s)\) on the subspace spanned by \(V\). Given that \(H_{T}\) is (nearly) degenerate in this subspace, we have to solve the eigenvalue equation \[E_{V}(s)|V)=(1-s)H_{D}^{\prime}|V)+sE_{V}^{T}|V) \tag{14}\] where \(H_{D}^{\prime}\) is the projection of \(H_{D}\) onto the subspace \(V\). Note that \(|V)\) by definition is an element of the subspace \(V\) and hence we consider it a state localized in \(V\) with energy \(E_{V}(s)\). From Eq. (14) it follows directly that \(|V)\) has to be an eigenvector of \[H_{D}^{\prime}=\frac{-1}{d}A_{G(V)} \tag{15}\] with \(G(V)\) the sub-graph of \(G\) induced by \(V\) and \(A_{G(V)}\) its adjacency matrix. The eigenvector with the minimal energy is the principal eigenvector of \(G(V)\) and its energy is given by \[E_{V}(s)=-(1-s)\frac{1}{d}\lambda_{V}+sE_{V}^{T} \tag{16}\] where \(\lambda_{V}\) is the principal eigenvalue of \(G(V)\). It can be shown for a \(d\)-regular graph \(G\) that \[d-\phi(V)\leq\lambda_{V}\leq d_{max}(V) \tag{17}\] where \(d_{max}(V)\) is the maximal degree of \(G(V)\) (see Appendix A). Note that while \(G\) is \(d\)-regular, \(G(V)\) may be irregular and \(d_{max}(V)\leq d\). Using these ingredients we obtain both a lower bound on \(E_{V}(s)\) \[E_{V}(s)\geq-(1-s)\frac{d_{max}(V)}{d}+sE_{V}^{T} \tag{18}\] and a corresponding upper bound \[E_{V}(s)\leq(1-s)\left(\frac{\phi(V)}{d}-1\right)+sE_{V}^{T} \tag{19}\] The bounds Eq. (18) and Eq. (19) are the first key result of this work. In the next section, we will use them to derive bounds on the location of first-order quantum phase transitions along the anneal. Figure 2: An example of a 3-regular Graph \(G\) of the configuration space spanned by the eigenvectors \(|z\rangle\) of \(H_{T}\). The \(|z\rangle\) are represented by the nodes \(\mathcal{V}\), while the set of edges \(\mathcal{E}\) correspond to the off-diagonals given by the matrix elements of \(H_{D}\). The nodes in the shaded area represent the degenerate subspace \(V\) of \(H_{T}\), inducing the subgraph \(G(V)\) (black nodes, solid black edges). All edges leaving \(V\) (dashed black edges) constitute the edge boundary \(\partial V\) of \(V\). 
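The quantities entering these bounds are straightforward to evaluate numerically. The sketch below checks the inequality \(d-\phi(V)\leq\lambda_{V}\leq d_{max}(V)\) of Eq. (17) on a random \(d\)-regular graph, with the subset \(V\) chosen, arbitrarily and only for illustration, as a ball of radius two around one node.

```python
import networkx as nx
import numpy as np

# Illustrative check of Eq. (17); the graph and the subset V are arbitrary choices,
# not instances from the paper.
d, n = 3, 256
G = nx.random_regular_graph(d, n, seed=7)
V = set(nx.single_source_shortest_path_length(G, 0, cutoff=2))   # ball of radius 2

GV = G.subgraph(V)                                    # induced subgraph G(V)
boundary = sum(1 for u, v in G.edges if (u in V) != (v in V))
phi = boundary / len(V)                               # conductance phi(V), Eq. (13)
d_max = max(dict(GV.degree).values())                 # maximal degree of G(V)
lam = np.linalg.eigvalsh(nx.to_numpy_array(GV))[-1]   # principal eigenvalue of G(V)

print(f"d - phi(V) = {d - phi:.3f} <= lambda_V = {lam:.3f} <= d_max(V) = {d_max}")
```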
### Bounding first-order quantum phase transitions Quantum phase transitions [24; 25] are also called zero-temperature phase transitions, as they are driven by the contradiction between quantum fluctuations and minimizing some potential. Classical phase transitions are driven instead by the entropic fluctuations. At zero temperature, the entropic part of the potential goes to zero, but quantum fluctuations persist. Level crossings of the ground and first excited states in that case can be seen as first-order phase transitions according to the Ehrenfest classification, since the ground state energy is identical to the thermodynamic free energy at zero temperature. The first derivative of the free energy with respect to the annealing schedule s will be discontinuous at the level crossing, hence level crossings are considered a first-order QPT [26; 14]. Using the bounds Eq. (18) and Eq. (19), it is possible to estimate the location of the crossing of two energy levels within first-order perturbation. There are two conditions to be met for a level crossing to occur. Let \(E_{local}(s)\) and \(E_{global}(s)\) the energies of states localized in the potentially degenerate local and the global minimum respectively, as depicted in Figure 3. First, the energies are required to cross at some value of \(s^{*}\in[0,1]\) \[E_{global}(s^{*})=E_{local}(s^{*}) \tag{20}\] However, this is not sufficient for the level crossing to lead to a first-order QPT. If at \(s^{*}\) we find \[E_{deloc}(s^{*})<E_{local}(s^{*})=E_{global}(s^{*}) \tag{21}\] the instantaneous ground state would still be the delocalized state and the closing gap between the local and global minimum would not lead to a ground state transition. Hence, the second condition is that the crossing between the global and local minimum has to occur at a time \(s^{*}\) when \[E_{local}(s^{*})=E_{global}(s^{*})<E_{deloc}(s^{*}) \tag{22}\] We will refer to the transition from the delocalized state to either of the localized states as delocalized-localized transition, while the transition from one localized state to another is referred to as localized-localized transition. By our assumptions, all the \(H_{D}\) in the class of Hamiltonians Eq. (9) that we consider here have the unique ground state \[|\psi_{0}\rangle=\frac{1}{\sqrt{N}}\sum_{z}|z\rangle \tag{23}\] with eigenvalue -1. To obtain \(E_{deloc}(s)\), we reverse the roles of the Hamiltonians and treat \(H_{D}\) as the unperturbed and \(H_{T}\) as the perturbing Hamiltonian. Using first-order non-degenerate perturbation theory we find that \[\begin{split} E_{deloc}(s)&=-(1-s)+s\langle\psi_{0} |H_{T}|\psi_{0}\rangle\\ &=-(1-s)+s\langle E_{T}\rangle\end{split} \tag{24}\] with \[\langle E_{T}\rangle=\frac{1}{N}\sum_{z}E_{z}^{T} \tag{25}\] At \(s=0\), \(E_{deloc}\) is the minimal energy, so the system will be in state \(|\psi_{0}\rangle\). As \(s\) increases, \(E_{deloc}(s)\) will cross either \(E_{global}(s)\) or \(E_{local}(s)\) first. This crossing will demarcate a delocalized-localized transition. If \(E_{deloc}(s)\) crosses \(E_{global}(s)\) first, the ground state will transition directly from the delocalized state to the global minimum (compare Figure 3 (a)). In case \(E_{deloc}(s)\) first crosses \(E_{local}(s)\), the ground state transitions first from the delocalized state to the local minimum. 
At a later time \(s^{*}\) when \(E_{local}(s)\) crosses \(E_{global}(s)\), there will be an additional localized-localized transition from the local to the global minimum, as depicted in Figure 3 (b). The location of the localized-localized transition \(s^{*}\) is given by the crossing of \(E_{global}(s)\) and \(E_{local}(s)\). Assuming the ground state of \(H_{T}\) to be non-degenerate, \(E_{global}(s)\) can be computed with non-degenerate perturbation theory \[E_{global}(s)=sE_{0}^{T} \tag{26}\] If the degenerate first excited space of \(H_{T}\) is \(V\), we can bound \(E_{local}(s)\) using Eq. (18) and Eq. (19). This results in the following bounds on \(s^{*}\) \[\frac{1-\frac{\phi(V)}{d}}{1-\frac{\phi(V)}{d}+\Delta E^{T}}\leq s^{*}\leq \frac{d_{max}(V)}{d_{max}(V)+d\Delta E_{T}} \tag{27}\] with the spectral gap of \(H_{T}\) \[\Delta E_{T}=E_{1}^{T}-E_{0}^{T} \tag{28}\] The lower bound depends on the conductance of \(V\), while the upper bound depends on the maximum degree. We will refer to these bounds as the conductance and the degree bound respectively. The bounds Eq. (27) are the second key result of this work. To predict if there will be a phase transition, we also need to know the location of the delocalized-localized transition to the global minimum \(s^{\prime}\) by solving \[E_{deloc}(s^{\prime})=E_{global}(s^{\prime}) \tag{29}\] which renders \[s^{\prime}=\frac{1}{1+\langle E_{T}\rangle-E_{0}^{T}} \tag{30}\] This allows to classify problem instances into three categories 1. the instance has no first-order QPT due to a localized-localized transition (Figure 3 (a)) if \[s^{\prime}\geq\frac{d_{max}(V)}{d_{max}(V)+d\Delta E_{T}}\] 2. the instance has a first-order QPT due to a localized-localized transition(Figure 3 (b)) if \[s^{\prime}\leq\frac{1-\frac{\phi(V)}{d}}{1-\frac{\phi(V)}{d}+\Delta E_{T}}\] 3. if \(s^{\prime}\) is in between the bounds, no statement about first-order QPTs can be made Note that our first-order analysis allows to estimate the value \(s^{*}\) where a level crossing occurs and provides a qualitative understanding on the conditions leading to first-order QPTs, however, it neglects higher-order interactions of the energy levels that would lift the degeneracy at \(s^{*}\) and lead to an avoided level crossing instead. While these effects are essential for tunneling to occur, they are not required to analyze the conditions when tunneling becomes necessary in the first place. As will be discussed in Section IV, our analysis allows to estimate the location of the minimal spectral gap along the annealing path, while the size of the minimal gap can be estimated using higher-order perturbation theory [14]. ### Correction of the degree bound using graph symmetries We will discuss how graph symmetries of \(G(V)\) can be used to improve the degree bound on the principal eigenvalue \(\lambda_{V}\). A more detailed discussion of this approach can be found in Appendix B. If \(G(V)\) is an undirected simple graph, then \(\lambda_{V}\leq d_{max}(V)\), as discussed above. The graph symmetries of \(G(V)\) are represented by permutation matrices \(\Pi\)[27]. **Definition 4**: _(Permutation matrix) A permutation matrix has in each row and each column one entry 1 and 0 in all other entries. 
Together with the standard matrix product the permutation matrices form a group \(\text{Sym}(V)\)._ Permutation matrices are orthogonal and bijectively map the set of nodes onto itself \[\Pi|z\rangle=|z^{\prime}\rangle \tag{31}\] The permutation matrices that commute with the adjacency matrix of a graph form the automorphism group of said graph. **Definition 5**: _(Automorphism group) Let \(G(V)=(V,E)\) be a graph. The automorphism group of \(G(V)\) is denoted by \(\mathcal{S}_{V}\subseteq\text{Sym}(V)\) and defined as_ \[\mathcal{S}_{V}=\{\Pi:\Pi\in\text{Sym}(V),[\Pi,A_{G(V)}]=0\} \tag{32}\] Note that, by definition, the elements of the automorphism group \(\mathcal{S}(V)\) conserve the neighborhood relations of \(G(V)\), as for two nodes \(|z_{1}\rangle,|z_{2}\rangle\in V\) and their images \(|z_{1}^{\prime}\rangle=\Pi|z_{1}\rangle\) and \(|z_{2}^{\prime}\rangle=\Pi|z_{2}\rangle\) it holds that \[(A_{G(V)})_{z_{1}z_{2}}=(A_{G(V)})_{z_{1}^{\prime}z_{2}^{\prime}} \tag{33}\] Consider \(|x\rangle=\sum_{z}a_{z}|z\rangle\) an eigenvector of \(A_{G(V)}\) with eigenvalue \(\lambda_{V}\). Since \(A_{G(V)}\) commutes with every permutation matrix \(\Pi\in\mathcal{S}_{V}\), \(\Pi|x\rangle\) must also be an eigenvector of \(A_{G(V)}\) with eigenvalue \(\lambda_{V}\). However, if the eigenspace of \(\lambda_{V}\) is one-dimensional, \(\Pi|x\rangle\) must be proportional to \(|x\rangle\), implying that \[\Pi|x\rangle=\lambda_{\Pi}|x\rangle \tag{34}\] for some eigenvalue \(\lambda_{\Pi}\) with \(|\lambda_{\Pi}|=1\). That \(|\lambda_{\Pi}|=1\) follows, since all \(\Pi\in\text{Sym}(V)\) are orthogonal matrices. Hence, any non-degenerate eigenvector of \(A_{G(V)}\) must also be an eigenvector of all permutation matrices that commute with \(A_{G(V)}\). In order to respect Eq. (34), the \(a_{z}\) of nodes connected by some symmetry of the graph need to have the same amplitude and a fixed phase relation. Assuming \(G(V)\) is connected, the principal eigenvalue \(\lambda_{V}\) is non-degenerate and the principal eigenvector can be chosen with all positive coefficients, according to the Perron-Frobenius theorem. This means that the principal eigenvector lies in the subspace spanned by \[|\xi\rangle=\frac{1}{\sqrt{|\xi|}}\sum_{z\in\xi}|z\rangle \tag{35}\] where the \(\xi\) are the sets of nodes that are mapped onto each other by some symmetry. As discussed in Appendix B, the \(\xi\) are equivalence classes of nodes. Figure 3: Cartoon of approximate energies with level crossings between the localized states (red stars) and between localized states and the delocalized state (green squares). The localized states correspond to energy \(E_{global}\) (blue dashed line) and \(E_{local}\) (orange dash-dotted line) respectively, while the delocalized state has the energy \(E_{deloc}\) (solid green line). **(a):** the crossing of the localized states occurs at a time \(s\) when the system is still delocalized, hence the ground state will transition to the global minimum directly and only the delocalized-localized transition will be observed. **(b):** the system transitions from the delocalized state first to the local minimum and subsequently has to tunnel into the global minimum.
The adjacency matrix elements in this subspace have the form \[\langle\xi|A_{G(V)}|\xi^{\prime}\rangle=\frac{1}{\sqrt{|\xi||\xi^{\prime}|}}\sum_{ z\in\xi,z^{\prime}\in\xi^{\prime}}(A_{G(V)})_{z,z^{\prime}} \tag{36}\] Applying Gershgorin's circle theorem to this matrix, we find \[\lambda_{V}\leq\max_{\xi}\sum_{\xi^{\prime}}|\langle\xi|A_{G(V)}|\xi^{\prime}\rangle| \tag{37}\] which evaluates to \[\lambda_{V}\leq\max_{\xi}\sum_{\xi^{\prime}}\sqrt{\frac{|\xi|}{|\xi^{\prime}|} }|E_{\xi\xi^{\prime}}| \tag{38}\] where \(|E_{\xi\xi^{\prime}}|\) is the number of nodes of equivalence class \(\xi^{\prime}\) in the neighborhood of a node of equivalence class \(\xi\). For comparison, in the computational basis \(|z\rangle\), Gershgorin renders the bound \[\lambda_{V}\leq\max_{\xi}\sum_{\xi^{\prime}}|E_{\xi\xi^{\prime}}| \tag{39}\] The right-hand side of Eq. (38) can be smaller than the right-hand side of Eq. (39), thus symmetries of the graph can tighten the bound. It is, in fact, reasonable to expect symmetries to move the bounds Eq. (17) closer to each other, as the conductance bound is based on the uniform superposition of all nodes in \(V\) as a variational ansatz (see Appendix A). The uniform superposition is naturally invariant under all permutations of nodes in \(V\).

## IV Numerical investigation

### Simple toy model

We will first analyze the developed bounds in an idealized toy model. To this end we generate random \(d\)-regular simple graphs of size \(N=256\). We place the local and global minima as far apart on the graph as possible and iteratively grow the local minimum by randomly selecting a node \(i\in\mathcal{N}(V)\) and adding it to the set \(V\leftarrow\{i\}\cup V\). Here, \(\mathcal{N}(V)\) denotes the neighborhood of \(V\), i.e. any node in \(\mathcal{V}\setminus V\) that shares at least one edge with any node in \(V\). As the energy of the global minimum we choose \(E_{0}^{T}=-1\), while the energy of the local minimum is chosen as \(E_{V}^{T}=-1+\Delta E^{T}\) with \(\Delta E^{T}\) sampled uniformly between \(0\) and \(1\). The first-order QPTs can be identified easily by looking at the solution fidelity \(F(s)\) along the anneal, which is defined as the overlap between the instantaneous ground state \(|\Psi_{0}(s)\rangle\) and the target ground state \(|0\rangle\) \[F(s)=|\langle\Psi_{0}(s)|0\rangle|^{2} \tag{40}\] In Figure 4 we show the ground state energy \(E_{0}(s)\), the solution fidelity \(F(s)\) and the spectral gap between instantaneous ground and first excited state \(E_{1}(s)-E_{0}(s)\) over the annealing of a toy model instance with (a, c, e) and without (b, d, f) a first-order QPT according to the classification based on the conductance and degree bounds Eq. (27). Note that we are analyzing the spectral properties of the instantaneous Hamiltonian \(H(s)\) as a function of \(s\), rather than the concrete dynamics of a system. We observe that the true ground state energy follows closely the respective minimum of the perturbed energies of the delocalized state and the local and global minima. As the energy of the local minimum is bounded by Eq. (18) and Eq. (19), it is depicted as a shaded area (orange). The bounds on the transition point between the local and global minimum from Eq. (27) are shown as the shaded, striped interval (red) and bound the location of the abrupt jump in \(F(s)\) as well as the location of the minimal spectral gap as seen in Figure 4 (a, c, e).
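A minimal numerical sketch of this protocol is given below: it builds one random \(d\)-regular instance, evaluates the bounds of Eq. (27) and the delocalized-localized crossing \(s^{\prime}\) of Eq. (30), and compares them with the location of the minimal gap from exact diagonalization. The graph size, the construction of \(V\) and the value of \(\Delta E_{T}\) are illustrative assumptions rather than the exact instances behind Figure 4, and whether the exact \(s_{min}\) falls inside the bounds depends on whether the instance actually exhibits a localized-localized transition.

```python
import networkx as nx
import numpy as np

d, N, dE = 3, 256, 0.4
G = nx.random_regular_graph(d, N, seed=3)
A = nx.to_numpy_array(G)
H_D = -A / d                                          # Eq. (9)

# target energies: global minimum at node 0, degenerate local minimum on a blob V
far = max(nx.single_source_shortest_path_length(G, 0).items(), key=lambda kv: kv[1])[0]
V = set(nx.single_source_shortest_path_length(G, far, cutoff=2))
V.discard(0)
E_T = np.zeros(N)
E_T[0] = -1.0
for z in V:
    E_T[z] = -1.0 + dE
H_T = np.diag(E_T)

# graph quantities of the local minimum and the bounds of Eq. (27) / Eq. (30)
boundary = sum(1 for u, v in G.edges if (u in V) != (v in V))
phi = boundary / len(V)
d_max = max(dict(G.subgraph(V).degree).values())
lower = (1 - phi / d) / (1 - phi / d + dE)
upper = d_max / (d_max + d * dE)
s_prime = 1.0 / (1.0 + E_T.mean() - E_T.min())

# exact location of the minimal spectral gap of H(s)
s_grid = np.linspace(0.01, 0.99, 99)
gaps = [np.diff(np.linalg.eigvalsh((1 - s) * H_D + s * H_T)[:2])[0] for s in s_grid]
print(f"bounds on s*: [{lower:.3f}, {upper:.3f}], s' = {s_prime:.3f}, "
      f"exact s_min = {s_grid[int(np.argmin(gaps))]:.3f}")
```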
In the case where the bounds predict an absence of a first-order QPT (Figure 4 b, d, f), the true ground state energy is well described by the delocalized energy and the perturbed global minimum. The solution fidelity is smooth and the minimal spectral gap is significantly larger. Figure 4: Instantaneous ground state energy \(E_{0}(s)\) (a), solution fidelity \(F(s)\) (c) and instantaneous spectral gap \(E_{1}(s)-E_{0}(s)\) (e) with a first-order QPT. The true ground state energy (black) follows closely the predicted energies of the delocalized state (solid green line), the local minimum (orange shaded area) and the global minimum (dashed blue line). The energy of the local minimum is given as a shaded area according to the bounds Eq. (18) and Eq. (19). The solution fidelity is discontinuous within the predicted bounds Eq. (27) (red shaded striped interval) coinciding with the minimal spectral gap. (b), (d) and (f) show the respective quantities for a problem instance without a first-order QPT. Again, the true ground state energy follows the predicted energies, the solution fidelity is smoother, while the spectral gap is larger. Following the procedure described above we generate several problem instances and predict the bounds on the localized-localized transition. As first-order QPTs are typically associated with an exponentially closing gap, it is reasonable to assume the location of the minimal spectral gap \(s_{min}\) to coincide with the location of the QPT \(s^{*}\). Therefore, we can test the theory by comparing the predictions of \(s^{*}\) with \(s_{min}\) obtained from exact diagonalization of the instantaneous Hamiltonian \(H(s)\). In Figure 5, we show the predicted bounds over the observed location of the minimal spectral gap from exact diagonalization. The diagonal (red dashed line) denotes the equality of predicted and observed \(s_{min}\). If a point is close to the diagonal, it means that the predicted and the observed \(s_{min}\) are close to each other. The derived bounds Eq. (27) reliably cross the diagonal, indicating that the true \(s_{min}\) is within the predicted bounds. We further classify the problem instances according to the presence (blue) or absence (red) of first-order QPTs, as well as the undecidable instances (orange). The problem instances with a QPT are well described by the bounds, as are the ambiguous cases. For the instances without a QPT, \(s_{min}\) is not predicted well. This can be explained by the fact that we associate the minimal gap with the QPT, but these instances do not display a level-crossing between the local and global minimum, as the localized-localized transition would happen at a value of \(s<s^{\prime}\) when the ground state is still delocalized. The phenomenon of first-order QPTs has been investigated in a previous work by Amin et al. [14], where they employ second-order non-degenerate perturbation theory to calculate the location of the level-crossing between the local and global minimum. Non-degenerate perturbation theory diverges for degenerate eigenstates, making the predictions less accurate as the local minima become wide. As first-order QPTs are associated with wide local minima [28], we would expect degenerate perturbation theory to better describe the relevant energy corrections. Furthermore, as we argue here, degenerate perturbation theory leads to an interesting unification with spectral graph theory. For comparison we apply their predictions as well. To make non-degenerate perturbation theory applicable, the toy model needs to be modified slightly.
As before, the first state in the set \(V\) has its energy set to \(-1+\Delta E_{T}\), but the energies of all other states in \(V\) are set to \(-1+\Delta E_{T}+\epsilon\). Here we use \(\epsilon=0.01\). Then we can take the first state in \(V\) as the local minimum and compute its energy correction by coupling to its neighbors using second-order non-degenerate PT. This is necessary since for degenerate neighboring states, the second-order non-degenerate energy corrections would diverge and not render meaningful predictions. The predictions from non-degenerate perturbation theory are also shown in Figure 5 (green crosses). Figure 5 shows that the predictions based on non-degenerate perturbation theory are further away from the diagonal, implying that the predictions are less accurate. Qualitatively, first-order non-degenerate perturbation theory does not predict level crossings for \(s\in(0,1]\), as discussed in [14]. Our approach shows that the energy corrections leading to first-order QPTs can be described by first-order degenerate perturbation theory, while non-degenerate perturbation theory requires second-order corrections, i.e. the relevant energy corrections are a first-order, rather than a second-order, effect.

### Weighted Maximum Independent Set

Lastly, we apply our analysis to a problem instance of an NP-complete problem, namely the Weighted Maximum Independent Set (WMIS) problem. As our analysis requires extensive knowledge of the energy landscape and eigenstates of \(H_{T}\) in order to compute the relevant quantities, it is infeasible to apply it in a more automated manner. For this reason, as well as for direct comparison with the results of Amin et al., we use the same problem instance as in [14]. Consider a vertex-weighted graph \(G_{P}=(\mathcal{V}_{P},\mathcal{E}_{P},w)\), where \(w:i\to w(i)\) is the weight of node \(i\in\mathcal{V}_{P}\). This graph defines a problem instance and the nodes are identified with qubits. This is a strictly distinct type of graph from the graphs used in the theoretical analysis, where the nodes are individual basis states. This distinction is highlighted by the subscript \(P\). Figure 5: Predicted location of the minimal spectral gap \(s_{min}\) according to Eq. (27) over the observed location from exact diagonalization. The bounds for instances with a first-order QPT (blue) reliably cross the diagonal (red dashed line), indicating that the true \(s_{min}\) is within the predicted bounds Eq. (27). For instances without a QPT (red) the true value is outside of the bounds, as these instances do not exhibit a localized-localized transition. For comparison, the predictions of the location of the QPT according to second-order non-degenerate perturbation theory [14] are shown (green crosses). The predictions from non-degenerate perturbation theory are further away from the observed values, especially if the observed \(s_{min}\) is further away from 1. The WMIS problem is to find the largest subset \(S\subseteq\mathcal{V}_{P}\) such that no two nodes in \(S\) share an edge (i.e. it is independent), while simultaneously maximizing the weight \[w(S)=\sum_{i\in S}w(i) \tag{41}\] This optimization problem can be cast into the target Ising Hamiltonian \[H_{T}=\sum_{i\in\mathcal{V}_{P}}h_{i}\sigma_{i}^{z}+\sum_{i,j\in\mathcal{E}_{P }}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z} \tag{42}\] with the fields and couplings being \[\begin{split} h_{i}&=\sum_{i,j\in\mathcal{E}_{P}}J_ {ij}-2w(i)\\ J_{ij}&>\min\{w(i),w(j)\}\end{split} \tag{43}\]
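To make the mapping of Eqs. (41)-(43) concrete, the following sketch builds the diagonal of \(H_{T}\) for a small hypothetical four-node graph and recovers a maximum-weight independent set by brute force. The graph, weights and couplings are illustrative assumptions chosen only to satisfy the stated constraints; they are not the 15-qubit instance of [14].

```python
import itertools

# Tiny hypothetical instance: a triangle with a pendant node.  With the spin
# convention used here, sigma^z = +1 marks a node selected into the set S.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
w = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.5}
J = {e: 2.0 for e in edges}                  # J_ij > min(w_i, w_j), Eq. (43)
n = len(w)
h = {i: sum(Jij for e, Jij in J.items() if i in e) - 2 * w[i] for i in range(n)}

def ising_energy(s):
    """Diagonal element of H_T (Eq. 42) for a classical spin configuration s."""
    return (sum(h[i] * s[i] for i in range(n))
            + sum(J[(i, j)] * s[i] * s[j] for (i, j) in edges))

best = min(itertools.product([+1, -1], repeat=n), key=ising_energy)
print("ground state:", best, "-> independent set:",
      [i for i, si in enumerate(best) if si == +1])
```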
Amin et al. use two different weights on the nodes, \(w(i)=w_{G}=1\) for all nodes that partake in the optimal solution and \(w(i)=w_{L}<2w_{G}\) for all nodes outside the optimal solution. The couplings are chosen as \(J_{ij}=2\). The parameter \(w_{L}\) can be altered to change the depth of the local minima. The problem graph has \(N_{q}=15\) nodes, each represented by a single qubit. The driver Hamiltonian is the transverse field driver from Eq. (6) and its associated graph is the \(N_{q}\)-dimensional hypercube. \(H_{T}\) has 27 shallow local minima. These local minima are separated by two bit-flips, allowing for tunneling between the local minima. Therefore, we can consider the 27 local minima plus the shallow potential walls between them as one nearly degenerate local minimum \(V\). The relevant quantities for \(V\) can be counted \[\begin{split}|V|&=135\\ |\partial V|&=1539\\ d_{max}(V)&=9\\ \Delta E_{T}&=4(6w_{G}-3w_{L})\end{split} \tag{44}\] Note that \(H_{D}\) and \(H_{T}\) for the WMIS do not adhere to our assumptions about the normalization of the ground state energies. We adapt the expressions Eq. (27) and Eq. (30) accordingly by dropping the respective normalization factors in the derivation. For more details on the problem graph \(G_{P}\) we refer to the original publication [14]. The adapted expressions allow us to make predictions of the location of the minimal spectral gap along the annealing path \(s_{min}\). In Figure 6 we show the graph theoretic bounds as well as the predictions by second-order non-degenerate perturbation theory by Amin et al. for various depths of the local minimum \(w_{L}\), together with the exact numerical diagonalization. From this comparison, we observe that the exact results are within the bounds that we define and are very close to the lower one, i.e. the conductance bound. The degree bound can be tightened by taking the graph symmetries of \(G(V)\) into account. We find the local minimum \(V\) of the WMIS instance has nodes of three equivalence classes, as depicted by the black circles, red squares and blue triangles in Figure 7. Figure 7: Induced subgraph \(G(V)\) of the local minimum of the WMIS instance from [14], black dots corresponding to states with one qubit in the \(|0\rangle\)-state in each of the outer cliques defining the local minimum, red squares representing states with one qubit in the \(|0\rangle\)-state in two cliques and two qubits in the \(|0\rangle\)-state in the third and blue triangles corresponding to states with one qubit in the \(|0\rangle\)-state in two cliques and no qubit in the \(|0\rangle\)-state in the third. Figure 6: Location of the minimal spectral gap along the annealing path \(s_{min}\) for a WMIS instance [14] obtained via exact numerical diagonalization (red triangles), by second-order non-degenerate perturbation theory (blue stars) and by the graph theoretic method introduced here (green error bars). Remarkably, the conductance bound matches very well the exact result. The upper bound is fixed using the bound improved by graph symmetries. The black circles are the nodes with the largest degree, hence the previous upper bound \(\lambda_{V}\leq d_{max}=9\). There are 27 black circles, 27 blue triangles and 81 red squares. As there are no connections within equivalence classes in \(G(V)\), \(\langle\xi|A_{G(V)}|\xi\rangle=0\).
Hence, in the basis of the equivalence classes the block with uniform local phases reads \[\begin{split}&\sum_{\xi,\xi^{\prime}}\langle\xi|A_{G(V)}|\xi^{ \prime}\rangle|\xi\rangle\langle\xi^{\prime}|\\ &=\begin{pmatrix}0&2\sqrt{3}&3\\ 2\sqrt{3}&0&0\\ 3&0&0\end{pmatrix}\end{split} \tag{45}\] Applying Gershgorin's circle theorem results in \[\lambda_{V}\leq 2\sqrt{3}+3=6.46... \tag{46}\] Using this improved estimate of the principal eigenvalue of \(G(V)\), we get improved estimates of the location of the minimal spectral gap, as shown in Figure 6. ## V Discussion ### Tightness of the bounds and Cheeger inequalities It is possible to obtain an intuition about the tightness of the bounds on \(E_{V}(s)\) and thus about the bounds on \(s^{*}\). As we find for the average degree \(\langle d(V)\rangle\) of nodes in \(G(V)\) (see Appendix A) \[\langle d(V)\rangle=d-\phi(V) \tag{47}\] we can conclude for regular induced subgraphs \(G(V)\) that the bounds Eq. (17) turn into equalities \[d-\phi(V)=d_{max}(V)=\lambda_{V} \tag{48}\] As a consequence, the bounds Eq. (18) and Eq. (19) on \(E_{V}(s)\) are equal and therefore exact within first-order perturbation. However, \(G(V)\) does not exactly need to be regular for the conductance bound Eq. (19) to become tight. It can be shown that the principal eigenvalue of large Erdos-Renyi graphs \(G(n,p)\) is approximately \(np\)[29], i.e. the average degree \(\langle d\rangle\). Therefore, if \(G(V)\) can be considered a large, sparse random graph, the conductance bound is probably tight. Employing further results from spectral graph theory, the conductance \(\phi\) admits an interesting connection to the spectral gap of \(H_{D}\). Let us define a non-trivial lower bound to the conductance of an undirected graph \(G=(\mathcal{V},\mathcal{E})\) \[\phi_{0}=\min_{\begin{subarray}{c}U\subset\mathcal{V}\\ U\neq\emptyset\\ |U|\leq N/2\end{subarray}}\ \phi(U) \tag{49}\] In words, \(\phi_{0}\) is the smallest conductance over all non-empty subsets of \(\mathcal{V}\) containing at most half of all nodes. This quantity is called the Cheeger constant [30], also known as the conductance of the graph \(G\). The Cheeger constant can be linked to the spectral gap of a \(d\)-regular graph's adjacency matrix \(A_{G}\) via the Cheeger inequalities [30]. Since the \(H_{D}\) considered here are proportional to a \(d\)-regular graph's \(A_{G}\), the inequalities are easily adapted to give \[\frac{1}{2}\frac{\phi_{0}^{2}}{d^{2}}\leq\Delta E_{D}\leq 2\frac{\phi_{0}}{d} \tag{50}\] with \(\Delta E_{D}\) the spectral gap of \(H_{D}\). Assuming that the low-energy sub-spaces of \(H_{T}\) under perturbation with \(H_{D}\) for some problem class fulfill the conditions discussed above and the corrected energy eigenvalues are close to the conductance bound, we can determine the location of the localized-localized transition to be close to the conductance bound \[s^{*}\approx\frac{1-\frac{\phi(V)}{d}}{1-\frac{\phi(V)}{d}+\Delta E_{T}} \tag{51}\] Under these conditions, \(s^{*}\) increases monotonously as \(\phi(V)\) decreases, allowing us to claim \[s^{*}\leq\frac{1-\frac{\phi_{0}}{d}}{1-\frac{\phi_{0}}{d}+\Delta E_{T}} \tag{52}\] This gives a condition for the absence of first-order QPT under the stated assumptions by using Eq. (30) and setting \(s^{*}<s^{\prime}\) to get \[\frac{E_{1}^{T}-\langle E_{T}\rangle}{E_{0}^{T}-\langle E_{T}\rangle}<\frac{ \phi_{0}}{d} \tag{53}\] Using the Cheeger inequalities Eq. (50) the condition Eq. 
(53) can be stated in terms of the spectral gap of \(H_{D}\) \[\frac{E_{1}^{T}-\langle E_{T}\rangle}{E_{0}^{T}-\langle E_{T}\rangle}<\frac{ \Delta E_{D}}{2} \tag{54}\] Given assumption Eq. (51) this will render a sufficient condition for the absence of first-order QPTs for any distribution of the energies \(E_{z}^{T}\) over the nodes of \(G\), i.e. the target eigenstates \(|z\rangle\), that is consistent with the above assumption. The condition Eq. (54) implies that the energy landscape of the problem encoded in \(H_{T}\) needs to be sufficiently flat, except for a pronounced ground state energy. As an example, consider an \(H_{T}\) with \(E_{0}^{T}=E_{1}^{T}-\Delta E_{T}=-1\) and \(\langle E_{T}\rangle=0\). Then Eq. (54) can be rearranged to read \[1-\frac{\Delta E_{D}}{2}<\Delta E_{T} \tag{55}\] For a fixed spectral gap \(\Delta E_{D}\) this implies a lower bound on the spectral gap of \(H_{T}\) and since by assumption we restricted \(E_{0}^{T}=-1\) and \(\langle E_{T}\rangle=0\), the spectrum of \(H_{T}\) has to concentrate close to \(0\). ### Interpretation of the bounds Let us now present a more physical interpretation of the derived bounds. \(d_{max}(V)\) provides a notion of the maximum number of degrees of freedom involved in the local minimum, e.g. in the case of a transverse field \(H_{D}\) as in Eq. (6), \(d_{max}(V)\) describes the number of floppy qubits involved in the minimum. Floppy qubits are qubits that do not significantly impact the energy of the total system, whether they are in the \(|0\rangle\)- or \(|1\rangle\)-state. Therefore they cause wide local minima and are known to contribute to perturbative anti-crossings [12; 28]. However, the notion of width of a local minimum in higher dimensions becomes an increasingly poorly defined concept. Our lower bound using the conductance provides a notion of width, as the conductance can be thought of as some version of surface-to-volume ratio of the local minimum, where the volume corresponds to the number of nodes in the local minimum, while the surface corresponds to the edges leaving the local minimum. ## VI Conclusion First-order QPTs are known to lead to exponential runtime in AQC algorithms, hampering the chances of getting any quantum advantage in the adiabatic computation. Understanding the causation of such phenomena is therefore key in the design of potential strategies to allow for its mitigation. In this work we examine the conditions linked to the occurrence of first-order QPTs caused by localization in AQC. We explicitly show how the use of degenerate perturbation theory enables the application of tools from graph theory in order to analyze this phenomenon in depth. Crucially, this formalism allows us to derive bounds and conditions for the occurrence of QPTs as well as its position along the annealing path. We show how such inequalities are linked to two properties of the subgraph containing the local minimum: its maximum degree \(d_{max}(V)\) and its conductance \(\phi(V)\). We numerically test the accuracy of these bounds with a toy-model problem as well as a real optimization problem (WMIS) and find that the lower bound seems to be closer to the exact solution obtained through direct diagonalization of the Hamiltonian. Based on this observation, we provide two scenarios when we can expect the conductance bound to be exact up to first order perturbation theory: namely when the subgraph induced by the degenerate subspace \(V\), \(G(V)\), is regular or can be assumed to a large, sparse Erdos-Renyi graph. 
Additionally we show how knowledge on the symmetries of the induced subgraph \(G(V)\) can be used to improve the upper bound on the principal eigenvalue and further work is required to understand whether this contributes to a tighter conductance bound. Upon establishing the basis for this formalism, future work will be focused on its application in the study of catalysts in the Hamiltonian as a strategy to enlarge the minimum energy gap and thus improve the performance of AQC algorithms. We believe that the use of our graph-based approach is not only convenient in the study of QPTs in the context of AQC, but also of a wide range of many-body phenomena present in analog-based quantum computation.
2302.14486
TrainSim: A Railway Simulation Framework for LiDAR and Camera Dataset Generation
The railway industry is searching for new ways to automate a number of complex train functions, such as object detection, track discrimination, and accurate train positioning, which require the artificial perception of the railway environment through different types of sensors, including cameras, LiDARs, wheel encoders, and inertial measurement units. A promising approach for processing such sensory data is the use of deep learning models, which proved to achieve excellent performance in other application domains, as robotics and self-driving cars. However, testing new algorithms and solutions requires the availability of a large amount of labeled data, acquired in different scenarios and operating conditions, which are difficult to obtain in a real railway setting due to strict regulations and practical constraints in accessing the trackside infrastructure and equipping a train with the required sensors. To address such difficulties, this paper presents a visual simulation framework able to generate realistic railway scenarios in a virtual environment and automatically produce inertial data and labeled datasets from emulated LiDARs and cameras useful for training deep neural networks or testing innovative algorithms. A set of experimental results are reported to show the effectiveness of the proposed approach.
Gianluca D'Amico, Mauro Marinoni, Federico Nesti, Giulio Rossolini, Giorgio Buttazzo, Salvatore Sabina, Gianluigi Lauro
2023-02-28T11:00:13Z
http://arxiv.org/abs/2302.14486v1
# TrainSim: A Railway Simulation Framework for LiDAR and Camera Dataset Generation ###### Abstract The railway industry is searching for new ways to automate a number of complex train functions, such as object detection, track discrimination, and accurate train positioning, which require the artificial perception of the railway environment through different types of sensors, including cameras, LiDARs, wheel encoders, and inertial measurement units. A promising approach for processing such sensory data is the use of deep learning models, which proved to achieve excellent performance in other application domains, as robotics and self-driving cars. However, testing new algorithms and solutions requires the availability of a large amount of labeled data, acquired in different scenarios and operating conditions, which are difficult to obtain in a real railway setting due to strict regulations and practical constraints in accessing the trackside infrastructure and equipping a train with the required sensors. To address such difficulties, this paper presents a visual simulation framework able to generate realistic railway scenarios in a virtual environment and automatically produce inertial data and labeled datasets from emulated LiDARs and cameras useful for training deep neural networks or testing innovative algorithms. A set of experimental results are reported to show the effectiveness of the proposed approach. Railway simulator, Dataset generation, LiDAR simulation, LiDAR modeling. ## I Introduction Increasing the efficiency and the safety of the railway network requires the automation of several complex functions, as signal recognition, object detection, track discrimination, as well as an accurate train positioning. Most of these functions imply the execution of sophisticated perceptual tasks that must process and integrate data in real time from different heterogeneous sensors, as cameras, LiDARs, inertial measurement units (IMUs), wheel encoders, the global navigation satellite system (GNSS) receivers, and the transponders placed along the track line (balises). Current train position functions are mainly based on balises, which include two components: a transponder on the track and a balise reader installed on the on-board train subsystem. When the reader detects a balise transponder on the track, the train position function is able to estimate the absolute position of the train. Between consecutive balises, the relative position of the train is estimated via odometry by integrating the rotation speed measured by wheel encoders. Due to the integration operation and slip and slide phenomena that can occur between the wheels and the rails, the estimated train position is normally affected by a large drift, which causes the position error to increase with the travelled distance. Moreover, balises have a high deployment and maintenance cost and are subject to tampering. Railway stakeholders are aiming at increasing the capacity of the line and reducing setup phases to provide cost-effective solutions capable of improving the exploitation of existing infrastructures, as demonstrated by the activities of the ERTMS User Group [1]. However, reaching these goals requires improvements in the track discrimination function and a more precise train localization function in terms of odometry. 
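The drift of wheel-based odometry mentioned above can be illustrated with a toy dead-reckoning loop: integrating a speed measurement that carries a small bias makes the position error grow with the travelled distance. The bias and noise figures below are arbitrary illustrative values, not measurements from a real encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 0.1, 3000                    # 300 s of travel
true_speed = 25.0                        # m/s
bias, noise = 0.05, 0.2                  # assumed encoder bias and noise (m/s)

true_pos, est_pos = 0.0, 0.0
for _ in range(steps):
    measured = true_speed + bias + noise * rng.standard_normal()
    true_pos += true_speed * dt
    est_pos += measured * dt             # dead reckoning: integrate the measurement

print(f"travelled {true_pos / 1000:.1f} km, odometry error {est_pos - true_pos:.1f} m")
```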
In the last years, to overcome the limitations of odometry algorithms based on balises and wheel encoders, different navigation solutions have been proposed to integrate GNSS data with inertial measurements through a Kalman filter [2]. Unfortunately, however, the GNSS signal is not always available (e.g., in tunnels and canyons) and is affected by multi-path phenomena that reduce the accuracy of its positioning. On the other hand, inertial data need to be integrated to estimate the position, and the integration process leads to increasing drifts over the time. Moreover, the high vibrations present in a railway environment have to be carefully addressed to limit and bound the position errors. To overcome such problems, camera and LiDAR odometry algorithms have been integrated in navigation systems to reduce the localization error in different scenarios [3]. Cameras and LiDAR sensors are also essential for other tasks related to signal recognition, object detection, and track discrimination, which consists of identifying the track where the train is running with respect to the other tracks present on the line. In particular, LiDARs are preferred to cameras, since the produced data are less affected by illumination conditions (due to the active nature of the LiDAR sensor) and because they directly provide the distances between the objects and the sensor in the form of point cloud. Given the great performance of machine learning algorithms in several perceptual tasks, deep neural networks are the first candidates to be used for all the tasks mentioned above. More recently, several neural models have also been proposed to process the point cloud generated by a LiDAR [4, 5], as well as to integrate camera and LiDAR data [6, 7]. Unlike the automotive domain, where it is possible to find several studies and open datasets (e.g., Kitti [8]) among the scientific community, the major obstacle in using deep learning algorithms in a railway environment is the lack of available datasets for training and testing new models. Also, planning an acquisition campaign is much more complex than in the automotive scenario, due to strict regulations and practical constraints in accessing the trackside infrastructure and equipping a train with the required sensors. To address this issue, this paper proposes TrainSim, a simulation framework that generates synthetic datasets for training and testing deep neural networks, processing images, point clouds, and inertial data in a variety of railway environments and operating conditions. In particular, the paper provides the following contributions: 1. It presents a highly configurable and extendable environment generator to create a wide range of realistic railway scenarios controlled by a set of user-defined parameters. 2. It proposes a method for generating arbitrarily large labeled datasets from such virtual environments using a set of simulated sensors (LiDARs, cameras, and IMUs) that can produce data similar to their physical counterparts. 3. It provides a method for exporting the obtained datasets in a standard format for training deep neural networks or streaming them to ROS [9] for real-time visualization. At present, the proposed simulation framework, TrainSim, can generate the following datasets: * RGB images: taken from one or multiple cameras placed on the front vehicle of the train, in the positions specified by the user. 
* Depth images: taken from one or multiple cameras placed in user-defined positions, where each pixel value encodes the distance between the camera and the object point corresponding to the pixel. They can be used as ground truth data for depth estimation algorithms. * Segmented images: taken from one or more cameras placed in the same position of the RGB or Depth cameras, where each pixel value encodes the class of the object corresponding to the pixel. They are used as labels for semantic segmentation and other tasks. * Point Clouds: taken from a LiDAR sensor placed on the locomotive in a position specified by the user. A point cloud includes a set of 3D points acquired according to the scanning pattern of the user-defined LiDAR configuration. * Segmented Point Clouds: taken from the same LiDAR. Each point is associated with a label that identifies the object hit by the LiDAR beam. They are used as ground truth data for the point cloud segmentation task. * Vehicle Pose, Speed and Acceleration dataset: computed according to the kinematics of the train trajectory. It contains the 6 DOF (i.e., Degrees Of Freedom) vehicle pose, along with speed and acceleration, allowing generating the ground truth trajectory for the localization algorithms. * IMU dataset: 6-axes accelerometric and gyroscopic data computed with user-defined IMU models from the ground-truth acceleration and angular velocity data. Figure 1 shows an example of images produced by the RGB and depth cameras, along with the segmented image from the same simulated scene. Figure 2 shows two point clouds of the same scene produced by the virtual LiDAR: in Figure 1(a) each color encodes a different object class, whereas in Figure 1(b) the color encodes the back-scattered intensity. To summarize, the TrainSim framework aims at providing datasets and ground truth data for the following tasks: Visual Odometry, LiDAR Odometry, Image Segmentation, Point Cloud Segmentation, Image Depth Estimation, and Inertial navigation. The generation of datasets for tasks like 2D and 3D object detection is in progress and will be part of future work. The rest of the paper is organized as follows. Section II discusses the related works; Section III presents TrainSim; Section IV reports some experimental results; and Section V states the conclusions and future work. ## II Related Works The design of proper datasets for training and testing purposes is crucial for developing and verifying effective perception algorithms. The tools developed for the automotive domain typically use benchmarks that provide several visual frames captured in different environments, such as the KITTI benchmark [2] and its semantic segmentation variant [10], or the Cityscapes dataset [11]. Most of such datasets are focused on urban scenarios, and the vast amount of images required Fig. 1: Examples of images from the same scene produced by the simulator. Left: RGB camera; center: depth camera (each pixel value encodes the distance between the object represented by the pixel and the camera); right: segmented image (each pixel color encodes a different object class). Fig. 2: (a) Example of a segmented point cloud generated by TrainSim, where each color encodes a different object class; (b) the same point cloud where each color encodes the back-scattered intensity value computed by the LiDAR model. for training is typically obtained by data augmentation [12], mixing real and virtual images. 
The lack of open datasets in the railway domain represents a severe obstacle to testing and verifying novel algorithms. Zendel et al. [13] pointed out that, excluding the thousands of labeled images taken from street-view or spectator-view, image datasets of railway environments taken directly from the train are nearly nonexistent. Many solutions presented in the literature for the railway domain are tested and verified on private datasets that only include a few hundred data samples for camera and LiDAR frames, as declared by the authors [13, 14, 3]. Simulators offer the possibility to test perception and control algorithms in a variety of situations that would be hard to reproduce in the real world. For this reason, several synthetic generation tools have been presented in the last years to overcome the lack of real datasets, such as CARLA [15] for automotive simulation and _AirSim_[16] for unmanned aerial vehicles (UAV), both based on the Unreal Engine 4 (UE4) [17] graphic engine. Another tool is AutonoVi-Sim [18], which supplies LiDAR frames gathered in a virtual world. Unfortunately, most of the existing simulators have been developed for self-driving cars and drones, and there is a lack of tools for the railway domain that support the integration of LiDAR, camera, IMU, and GNSS technologies. This work presents TrainSim, a train simulation framework for generating realistic datasets of images, point clouds, and inertial data to test and validate novel algorithms for tasks such as inertial navigation, object detection, and semantic segmentation in the railway domain. In particular, the camera model is naturally derived from the graphic engine frame, producing RGB, semantic, and depth images directly from the graphic environment of UE4. On the other hand, the emulation of the LiDAR sensor exploits the ray-casting system of UE4, which allows the detection of objects between two endpoints, making the LiDAR emulation straightforward. Unlike CARLA and AirSim, the proposed approach also generates the backscattered intensity of the LiDAR sensor by exploiting a simplified version of the Lambertian-Beckmann model [19] that describes how different surfaces reflect light rays. More details on the image and point cloud generation models are described in Section III-F and Section III-E, respectively. ## III Simulation framework The architecture of TrainSim is depicted in Figure 3 and is composed of three main modules: the _Environment Generation Tool_ (EGT), which manages the creation of the rail-track surrounding environment, the _Environment Manager_ (EM), which instantiates the created environment in Unreal Engine, and the _Simulation Manager_ (SM), which simulates the train movement, emulates the sensors' working principles, and generates the various datasets. The environment generation is based on the GeoGen project of Matej Zabsky [20], which is a tool for creating realistic terrains with desired height maps. The virtual environment is generated starting from the train route specified in a file, which contains set points that are either sampled from a real trajectory or synthetically generated by a separate trajectory generator, described in Section III-H. In the following, we refer to a _track_ as the physical structure (a pair of rails) where a train can run, and to the _railway environment_ as the ensemble of tracks placed in the environment.
A track is defined as a sequence of 3D waypoints (referred to the track centerline), called _track points_ and is divided into _blocks_, where each block identifies a specific type of railway structure, namely a straight line, a curve, a station, a tunnel, or a bridge. The _main track_, also called the _route_, is the one traveled by the train, whereas the remaining tracks are referred to as _auxiliary_. Then, a _trajectory_ refers to a specific train journey on a route (i.e., the sequence of positions, velocities, and accelerations of rear and front bogies of the front vehicle sampled at a given frequency). The Environment Generation Tool is responsible for managing the creation of the railway environment and consists of three main modules: * The _Multi-track generator_ creates a number of _auxiliary tracks_ that run parallel to the main track, but can also join it or depart from it with different given rules. It produces a _Railroad_.json file that contains the list of tracks (the main and the auxiliary ones). Each track is divided into blocks (e.g., straight, curve, bridge, etc.) and it is associated with a 3D point sequence and other information needed in the generation of the virtual environment. Refers to Section III-B for further details. * _Landscape Generator_. It creates the area surrounding the tracks, including the terrain and the mountains. It produces a height-map (i.e., a grid of vertices) in which each vertex is associated with a defined height that derives from the elevation of the tracks. Section III-C describes the height-map generation in more detail. * _Object Position Generator_. It generates random spawn points near the tracks, where different types of objects can be placed (e.g., trees, rocks, buildings). Thanks to its modularity, the placement algorithm can easily be extended to add other types of objects to the scene. In particular, positions are selected taking object size into account to avoid reciprocal overlapping. The outputs of the modules are sent to the Environment Manager, responsible for creating the virtual environment Fig. 3: Architecture of the TrainSim. within Unreal Engine by placing the landscapes, the environmental objects, and the railway building structures into the simulated world. It exploits the 3D object models (meshes, materials and textures) of the _TrainTemplate_ plugin [21], which provides high-fidelity models for railways objects, vehicles, stations, tunnels, and bridges. The meshes for other objects (e.g., trees and rocks) are randomly chosen from a set of different meshes, to diversify the simulated environment both for images and point clouds. Furthermore, ballast and landscape materials can be randomly drawn at the start of each simulation, avoiding reusing the same texture in each generated dataset. The Simulation Manager takes as input the train trajectory and a set of files (described in Section III-A) containing the configuration parameters needed to emulate the working principles of specific sensors. It tightly interacts with Unreal Engine sending the sequence of train positions and receiving the data produced by the simulated sensors. It includes three main modules: * _Movement System_. It manages the train movement, advancing the train in each point of the specified trajectory. * _Sensors Simulator_. It consists of a set of blocks, each responsible for emulating a specific sensor. * _Dataset Export_. It exports the generated dataset saving it on the disk. 
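To make the track description that flows between these modules more concrete, the following minimal Python sketch shows one possible way to represent the track/block structure that a file such as _Railroad_.json could encode. It is only an illustration under our own assumptions: the class and field names (`Block`, `Track`, `Railroad`, `kind`, `points`) are placeholders and not the actual TrainSim schema.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List, Tuple

# Illustrative data model only: field names are placeholders, not the TrainSim schema.
Point3D = Tuple[float, float, float]  # local NED coordinates (north, east, down)

@dataclass
class Block:
    kind: str                  # e.g. "straight", "curve", "station", "tunnel", "bridge"
    points: List[Point3D]      # 3D track points (centerline) belonging to this block

@dataclass
class Track:
    name: str                  # "main" or an auxiliary-track identifier
    blocks: List[Block] = field(default_factory=list)

@dataclass
class Railroad:
    tracks: List[Track] = field(default_factory=list)

    def dump(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

# Example: a main track with one straight block followed by a curve.
main = Track(name="main", blocks=[
    Block(kind="straight", points=[(0.0, 0.0, 0.0), (50.0, 0.0, 0.0)]),
    Block(kind="curve",    points=[(50.0, 0.0, 0.0), (95.0, 12.0, 0.0)]),
])
Railroad(tracks=[main]).dump("Railroad_example.json")
```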
The generated datasets can also be transferred in real-time to a ROS Bridge application for visualization or test purposes by exploiting the ROS communication system. The following sections describe the details of the main architecture components, whereas more details about the remaining modules (e.g., Object Position Generator, Environment Manager) are described in the supplementary material. ### _Input files_ The train route is specified in a file as a sequence of 3D points \(\mathcal{P}=\{P_{k}\,|\,k=1,\dots,N\}\) in local north-east-down (NED) coordinates [22]. This sequence is used to generate the corresponding main track, which is divided into construction blocks of different types (e.g., straight, curve, station, bridge, etc.). Each block type has specific characteristics that constrain the construction of the relative way-point sequence and the velocity profile (e.g., curve blocks have a minimum curvature radius and constrain the maximum travelling speed of the train). An example of a railroad section divided into blocks is illustrated in Figure 4. The train trajectory file specifies the position, the velocity, and the acceleration of both front and rear of the vehicle at each timestamp. The trajectory can either be sampled from real IMU and GNSS sensors, or it can be synthetically generated by a proper tool, briefly described in Section III-H. Each sensor configuration file provides information on a specific sensor, describing its type, features, parameters, and noise models. These data allow the Sensors Simulator to produce a realistic output by applying the noise models to the data acquired in UE4. ### _Multi-tracks generation_ The Multi-track generator randomly creates a number of additional tracks next to the main track to populate the railway environment. This can be useful to test the performance of track discrimination algorithms. The user can also decide to duplicate the point sequence of the main track to have double track instead of a single one. The duplicated track is generated to the right of the main track, since the train hand of drive is on the left at a fixed inter-track distance defined by the user. To generate additional tracks, the module parses the list of railroad blocks of the main track to decide where to begin or end auxiliary tracks, following the constraints imposed by each track block. Figure 4 shows an example of a railroad blocks division, given as input to the Multi-tracks generator. An auxiliary track has three parts: an entering part, a parallel part, and an outgoing part. The entering part is composed of a straight dead-end block, and a curve that joins it to block parallel to the main track. If the railroad block is a curve, the entering part can be composed only of a straight block, as depicted in Figure 5. Some of the building rules are derived from the railway standards [23, 24], such as the inter-track distance or the minimum curve radius, whereas others need to be user-defined, like the number of auxiliary tracks. Once the entering auxiliary block is generated, the iteration proceeds to the next block of the main track, where the parallel parts of the auxiliary track are created at a distance equal to the inter-track distance. When the auxiliary track ends, the outgoing part is generated from the parallel one, creating a curve block followed by a death-end straight block. As for the entering part, if the Fig. 4: Example of a railroad section divided into blocks. Fig. 
5: Entering part examples with 2 parallel tracks, a single line (blue or red) represents a single track): (a) the auxiliary track is generated in correspondences of a straight block, hence it is composed of a straight and a curve blocks; (b) the auxiliary track is generated in correspondence of a curve block, hence it can be composed of a straight block only. corresponding railroad block is a curve, the outgoing part can be composed of a single straight track line. The creation of auxiliary tracks follows pseudo-random decisions based on a user-defined probability. In this way, the user can manage the auxiliary tracks generation, creating different scenarios from the same train route. ### _Landscape Generation_ Once the auxiliary tracks have been generated, the landscape generation module creates the terrain of the virtual environment, containing the ground, mountains, and valleys. In UE4, a landscape is defined from a height map, which is a matrix \(M\) of vertices, referred to as _main map_, in which each vertex \(v_{i,j}\) has its own height value \(M_{i,j}\). The main map has a rectangular size that includes all track points. The final height map is produced by GeoGen [20], which is a library that allows manipulating height maps using different operations, such as noising, scalar multiplication, and composition. The landscape is partitioned into three different sections with respect to its distance from the outermost tracks, as illustrated in Figure 6. In particular, if the minimum distance \(d\) of vertex \(v_{i,j}\) to the track is less than or equal to \(d_{near}\), its height is set equal to the one of the most immediate track point. If the distance to the track is greater than \(d_{far}\), the height is sampled from a noise function \(N_{i,j}\), using the method proposed in [20]. Finally, if \(d\) is between the two thresholds, the assigned height grows linearly between the two values according to the distance function \(f(d)\) illustrated in Figure 7. The two distance bounds \(d_{near}\) and \(d_{far}\) can be set by the user. Please note that \(d_{near}\) has a minimum value imposed by the railway construction standards [23, 24], namely \(1.5~{}m\). This solution allows easily placing a number of environmental objects (e.g., trees) around the railway structure. More specifically, given a vertex \(v_{i,j}\) in the main map, let \(P_{n}(v_{i,j})\) be the track point with the minimum distance to \(v_{i,j}\) and let \(d_{i,j}\) such a distance. For the sake of clarity, the point coordinates are expressed in the East-North-Up (ENU) reference system, and \(P_{n}^{U}\) represents the Up component (i.e., the height). Then, the height value \(M_{i,j}\) associated with vertex \(v_{i,j}\) is computed as \[M_{i,j}=P_{n}^{U}(v_{i,j})*[1-f(d_{i,j})]+N_{i,j}*f(d_{i,j}). \tag{1}\] Furthermore, the sub-module generates a valley patch (i.e., a sub-matrix of vertices) that is superimposed to the main map every time a railway bridge is present in the trajectory, so allowing to possibly create a river in that specific position. In station blocks, the value of \(d_{near}\) is increased to accommodate buildings and other structures. Once the whole main map is generated, it is divided into sub-maps, so that the user can decide to save only those sub-matrices near the track, in order to save memory. An example of sub-matrices is depicted in Figure 8. In the proposed implementation, a sub-matrix has \(1009\times 1009\) vertices. 
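Before moving on, the height blending of Eq. (1) and of the distance function of Figure 7 can be summarized with a small numerical sketch. This is a simplified stand-in, assuming a brute-force nearest-track-point search and a random array in place of the GeoGen noise map; all function and variable names are ours.

```python
import numpy as np

def f(d, d_near, d_far):
    """Piecewise-linear weight of Fig. 7: 0 up to d_near, 1 beyond d_far."""
    return np.clip((d - d_near) / (d_far - d_near), 0.0, 1.0)

def blend_heightmap(track_pts, noise, grid_e, grid_n, d_near=1.5, d_far=60.0):
    """Blend track elevation and a noise height map as in Eq. (1).
    track_pts: (K, 3) array of track points in ENU coordinates (east, north, up).
    noise:     (Ni, Nj) noise height map N_ij (placeholder for the GeoGen output).
    grid_e, grid_n: (Ni, Nj) east/north coordinates of every vertex v_ij.
    """
    M = np.empty_like(noise)
    for i in range(noise.shape[0]):
        for j in range(noise.shape[1]):
            # horizontal distance from vertex v_ij to every track point
            d2 = (track_pts[:, 0] - grid_e[i, j])**2 + (track_pts[:, 1] - grid_n[i, j])**2
            n = np.argmin(d2)                       # nearest track point P_n(v_ij)
            w = f(np.sqrt(d2[n]), d_near, d_far)
            M[i, j] = track_pts[n, 2] * (1.0 - w) + noise[i, j] * w
    return M

# Tiny usage example on a 4 x 4 grid with a straight track along the east axis.
e, n = np.meshgrid(np.arange(4.0), np.arange(4.0), indexing="ij")
track = np.stack([np.linspace(0, 3, 10), np.zeros(10), np.full(10, 2.0)], axis=1)
print(blend_heightmap(track, np.random.rand(4, 4) * 5.0, e, n, d_near=0.5, d_far=2.0))
```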
A scale factor on the E-N axes equal to \(100\) is required since the UE4 measurement unit is expressed in \(cm\). With this setting, the distance between two vertices in the horizontal and vertical direction results to be \(1~{}m\). Figure 9 shows an example of a single landscape element generated with the proposed procedure. Fig. 8: Example of sub-matrices (red squares) that can be selected based on their distance from the track (blue line). Fig. 6: Example showing how the terrain surrounding the track is partitioned in three areas: the area denoted as _Same_ has a height equal to the closest track point; the area denoted as _Noise_ has a noisy height; while the area in the middle (_Smooth_) smoothly changes the height between the two values. Fig. 7: Distance function used to set the height of the vertices located in the smooth region at a distance from the track between \(d_{near}\) and \(d_{far}\). Fig. 9: Example of a UE4 landscape element generated from a height-map. ### _Movement System_ The movement system is responsible for updating the position of the train along the route, following the train trajectory specified in the corresponding input file. Each trajectory point includes the position, speed, and acceleration of the front and rear bogies, as well as the corresponding timestamp. To reproduce the train motion according to the given trajectory, the positions of the bogies have to be computed at each frame by interpolating the position of the two consecutive points in the trajectory that are before and after the tick absolute time. This solution, however, gives rise to two distinct problems: 1. The interpolation introduces an error on the virtual position of the train that increases with the train speed (the higher the speed, the higher the distance between trajectory points). 2. If graphic frames are visualised at a time that is different from the timestamps associated with the trajectory points, then the data produced by virtual IMUs and GNSS receiver are not synchronized with those produced by visual sensors (cameras and LiDARs), hence they are not consistent. To address these problems, the _real time_ associated with the train trajectory has been decoupled from the _simulated time_ at which UE4 produces a visual frame. While the difference between the time stamps of any two consecutive trajectory points is constant and equal to the sampling period \(T_{S}=t_{k}-t_{k-1}\), the time elapsed between two consecutive frames can vary depending on the machine running the graphic engine. Hence, each frame produced by the graphic engine must be associated with a trajectory point and its corresponding timestamp, so ignoring the simulation time. Figure 10 compares the timelines associated with the trajectory, the simulator frames, and the acquisitions from a LiDAR sensor, visualizing the time stamps associated with each frame. In the example shown in Figure 10, the LiDAR is acquired with a period that is twice the one used for the trajectory. Black dashed arrows show the timestamps associated with the graphic frames, while brown dashed arrows show the graphic frames associated with the LiDAR acquisitions. In the current implementation, the acquisition period of a visual sensor must be a multiple of the sampling time of the trajectory points. ### _LiDAR Sensor Model_ Light Detection And Ranging (LiDAR) sensors are active devices that emit light rays and compute the time needed for the rays to be backscattered to the sensor receivers, or the phase change of the backscattered ray. 
If the emitted rays are backscattered, the distance between the LiDAR and the object hit by the ray can be computed from the travelling time and the speed of light, or from the phase difference between the emitted ray and the backscattered one. Namely, a LiDAR sensor emits several laser beams (or a flash light) and uses a matrix of receivers to create a depth map of the surrounding environment, referred to as a _point cloud_. Common LiDARs emit a vertical array of laser beams that rotates around the vertical axis to acquire the surrounding scene. The angular inclination of each laser beam defines the vertical resolution of the sensor, whereas the horizontal angular step at which the rays are emitted defines the horizontal resolution. The json input file for a LiDAR specifies the horizontal resolution, the horizontal and vertical field of view (**FOV**), the number of beams (from which it is possible to derive the vertical resolution knowing the vertical FOV), the range of the laser beams, and the frame rate. The LiDAR working principle is emulated by exploiting the ray tracing system of UE4. In particular, the SingleLineTraceByChannel function of UE4 generates a ray from a starting point to an ending point given as inputs. If there are objects along the ray, this function returns the closest 3D point in which the ray intersects the first object surface. It also returns a reference to the object hit, from which it is possible to retrieve other object features stored in the system. Hence, the relative position of the hit point is computed by subtracting the absolute position of the starting point. A single LiDAR frame acquisition is implemented as two nested loops, in which the outer loop iterates over the horizontal angle, and the inner loop iterates along the vertical angles. For each ray, the starting point is the central position of the sensor, while the ending point is the point lying on a beam ray at a distance corresponding to the sensor range. The sensor is implemented as a UE4 actor component that is positioned on the front vehicle of the train and follows its position at each simulator tick. A point cloud is produced with the specified period, which must be a multiple of the period at which the trajectory points are generated. For example, if the trajectory period is equal to 10 \(ms\) and the LiDAR period is equal to 100 \(ms\), a point cloud is generated every ten frames produced by the graphic engine. Figure 11 depicts a sample point cloud captured in the simulated environment. Note that, in the real world, if the LiDAR is moving, the 3D points belonging to a full scan refer to different LiDAR positions, whereas, in the simulated framework, since a graphic frame is a state of the environment frozen in time, all the acquired 3D points refer to the same LiDAR position. To compensate for such a mis-alignment due to motion, most Fig. 10: Timelines corresponding to the train trajectory, the simulator frames, and LiDAR acquisitions. Black dashed arrows show the timestamps associated with the graphic frames, while brown dashed arrows show the graphic frames associated with the LiDAR acquisitions. of LiDAR sensors incorporate an IMU that automatically compensates for the LiDAR movements during a scan. For this reason, the virtual LiDAR implemented in the simulator does not take this phenomenon into account. To make the data more realistic, the distance associated with each LiDAR beam is perturbed by adding Gaussian noise with zero mean and a given variance.
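To illustrate the scanning pattern just described, the following minimal Python sketch reproduces the two nested loops and the range-noise perturbation outside of UE4. The `ray_cast` callback stands in for `SingleLineTraceByChannel`, and every function and parameter name here is our own placeholder rather than TrainSim's actual implementation.

```python
import numpy as np

def simulate_lidar_frame(ray_cast, origin, h_fov=(-180.0, 180.0), h_res=0.2,
                         v_fov=(-15.0, 15.0), n_beams=16, max_range=100.0,
                         range_noise_std=0.015, rng=None):
    """Emulate one LiDAR frame with two nested loops (horizontal x vertical angles).

    ray_cast(start, end) stands in for UE4's SingleLineTraceByChannel: it should
    return (hit_point, object_id) or None when nothing lies between the endpoints.
    """
    rng = rng or np.random.default_rng()
    points, labels = [], []
    v_angles = np.linspace(v_fov[0], v_fov[1], n_beams)          # beam inclinations
    for yaw in np.arange(h_fov[0], h_fov[1], h_res):             # outer loop: horizontal
        for pitch in v_angles:                                   # inner loop: vertical
            y, p = np.radians(yaw), np.radians(pitch)
            direction = np.array([np.cos(p) * np.cos(y), np.cos(p) * np.sin(y), np.sin(p)])
            hit = ray_cast(origin, origin + max_range * direction)
            if hit is None:
                continue
            hit_point, obj_id = hit
            rel = hit_point - origin
            dist = np.linalg.norm(rel) + rng.normal(0.0, range_noise_std)  # range noise
            points.append(origin + dist * rel / np.linalg.norm(rel))
            labels.append(obj_id)
    return np.array(points), np.array(labels)

# Usage with a toy ray caster: a single vertical wall at x = 10 m labeled "wall".
def toy_ray_cast(start, end):
    if end[0] <= 10.0 or start[0] >= 10.0:
        return None
    t = (10.0 - start[0]) / (end[0] - start[0])
    return start + t * (end - start), "wall"

cloud, ids = simulate_lidar_frame(toy_ray_cast, np.zeros(3), h_fov=(-60.0, 60.0))
print(cloud.shape, set(ids))
```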
The object identifier returned by the SingleLineTraceByChannel function is used to assign each point a label, thus allowing the creation of point cloud datasets for semantic segmentation. Note that each object in the virtual environment is associated with an identifier that specifies the class of the object and the instance (e.g., Rock_0). Thus, an instance segmentation of the point clouds can be obtained as long as each mesh placed in the environment is mapped within the desired set of semantic classes. Real LiDAR sensors also provide the intensity of each backscattered ray, in terms of light energy, which can be used to discern objects in the environment. As shown in Figure 12, a fraction of the incident ray is reflected in the opposite direction (red arrow) with an angle equal to the incidence angle, but opposite with respect to the normal to the object surface, whereas another fraction is diffused in all directions (green arrows). The backscattered ray detected by the LiDAR is the diffused ray reflected back in the same direction of the incident ray. As proposed by Tian et al. [19], we used the Lambertian-Beckmann model to compute the backscattered intensity as a function of three factors: 1. The distance between the sensor and the object; 2. The incident angle between the emitted ray and the normal to the object surface; 3. The material of the object. In particular, each object is associated with a diffusive and reflective coefficient and a maximum incidence angle, over which the object results to be completely reflective. The model can also be extended to consider other parameters of the environment, such as air density and humidity, which, at present, are not taken into account. To reproduce the backscattering effect in the LiDAR simulation, the incident angle and the material of the object are needed. The object reference gathered by the SingleLineTraceByChannel function provides the normal to the object surface, which is used to compute the ray incident angle. Furthermore, a mapping between each object class in the virtual environment and the material parameters needed in the Lambertian-Beckmann model is defined and exploited to compute the response of each single object to the LiDAR rays. In this work, the parameters of the different materials are taken from the study presented by Tian et al. [19]. A fine tuning in the railway environment would lead to a higher realism of the backscattered intensity of the simulated point cloud. Figure 2a shows an example of a point cloud where each point is colored with the class of the corresponding object, and Figure 2b shows the same point cloud where each color encodes the backscattered intensity value normalized as an integer in the range \([0,255]\). ### _Camera Sensor Model_ Unreal Engine 4 allows the user to create cameras to gather images of the virtual environment from user-defined locations. In particular, a camera can be placed on the front-top of the locomotive to capture an image at each tick of the graphic engine. As for the LiDAR sensor, the camera capture period can be specified as a multiple of the one used for the trajectory. The user can also define a number of parameters of the camera, such as shutter speed, aperture, ISO, and resolution. Additionally, UE4 allows generating a depth image of the scene, where each pixel value encodes the distance between the camera and the object represented by the pixel.
The distance is normalized into a range \([0,depth_{max}]\), where all the values above \(depth_{max}\) are cut off and set equal to \(depth_{max}\). The value of \(depth_{max}\) is set to \(100~{}m\) by default and can be redefined by the user. UE4 also allows defining post-processing routines that exploit custom stencils to create a segmented version of an RGB image. Each type of object placed in the virtual environment is assigned a specific custom stencil value used to distinguish each object in a segmented image. Figure 1 shows an RGB image (left image) with the corresponding depth image (middle image) taken from the same scene, and along with the corresponding segmented one (right image), where each object class is identified by a different color. At last, TrainSim allows defining different ambient aspects and weather conditions used to test visual based-algorithms in a wide range of operating conditions. In particular, it is possible to define: * the Sun position, for creating images with different shadows and light intensity. The framework defines three different time slots, morning, evening, and night, as shown in Figure 13; Fig. 11: Example of a point cloud captured from the simulated environment, where each point is colored according to the height value of the point itself. Fig. 12: Object (blue rectangle) response to an emitted light ray (yellow arrow). Part of the light energy is diffused in all directions (green arrows), while an other part (red arrow) is reflected in the opposite direction of the emitted ray. * the fog, with a desired intensity, by inserting the _ExponentialHeightFog_ UE4 actor in the environment. ### _IMU model_ The proposed simulation framework includes a model of a 9-axis inertial measurement unit (IMU) with accelerometers, gyroscopes, and magnetometers, for estimating the current position, velocity, and orientation of the train by means of inertial navigation algorithms [22]. To reproduce realistic data with high fidelity, the IMU model allows the user to specify noise properties, calibrated bias, and other parameters that affect the quality of the measures. The simulated measured quantity \(\widetilde{a}\) in the IMU reference frame is computed from the ground-truth quantity \(a\) in the NED frame by means of the accelerometer model \(\mathcal{A}\) to obtain \(\widetilde{a}=\mathcal{A}(a,\theta)\), where \(\theta\) is the orientation of the IMU, necessary to return readings in the IMU frame. Function \(\mathcal{A}\) depends on the following factors: 1. The gravitational acceleration \(g\), added along the Down component of \(a\) and converted to the IMU frame using the rotation matrix \(C_{NED}^{IMU}(\theta)\) computed from the orientation \(\theta\). 2. The misalignment matrix \(Mis\) (due to geometrical imperfections of the orientation of the individual accelerometer axes) and a constant calibrated bias \(\epsilon\), used to alter the ground-truth acceleration. 3. A drift term \(\delta\), which depends on noise parameters, such as Bias Instability, Noise Density, Random Walk, and environmental causes, as temperature-induced bias. 4. A quantization factor Q, used to replicate the resolution of the sensor. The resulting function for the accelerometer is then: \[\widetilde{a}=\mathcal{A}(a,\theta)=\text{Q}\left(Mis\ C_{NED}^{IMU}(\theta)( a+g)+\epsilon+\delta\right). \tag{2}\] Similar formulations are used for simulating the outputs of gyroscopes and magnetometers. 
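As an illustration of Eq. (2), the following minimal Python sketch implements the accelerometer part of the IMU model. The Z-Y-X Euler rotation convention, the simple rounding-based quantization, and all parameter values are our own assumptions for the sake of a self-contained example, and the drift term is passed in as a pre-computed value rather than generated from the noise parameters listed above.

```python
import numpy as np

def rotation_ned_to_imu(roll, pitch, yaw):
    """Rotation matrix C_NED^IMU from the (roll, pitch, yaw) orientation (Z-Y-X convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return np.array([[cy * cp, sy * cp, -sp],
                     [cy * sp * sr - sy * cr, sy * sp * sr + cy * cr, cp * sr],
                     [cy * sp * cr + sy * sr, sy * sp * cr - cy * sr, cp * cr]])

def accelerometer_model(a_ned, orientation, mis, bias, drift, resolution,
                        g=np.array([0.0, 0.0, 9.81])):
    """Simulated accelerometer reading following Eq. (2):
    a_tilde = Q( Mis * C(theta) * (a + g) + bias + drift )."""
    c = rotation_ned_to_imu(*orientation)
    a_imu = mis @ (c @ (a_ned + g)) + bias + drift
    return np.round(a_imu / resolution) * resolution      # quantization Q(.)

# Usage: a train accelerating 0.5 m/s^2 northwards, IMU level and aligned with NED.
reading = accelerometer_model(a_ned=np.array([0.5, 0.0, 0.0]),
                              orientation=(0.0, 0.0, 0.0),
                              mis=np.eye(3),                    # no axis misalignment
                              bias=np.array([0.01, -0.02, 0.0]),
                              drift=np.zeros(3),
                              resolution=1e-3)
print(reading)
```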
Changing the noise, bias, resolution, or any other parameter in the datasheet of a specific sensor allows simulating different IMUs. ### _Trajectory Generator_ This tool generates pseudo-random train routes and related journey trajectories to be used as input files for the simulator. As a first step, the route is generated as a sequence of curves and straight blocks that follow the constraints imposed by the construction standards. The output is a sequence of spatially evenly distributed points \(\mathcal{P}=\{P_{k}\,|\,k=1,\dots,N\}\). Then, among the straight blocks, some of them are randomly selected as bridges, tunnels, and stations, using constraints and probability provided by the user. It is possible to interpolate the position between points by fitting three smoothing splines [25] on such points \(\mathcal{P}\), one for each axis (north, east, down). This approach allows importing the set \(\mathcal{P}\) from the digital map of an actual route. The second step defines the maximum train velocity on each block of the track, e.g., inside tunnels, on bridges, within stations, and in each curve as a function of its curvature. The details on the generation of such synthetic routes and velocity profiles are omitted for space limitations, also considering that the algorithm could be extended to better adapt to the final dataset produced by the simulator. Then, the tool exploits the kinematic model and the control law of the train, combined with the geometry of the route and the maximum velocity profile, to yield the final trajectory of the front and rear bogies of the vehicle (position, velocity, acceleration, and orientation) with the desired sampling time \(T_{S}\). Note that the orientation at each sampling instant is computed only using the geometry of the track. In fact, since the motion of the train is heavily constrained by the track, the only allowed orientation of a vehicle is the orientation of the track itself. Hence, the yaw, pitch, and roll angles are computed from the line tangent to the track in the current bogie position. At the same time, the angular velocity is obtained by kinematics as a function of the orientation and orientation rates on the three axes. The trajectory of a generic point on a vehicle can be computed from the trajectories of its front and rear bogies. ### _Dataset Export_ The generated dataset can be transmitted for online usage or stored to be employed offline. The proposed online method allows the user to directly connect the UE4 simulator to a ROS network, creating a sensor node that exposes the frame data right after the acquisition, providing a simulation system that can be tested and evaluated online. The ROSIntegration plugin [26] for UE4 is used to create distinct topics for images, point clouds, and inertial data that are transmitted through a TCP connection to a ROS [9] bridge node. Datasets are saved on the disk with the same data format used by other urban open datasets that can be found in the scientific community, such as the KITTI dataset [8]. In this way, most of the automotive algorithms can be tested on the saved train dataset to evaluate their performance in a railway environment without additional pre-processing. ## IV Experimental Results This section presents some experimental results aimed at testing the realism of the the simulated datasets. Section IV-A compares a real point cloud gathered in a static environment with a point cloud generated by TrainSim on a similar static scene re-created on the graphic engine. 
Section IV-B compares the performance of a state-of-the-art LiDAR odometry algorithm on a sequence of LiDAR frames generated by TrainSim Fig. 13: Examples of RGB images generated by TrainSim at different daytimes: left, morning; middle, evening; right, night. and taken from the KITTI dataset. Finally, Section IV-C compares the results of an image semantic segmentation algorithm applied to both the TrainSim generated data and the RailSem19 [13] dataset. ### _LiDAR Working Principle Analysis_ This section aims at evaluating the emulation of the working principles of a LiDAR sensor in TrainSim, considering both distance and backscattered intensity measurements. Real point clouds were acquired using a Scout Mini Robot1 equipped with a Velodyne VLP-162 LiDAR sensor. As illustrated in Figure 14, the robot was positioned in a courtyard in front of a wall (Figure 14, left) and a similar scenario has been re-created in TrainSim (Figure 14, right). Footnote 1: [https://global.agilex.ai/products/scout-mini](https://global.agilex.ai/products/scout-mini) Footnote 2: [https://velodynelidar.com/products/puck/](https://velodynelidar.com/products/puck/) The VLP-16 is a \(360^{\circ}\) rotating LiDAR with 16 vertically aligned laser beams covering a vertical Field Of View (FoV) of \(30^{\circ}\), vertical resolution of \(2^{\circ}\), and horizontal resolution of \(0.2^{\circ}\) for the default rotation speed. In the following, \(\theta\) denotes the yaw orientation angle, while \(\phi_{i}\) and \(\rho_{i}\) denote the vertical angle displacement and the distance reading of the \(i^{th}\) beam, respectively. The Root Mean Square Error (RMSE) between the real and the simulated cloud points was computed to evaluate the realism of the LiDAR simulation, as done in [27]. To be more consistent with some restrictions in the reconstructed environment, the point clouds have been cropped to reduce the horizontal FoV, setting \(\theta\in[-\frac{\pi}{2},\frac{\pi}{2}]\). It is worth noting that the simulated virtual scene has been re-constructed manually, introducing position errors due to measurement errors and shape misalignment imprecisions, which increased the resulting RMSE. Figure 15 illustrates the top and frontal view of the two point clouds (real data are presented in blue and simulated ones in purple). Note that the largest misalignment between points is due to the irregular shape of the real sidewalk, which is simply represented by a flat surface in the simulation. Excluding the sidewalk points from the comparison, the resulting RMSE was \(0.035~{}m\), which is in accordance with the precision of the VLP-16 LiDAR. The presented results were obtained with a simulated point cloud without considering the noise, since the datasheet of the LiDAR device reports the precision only for the distance \(\rho\) and not for the ray angular displacements. Adding to the measured distance \(\rho\) a Gaussian white noise comparable with the VLP-16 precision (i.e., zero mean and variance \(0.015\)) did not significantly change the RMSE, which was \(0.081~{}m\) considering the whole point cloud and \(0.04~{}m\) excluding the points belonging to the sidewalk. Concerning the backscattered intensity, there is no standard way to process the data, thus LiDAR manufacturers use different methods to compensate the measurements with respect to various parameters, such as distance and incidence angle (see Section III-E). Fig. 14: Real-world reference static scene (left) and similar scene re-created in the simulation framework (right). Fig. 15: Top and frontal view of the real and simulated point clouds.
Such compensation methods are frequently unknown, making it challenging to precisely reproduce the output backscattered intensity values for a specific LiDAR device. In particular, VLP-16 divides the intensity values into two subranges: values in \([0,100]\) map diffuse reflectors with a reflectance in the range \(0-100\%\), while values in \([101,255]\) represent retro reflectors with an ideal reflection. Unfortunately, the calibration mechanism is not precisely described in the VLP-16 User Manual, restricting the possibility of reproducing an exact representation of the backscattered intensity in TrainSim. For this reason, a qualitative comparison between the intensity values is presented, showing the distribution of the intensities with respect to incidence angles, distances, object materials, roughness, and reflectance. The model to account for the last three parameters has been taken from [19]. In the reference scene, retro-reflected elements are not present, and the intensity values are scaled in the \([0,100]\) range to match the VLP-16 specifications. Figure 16 shows the point clouds gathered from the reference scene and the simulated one, showing three different effects that can be highlighted: * the backscattered values of the sidewalk in front of the LiDAR decrease by increasing the angle of incidence; * the metal pole has high-intensity values in the frontal part of the pole, rapidly decreasing on the pole boundaries; * the effect of the angle of incidence on concrete material such as the wall is lower than the effect on metallic or plastic material. ### _LiDAR Odometry Analysis_ This experiment compares the results obtained with simulated point clouds against real ones on an odometry task. Due to the lack of public point cloud datasets acquired from a train, the KITTI [8] urban automotive dataset was selected for comparison. The purpose of LiDAR odometry is to predict the motion of the LiDAR sensor from consecutive LiDAR frames. The ego-motion estimation is done by iteratively computing the homogeneous transformation matrix \(T_{k}\) between two consecutive frames \(F_{k}\) and \(F_{k+1}\) that maximizes the alignment between the two frames. Formally, the transformation matrix is defined as \(T_{k}=\begin{bmatrix}R_{k}&t_{k}\\ \bar{0}&1\end{bmatrix}\), where \(R_{k}\) is a rotation matrix, \(t_{k}\) is a translation vector, and \(\bar{0}\) is a vector of zeros. The best alignment can be defined as an optimization process aimed at minimizing the following distance function: \[d_{m}(T_{k})=\sum_{i=1}^{N_{k}}\left\|R_{k}\cdot p_{i}+t_{k}-q_{i}\right\|, \tag{3}\] where \(N_{k}\) is the number of points in frame \(F_{k}\), \(p_{i}\in F_{k}\) is a point in frame \(F_{k}\), and \(q_{i}\in F_{k+1}\) is the point closest to \(p_{i}\) after applying transformation \(T_{k}\) to \(F_{k}\). From the estimated transformation \(T_{k+1}\), computed at time \(k+1\), it is possible to predict the ego-motion of the LiDAR sensor in terms of orientation \(R_{k}\) and translation \(t_{k}\). In this work, the LiDAR Odometry And Mapping (LOAM) [28] algorithm was used for the odometry task.
In particular, the LOAM algorithm is divided into two consecutive modules: (i) an odometry algorithm that is computed at a high frequency with low precision, and (ii) a mapping algorithm that is executed at a lower frequency but with a higher accuracy. By default, the odometry algorithm extracts 24 features, whereas the mapping algorithm extracts 240 features to have higher precision, with a ratio of 1:2 between corner and planar features. Both algorithms extract a fixed number of corner and planar features from frame \(F_{k+1}\), find and match the same features in the frame \(F_{k}\), and iteratively minimize the distance presented in Equation 3 to compute the best alignment transformation \(T_{k+1}\). To distinguish between planar and corner features, a feature factor \(c\) is computed [28] for each point, where a low \(c\) value indicates a planar feature, whereas a high \(c\) value indicates a corner feature. Then, features are ordered based on the \(c\) values and \(N\) corner points are selected taking the highest \(c\) values, and \(2N\) planar points are selected taking the lowest \(c\) values, where \(N\) is a user-defined parameter (set to 8 for the odometry step and to 80 for the mapping step). The estimation error of LOAM depends on the quality of the extracted features: environments containing repetitive features, such as tunnels or highways (hard to be detected in different frames), or with a low number of peculiar features lead to higher estimation errors. Moreover, since train and car motion mostly evolve in the X-Y plane, with low variations of the Z values, the Z evolution is not observable in such environments, unless the terrain presents substantial Z variations during motion. In this experiment, three different sequences of the KITTI dataset have been chosen. The sequence with the identifier 00 is completely gathered in an urban environment with abundant high-quality features. The second, with identifier 01, is gathered on a highway, which has low-quality features. The third sequence, identified as 09, is divided into two parts: the first is collected on a sloping street with a lot of vegetation on the sides, and the second is gathered in an urban environment. The comparison is made against three different sequences generated by TrainSim, where the environment is composed of vegetation, railway structures (e.g., poles, electrified structures, and rails), fences, and stations. Three different metrics have been chosen to evaluate the LOAM algorithm on the selected sequences: the estimation error of the translation along the \(X\) and the \(Y\) axis on the single transformations between each two consecutive LiDAR frames (**TEX** and **TEY**), and the cumulative position error in the \(X\)-\(Y\) plane computed over the traveled distance (**EOD**). The translation over \(Z\) was not taken into account due to the low variability of the \(Z\) coordinates, whereas the orientation estimations were not reported because the estimation error was below \(1^{\circ}\). Table I shows the results of the LOAM algorithm applied to the six sequences. The results indicate that the estimation error obtained on the simulated environment is comparable with the one over the KITTI sequences.
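Since the evaluation protocol is only summarized above, the following sketch shows one plausible way to compute per-frame TEX/TEY errors and an EOD-style drift-over-distance figure from estimated and ground-truth per-frame translations. The function names and the exact aggregation used for EOD are our assumptions and may differ from the computation actually used for Table I.

```python
import numpy as np

def odometry_metrics(t_est, t_gt):
    """Per-frame translation errors and overall error over distance (one possible definition).

    t_est, t_gt: (N, 2) arrays with the estimated / ground-truth translation (x, y)
    between each pair of consecutive LiDAR frames, in meters.
    Returns per-frame absolute errors (TEX, TEY) and EOD in percent.
    """
    err = np.abs(t_est - t_gt)
    tex, tey = err[:, 0], err[:, 1]
    # Cumulative position error in the X-Y plane over the travelled distance.
    drift = np.linalg.norm(np.cumsum(t_est - t_gt, axis=0), axis=1)
    travelled = np.cumsum(np.linalg.norm(t_gt, axis=1))
    eod = 100.0 * drift / np.maximum(travelled, 1e-9)
    return tex, tey, eod

# Usage with synthetic data: 100 frames, ~1 m of motion per frame plus estimation noise.
rng = np.random.default_rng(0)
gt = np.column_stack([np.full(100, 1.0), np.zeros(100)])
est = gt + rng.normal(0.0, 0.05, size=gt.shape)
tex, tey, eod = odometry_metrics(est, gt)
print(f"TEX {tex.mean():.3f}±{tex.std():.3f} m, TEY {tey.mean():.3f}±{tey.std():.3f} m, "
      f"EOD {eod.mean():.2f}% (max {eod.max():.2f}%)")
```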
In particular, the first simulated sequence is surrounded by vegetation and buildings, whereas the second and the third sequences present some fence stripes that introduce repetitiveness in the distribution of the features; in particular, the second sequence contains some heathland that is comparable with the highway environment. The same trend can be seen in the KITTI sequence, where the highest error Fig. 16: Frontal view of the real (right) and simulated (left) point clouds. The color associated with each point encodes the backscattered intensity values. \begin{table} \begin{tabular}{l|c c|c c|c c} Sequence & \multicolumn{2}{c|}{TEX} & \multicolumn{2}{c|}{TEY} & \multicolumn{2}{c}{EOD} \\ & \(\mu\pm\sigma\) & Max & \(\mu\pm\sigma\) & Max & \(\mu\pm\sigma\) & Max \\ \hline Sim 1 & 0.06\(\pm\)0.04 & 0.18 & 0.08\(\pm\)0.05 & 0.17 & 0.25\(\pm\)0.15\% & 1.45\% \\ Sim 2 & 0.34\(\pm\)0.18 & 0.69 & 0.54\(\pm\)0.28 & 1.18 & 2.47\(\pm\)2.06\% & 22.80\% \\ Sim 3 & 0.15\(\pm\)0.10 & 0.38 & 0.09\(\pm\)0.15 & 0.89 & 0.83\(\pm\)1.26\% & 7.64\% \\ \hline Kitti 00 & 0.14\(\pm\)0.18 & 1.11 & 0.17\(\pm\)0.24 & 1.40 & 0.44\(\pm\)0.80\% & 13.2\% \\ Kitti 01 & 2.17\(\pm\)0.71 & 3.57 & 1.04\(\pm\)0.51 & 2.76 & 11.40\(\pm\)3.68\% & 45.81\% \\ Kitti 09 & 0.26\(\pm\)0.19 & 0.70 & 0.34\(\pm\)0.20 & 0.86 & 1.65\(\pm\)0.86\% & 7.22\% \\ \end{tabular} \end{table} TABLE I: Localization results of the LOAM algorithm [28] applied to three sequences generated with TrainSim and three similar sequences from the KITTI dataset [8]. The values indicate the mean \(\pm\) the variance and the maximum error value of the translation error along the X axis (TEX), the translation error along the Y axis (TEY) computed in a single estimated transformation between two consecutive LiDAR frames expressed in meters, and the overall error over distance (EOD) in percentage. occurs in the KITTI 01 sequence gathered on the highway, while the lowest error is achieved in the KITTI 00, which is entirely acquired in an urban environment. ### _Image Segmentation Analysis_ In several works in the autonomous driving domain (e.g., [29, 30, 31, 32]), synthetic scenarios are used with _domain adaptation_ (DA) techniques to improve the accuracy of a neural model whenever there is a scarce availability of real-world annotated samples, which is particularly true for the railway domain. In particular, during training, such techniques help to select learnable features from synthetic images that enhance the model outcome in real-world testing scenarios. Therefore, this section presents an experiment aimed at evaluating the improvement obtained on a neural model when augmenting the training set with synthetic images generated by TrainSim. To do that, we evaluated the performance of a neural network on a real-world test set by comparing two different training modes: _semi-supervised_ (SS) [33] and _semi-supervised with domain adaptation_ (SSDA) [32]. More specifically, in SS mode, the neural model is trained using only real-world images, following supervised and unsupervised paradigms for labeled and unlabeled samples, respectively. In SSDA mode, instead, the model is trained using the same paradigms for real-world samples, but the training set is augmented with labeled synthetic images. In the experiment presented here, SSDA was performed via a discriminator approach [30] and, for consistency, the SS mode was also implemented by a discriminator approach [31] using the real-world annotated samples as the source dataset.
Real-world images were taken from the RailSem dataset [34], containing more than 8000 annotated samples collected from both railway and urban scenarios. In our tests, 6000 samples were used for the training set: \(6000-k\) with annotations and \(k\) without annotations, setting \(k=10\) and \(k=20\) to observe the difference in performance. Other 2000 samples were used for the real-world test set. In SSDA mode, the training set was augmented with 6700 annotated synthetic images collected from different simulated scenarios, similar to those described in Section IV-B, where different materials were randomly applied to the tracked and the landscapes, and various lighting conditions were used to add some variability to the gathered images. Since RailSem and TrainSim define two different sets of object classes, the analysis was conducted on a subset of RailSem classes also present in TrainSim (see Table II and Figure 17), while all the remaining classes were considered as 'background'. The neural architecture selected for the semantic segmentation task is a BiseNetX39 [35], trained by the Adam optimizer [36] with its default settings and a learning rate of 0.003. Batch size and training steps were set to 30 and 8000, respectively. The training was performed by using the classic pixel-wise cross-entropy loss. Input images were resized to (\(H\)=680, \(W\)=720) to reduce the computational cost, while random crop (scale \(1/2\)) and random horizontal flip were used for training set augmentation. Table II reports the performance achieved on the RailSem dataset (details in the caption), showing that the use of TrainSim improves the IoU on crucial classes (rail-track, tracked, and terrain), whereas the performance on other classes is reduced, most likely due to a more accentuated domain shift between synthetic and real-world textures. Figure 17 shows two real-world images taken from RailSem (a) and the corresponding segmented images produced by SS-20 (b) and SSDA-20 (c). In accordance with Table II, the model trained using synthetic samples (SSDA-20) produces more accurate segmentation maps. Despite the benefits discussed above, we also noticed that increasing the number of annotated real-world samples (i.e., more than 50) SSDA yields lower performance than SS (without TrainSim images). We believe this is due to a semantic domain shift between the simulated and real-world images, which become more relevant for a higher number of real-world samples. For instance, TrainSim does not account for complex textures contained in RailSem (e.g., crowded urban and driving scenarios). This forces the model to learn a constrained subset of visual patterns during SSDA, forgetting those that are not well-represented in TrainSim but still useful in real-world scenarios. Please also note that such a domain shift is more accentuated when running an unsupervised DA or without any DA strategy. This motivated us to explore SSDA, where a small subset of real-world data helps alleviate the domain shift. 
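For reference, the training configuration described above can be summarized in a short PyTorch sketch. The tiny convolutional stand-in below replaces BiSeNet-X39 only so that the snippet is self-contained, the random tensors replace RailSem/TrainSim batches, and the seven-class count (six evaluated classes plus background) is our reading of the setup; only the optimizer, learning rate, loss, batch size, and step count come from the text.

```python
import torch
import torch.nn as nn

# Reported configuration; the stand-in ConvNet is NOT the BiSeNet-X39 used in the paper.
LR, BATCH_SIZE, TRAIN_STEPS, NUM_CLASSES, H, W = 0.003, 30, 8000, 7, 680, 720

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, NUM_CLASSES, 1))           # per-pixel class logits
optimizer = torch.optim.Adam(model.parameters(), lr=LR)        # Adam with default betas
criterion = nn.CrossEntropyLoss()                              # pixel-wise cross-entropy

for step in range(2):                                           # 2 toy steps instead of 8000
    images = torch.rand(2, 3, H // 4, W // 4)                   # downscaled random batch
    labels = torch.randint(0, NUM_CLASSES, (2, H // 4, W // 4))
    loss = criterion(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```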
\begin{table} \begin{tabular}{l|c c|c c} Class & SS-10 & SSDA-10 & SS-20 & SSDA-20 \\ \hline pole & **0.104**\(\pm\)0.11 & 0.095 \(\pm\)0.13 & **0.117**\(\pm\)0.16 & 0.102 \(\pm\)0.11 \\ vegetation & **0.344**\(\pm\)0.26 & 0.298 \(\pm\)0.79 & **0.407**\(\pm\)0.17 & 0.366 \(\pm\)0.26 \\ terrain & 0.102 \(\pm\)0.78 & **0.156**\(\pm\)0.14 & 0.164 \(\pm\)0.35 & **0.194**\(\pm\)0.16 \\ sky & 0.740 \(\pm\)0.67 & 0.740 \(\pm\)0.10 & 0.773 \(\pm\)0.13 & **0.780**\(\pm\)0.14 \\ tracked & 0.325 \(\pm\)0.14 & **0.366**\(\pm\)0.41 & 0.391 \(\pm\)0.44 & **0.415**\(\pm\)0.05 \\ rail-track & 0.175 \(\pm\)0.06 & **0.192**\(\pm\)0.18 & 0.197 \(\pm\)0.26 & **0.208**\(\pm\)0.22 \\ \end{tabular} \end{table} TABLE II: Performance of the BiseNet model [35] achieved by a semi-supervised (SS) approach (with only RailSem samples) and a semi-supervised domain adaptation (SSDA) with both real-world and synthetic samples (Railsem + TrainSim). The values denote the Intersection over Union (IoU) and \(std\times 10\) of each class among a 4-fold cross-validation on RailSem. ’10’ and ‘20’ denote the number of real-world annotated samples, randomly extracted from RailSem. Fig. 17: Output predictions of RailSem real-world images. These points open interesting future works for investigating novel DA approaches for railway scenarios. Finally, it is also worth remarking that, to the best of our knowledge, this work is the first one that proposes an SSDA approach for semantic segmentation in railway scenarios. ## V Conclusions This paper presented TrainSim, a visual simulation framework designed to automatically generate a number of realistic railway scenarios and produce labeled datasets from emulated sensors, as LiDARs, cameras, and inertial measurement units. Such datasets are exported in a format suitable for training deep neural models for object detection, semantic segmentation, and depth estimation for camera data, or for processing 3D point clouds from a LiDAR. For each 3D point, the LiDAR model also provides the intensity of the backscattered ray, which can be used to simplify the discrimination of the tracks from other objects with higher diffusion coefficients. The preliminary experiments carried out on the simulated sensors show the effectiveness of the proposed approach, making the simulator a useful tool for investigating and testing new perception algorithms for railway applications. As a future work, we plan to extend the simulator by adding railway switches and meshes with higher fidelity, and upgrading TrainSim to Unreal Engine 5 to exploit its newer features, such as improved photorealism. Finally, the results reported in Section IV-C on image segmentation provide interesting insights for further investigating the use of domain adaptation in railway environments.
2309.03288
Intertwined van-Hove Singularities as a Mechanism for Loop Current Order in Kagome Metals
Recent experiments on Kagome metals AV$_3$Sb$_5$ (A=Cs,Rb,K) indicated spontaneous time-reversal symmetry breaking in the charge density wave state in the absence of static magnetization. The loop current order (LCO) is proposed as its cause, but a microscopic model explaining the emergence of LCO through electronic correlations has not been firmly established. We show that the coupling between van-Hove singularities (vHS) with distinct mirror symmetries is a key ingredient to generate LCO ground state. By constructing an effective model, we find that when multiple vHS with opposite mirror eigenvalues are close in energy, the nearest-neighbor electron repulsion favors a ground state with coexisting LCO and charge bond order. It is then demonstrated that this mechanism applies to the Kagome metals AV$_3$Sb$_5$. Our findings provide an intriguing mechanism of LCO and pave the way for a deeper understanding of complex quantum phenomena in Kagome systems.
Heqiu Li, Yong Baek Kim, Hae-Young Kee
2023-09-06T18:04:51Z
http://arxiv.org/abs/2309.03288v2
# Intertwined van-Hove Singularities as a Mechanism for Loop Current Order in Kagome Metals ###### Abstract Recent experiments on Kagome metals AV\({}_{3}\)Sb\({}_{5}\) (A=Cs,Rb,K) indicated spontaneous time-reversal symmetry breaking in the charge density wave state in the absence of static magnetization. The loop current order (LCO) is proposed as its cause, but a microscopic model explaining the emergence of LCO through electronic correlations has not been firmly established. We show that the coupling between van-Hove singularities (vHS) with distinct mirror symmetries is a key ingredient to generate LCO ground state. By constructing an effective model, we find that when multiple vHS with opposite mirror eigenvalues are close in energy, the nearest-neighbor electron repulsion favors a ground state with coexisting LCO and charge bond order. It is then demonstrated that this mechanism applies to the Kagome metals AV\({}_{3}\)Sb\({}_{5}\). Our findings provide an intriguing mechanism of LCO and pave the way for a deeper understanding of complex quantum phenomena in Kagome systems. _Introduction--_ The vanadium-based kagome metals AV\({}_{3}\)Sb\({}_{5}\) (A=Cs,Rb,K) has generated considerable interest due to the discovery of exotic phases in this family of materials [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. Superconductivity in these materials emerges at \(T_{c}\sim 0.9-2.8K\)[26; 27; 28; 29], with magnetoresistance measurements in ring-structured samples indicating the possibility of novel superconductivity with charge 4e and 6e flux quantization [30]. Additionally, a charge density wave (CDW) is detected below \(T_{CDW}\sim 80-100K\)[26; 31; 32; 33; 34; 35], with scanning tunneling microscopy revealing \(2\times 2\) lattice distortions, emphasizing the important role of van-Hove singularities near \(M\) point of the Brillouin zone. Intriguingly, these materials exhibit spontaneous time-reversal symmetry breaking (TRSB) after the CDW transition, evidenced through techniques such as muon spin relaxation and scanning tunneling microscope [36; 11; 32], alongside a large anomalous Hall effect [37] in the CDW phase without evidence of static magnetic order [38; 27; 31]. These observations indicate an unconventional CDW order in AV\({}_{3}\)Sb\({}_{5}\). The observation of TRSB without static magnetic order leads to the hypothesis of loop current order (LCO), but the mechanism to generate LCO remains unclear. Enormous experimental and theoretical efforts are devoted to determine the properties of CDW in this kagome system [39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68]. The simplest way to model the system is through a three-band model obtained by assigning a single orbital to each site. When the chemical potential is close to the pristine-type vHS, incorporating nearest neighbor (NN) electron interactions and electron-phonon coupling leads to a charge bond order (CBO) ground state rather than LCO [19]. Ref.\(\,\)[61] shows that LCO can be induced by electron interaction, but this necessitates a substantial next-nearest-neighbor (NNN) interaction, a condition not aligned with realistic scenarios. This poses a critical question: what are the conditions for the emergence of LCO in generic kaogme materials? 
A possible explanation for the lack of LCO in the aforementioned three-band model is because it only accounts for a pristine type of vHS, while in reality the kagome metal AV\({}_{3}\)Sb\({}_{5}\) hosts multiple vHS, including both pristine and mixed types. In this paper, we demonstrate that when two vHS with _different mirror symmetry eigenvalues_ are close to the Fermi level, a simple NN interaction can generate LCO when the coupling between different vHS is taken into account. This ground state has LCO coexisting with CBO dubbed loop current charge order (LCBO). We apply this analysis to AV\({}_{3}\)Sb\({}_{5}\) by considering a tight binding model with multiple vHS. We find that the ground state of AV\({}_{3}\)Sb\({}_{5}\) is LCBO under the conditions described below. This study unveils a mechanism for generating loop current order in systems with multiple vHS. _Conditions imposed by mirror symmetries--_ We first show that mirror symmetries impose important constraints on the wave functions at vHS, which are key ingredients for the emergence of LCBO. Each vHS at momentum \(M\) has little group \(D_{2h}\) with mutually perpendicular mirror planes \(m_{z},m^{\prime},m^{\prime\prime}\), where \(m_{z}\) coincides with kagome plane, \(m^{\prime}\) is parallel to \(\Gamma M\) line and \(m^{\prime\prime}\) is parallel to \(MK\) line. Consider two vHS near the Fermi level denoted by vH1 and vH2, the threefold rotation symmetry makes each vHS have the same symmetries at the three distinct momenta \(M\) denoted by \(M_{A},M_{B},M_{C}\) as in Fig.1(a). We show that mirror symmetries will constrain the wave function of vH1 and vH2 at three distinct \(M\) points to take the form of Table.1 as long as the following conditions are satisfied: (1) The wave functions of vH1 and vH2 have opposite eigenvalues under \(m^{\prime}\) and same eigenvalues under \(m^{\prime\prime}\). (2) vH1 and vH2 consist of the same type of orbital at the kagome sites. We demonstrate this conclusion explicitly using an example relevant to AV\({}_{3}\)Sb\({}_{5}\), where vH1 (vH2) is odd (even) under \(m^{\prime}\) and the orbitals are odd (even) under \(m^{\prime}\) (\(m^{\prime\prime}\)) as the colored orbitals in Fig.1(a). A generic proof for other types of orbitals \begin{table} \begin{tabular}{|c|c|c|} \hline & \(M_{A}\) & \(M_{B}\) & \(M_{C}\) \\ \hline vH1 & \((b^{\prime},0,0)\) & \((0,b^{\prime},0)\) & \((0,0,b^{\prime})\) \\ \hline vH2 & \((0,b,-b)\) & \((-b,0,b)\) & \((b,-b,0)\) \\ \hline \end{tabular} \end{table} Table 1: Weight of wave function in (A,B,C) sublattices for vH1 and vH2 at three distinct \(M\) points imposed by mirror symmetries, where \(b\) and \(b^{\prime}\) are constants. is shown in Fig.S2 in supplementary material [69]. To elaborate on the proof, let us inspect the form of wave function at momentum \(M_{c}\). In this case \(m^{\prime}\) coincides with \(m_{x}\) which maps sublattice A and B to each other and maps sublattice C to itself. Because the wave function of vH2 is even under \(m^{\prime}\) and the orbital at sublattice C is odd under \(m^{\prime}\), the weight of wave function must vanish at \(m^{\prime}\)-invariant sublattice C. Furthermore, wave function components of vH2 at sublattice A and B must have opposite signs to make the wave function even under \(m^{\prime}\) as in Fig.1(b). Therefore, the wave function of vH2 at momentum \(M_{C}\) must take the form \((b,-b,0)\) at A,B,C sublattices respectively, where \(b\) is a constant. 
A similar analysis can be applied to vH1, which gives the form of \((0,0,b^{\prime})\) instead [69], where \(b^{\prime}\) is another constant. The symmetry-allowed wave functions of vH1 and vH2 are shown in Fig.1(b). The wave function at momenta \(M_{A}\) and \(M_{B}\) can be obtained by threefold rotation. This leads to the wave function structure at each \(M\) point given in Table.1. _Effective model for coupled vHS--_ We construct an effective model that describes the coupling between different vHS. The order parameter for a complex CDW with \(2\times 2\) periodicity is written as: \[\Delta_{\alpha\beta}=\frac{V}{2N_{c}}\sum_{\mathbf{R}}\left(\langle c^{\dagger }_{\mathbf{R},\alpha}c_{\mathbf{R},\beta}\rangle-\langle c^{\dagger}_{ \mathbf{R},\alpha}c_{\mathbf{R}-\mathbf{d}_{\alpha\beta},\beta}\rangle\right) \cos(\mathbf{Q}_{\alpha\beta}\mathbf{\cdot R}), \tag{1}\] Here \(\mathbf{R}\) labels unit cells, \(V\) is the NN interaction strength, \(N_{c}\) is the number of unit cells, \(\alpha,\beta=A,B,C\) denote the kagome sublattices and \(\mathbf{Q}_{\alpha\beta}\) connects different momenta \(M\) as in Fig.1(a), and \(\mathbf{d}_{AB}=\mathbf{a}_{1},\mathbf{d}_{BC}=\mathbf{a}_{2},\mathbf{d}_{CA}= \mathbf{a}_{3}\). In phases that preserve threefold rotation symmetry the order parameters satisfy \(\Delta_{AB}=\Delta_{BC}=\Delta_{CA}\equiv\Delta\). The real part of \(\Delta\) represents CBO, the imaginary part represents LCO and a complex value of \(\Delta\) represents the coexisting phase of LCO and CBO, denoted as LCBO in Fig.1(c). The phase with real \(\Delta>0\) (\(\Delta<0\)) is denoted as CBO\({}^{\star}\) (CBO\({}^{-}\)) as shown in Fig.1(d,e). We can write down an effective model on patches near the three \(M\) points to describe the coupling between different vHS. The coupling between vHS at different \(M\) points is proportional to the order parameter with coupling strength determined by the wave function components at vHS. We choose the basis \(u_{1}(M_{A}),u_{1}(M_{B}),u_{1}(M_{C}),u_{2}(M_{A}),u_{2}(M_{B}),u_{2}(M_{C})\) where \(u_{1},u_{2}\) denotes the wave function for vH1 and vH2 respectively. Let \(\mathbf{k}\) denote the small deviation from \(M\) with \(|\mathbf{k}|<k_{cut}\). Given the form of wave functions in Table.1 and the order parameter in Eq.(1), the effective Hamiltonian with leading terms in \(\mathbf{k}\) is found to take the following form [69]: \[H_{\text{eff}}(\mathbf{k}) =\begin{pmatrix}\epsilon_{1}&s_{1}\Delta&s_{1}\Delta^{*}&-i \lambda k_{1}&0&0\\ s_{1}\Delta^{*}&\epsilon_{1}&s_{1}\Delta&0&-i\lambda k_{2}&0\\ s_{1}\Delta&s_{1}\Delta^{*}&\epsilon_{1}&0&0&-i\lambda k_{3}\\ i\lambda k_{1}&0&0&\epsilon_{2}&s_{2}\Delta^{*}&s_{2}\Delta\\ 0&i\lambda k_{2}&0&s_{2}\Delta&\epsilon_{2}&s_{2}\Delta^{*}\\ 0&0&i\lambda k_{3}&s_{2}\Delta^{*}&s_{2}\Delta&\epsilon_{2}\end{pmatrix},\] \[\equiv\begin{pmatrix}P_{1}&Q^{\dagger}\\ Q&P_{2}\end{pmatrix}. \tag{2}\] Here \(s_{1}=-2|b^{\prime}|^{2}\) and \(s_{2}=2|b|^{2}\) are determined by wave function components in Table.1. \(P_{1},P_{2},Q\) are \(3\times 3\) matrices, \(k_{1}=-\frac{1}{2}k_{x}+\frac{\sqrt{3}}{2}k_{y},k_{2}=-\frac{1}{2}k_{x}-\frac{ \sqrt{3}}{2}k_{y},k_{3}=k_{x}\). \(\epsilon_{1}\) and \(\epsilon_{2}\) denote the energies of vH1 and vH2 respectively. The chemical potential \(\mu\) is set between \(\epsilon_{1}\) and \(\epsilon_{2}\). The matrix \(P_{1}\) (\(P_{2}\)) describes the effect of CDW order on vH1 (vH2) at momenta \(M_{A},M_{B},M_{C}\). 
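To make Eq. (2) concrete, the patch Hamiltonian can be assembled numerically. The sketch below is a minimal NumPy transcription of the matrix above; the function name and the default parameter values (taken from the AV\({}_{3}\)Sb\({}_{5}\) estimates quoted later in the text, with \(k\) measured in units of \(1/a\)) are our own illustrative choices, not code from the paper.

```python
import numpy as np

def h_eff(kx, ky, delta, eps1=6.16, eps2=6.40, b=0.52, bp=0.96, lam=0.35):
    """Patch Hamiltonian of Eq. (2): two vHS (vH1, vH2) at the three M points,
    coupled by a complex 2x2 CDW order parameter `delta` and a k-linear term."""
    s1, s2 = -2.0 * abs(bp) ** 2, 2.0 * abs(b) ** 2       # signs fixed by mirror symmetry
    k1 = -0.5 * kx + np.sqrt(3.0) / 2.0 * ky              # deviations from M_A, M_B, M_C
    k2 = -0.5 * kx - np.sqrt(3.0) / 2.0 * ky
    k3 = kx
    d, dc = delta, np.conj(delta)
    P1 = eps1 * np.eye(3) + s1 * np.array([[0, d, dc], [dc, 0, d], [d, dc, 0]])
    P2 = eps2 * np.eye(3) + s2 * np.array([[0, dc, d], [d, 0, dc], [dc, d, 0]])
    Q = 1j * lam * np.diag([k1, k2, k3])                  # vH1-vH2 coupling, linear in k
    return np.block([[P1, Q.conj().T], [Q, P2]])

# sanity checks: Hermiticity, and decoupling of the two vHS exactly at the M point (k = 0)
H = h_eff(0.03, -0.02, 0.05 * np.exp(2j * np.pi / 3))
assert np.allclose(H, H.conj().T)
assert np.allclose(h_eff(0.0, 0.0, 0.05)[:3, 3:], 0.0)
```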
The threefold rotation symmetry permutes the three \(M\) points, which requires \((P_{n})_{12}=(P_{n})_{23}=(P_{n})_{31}\) for \(n=1,2\), and whether these matrix elements are related to \(\Delta\) or \(\Delta^{*}\) is determined by the wave function at the vHS. The \(\lambda\) term describes the coupling between the two vHS at the same \(M\) point. This term is linear in \(k\) because \(\epsilon_{1}\) and \(\epsilon_{2}\) are exact eigenstates when \(k=0\) in the absence of charge order, hence the \(\lambda\) term should vanish at \(k=0\) and its leading order is linear in \(k\). Mirror symmetries are essential for the form of this effective Hamiltonian. For example, the coefficient in front of the complex CDW order parameter is \(s_{1}=-2|b^{\prime}|^{2}\) in block \(P_{1}\) and \(s_{2}=2|b|^{2}\) in block \(P_{2}\). The relative sign difference in these coefficients comes from the \(-b\) term in Table.1 [69], which is a consequence of mirror symmetries. Another important consequence is that mirror symmetries require the off-diagonal block \(Q\) to be a diagonal matrix. In general the CDW order parameter \(\Delta\) can mix different vHS at different \(M\) points and appear in the off-diagonal elements of \(Q\). However, with the wave function structure in Table.1, the off-diagonal elements of \(Q\) must vanish because they are multiplied by the zeros of the wave function components from either vH1 or vH2 according to Table.1 [69].

Figure 1: (a): Kagome plane of AV\({}_{3}\)Sb\({}_{5}\) (A=Cs, Rb, K). The red (blue) parts denote regions of orbitals with positive (negative) amplitude. The mirrors \(m^{\prime}\) and \(m^{\prime\prime}\) are shown in the figure. The inset shows the Brillouin zone. (b): Real-space wave function of vH1 and vH2 at \(M_{C}\) allowed by mirror symmetries. (c): Coexisting loop current order and charge bond order (LCBO). The red bonds represent modulations of \(\langle c^{\dagger}_{\mathbf{r}}c_{\mathbf{r}^{\prime}}\rangle\) on NN bonds and the arrows represent the direction of the current \(I\sim i\langle c^{\dagger}_{\mathbf{r}}c_{\mathbf{r}^{\prime}}\rangle-i\langle c^{\dagger}_{\mathbf{r}^{\prime}}c_{\mathbf{r}}\rangle\). (d): Charge bond order with \(\Delta>0\). (e): Charge bond order with \(\Delta<0\).

_Mechanism to generate LCBO_-- We now discuss the last condition for LCBO to be the ground state of a system described by Eq.(2). To derive this, we start from the \(\lambda=0\) limit and perform perturbation theory in \(\lambda\). Let \(D\equiv|\Delta|\). When \(\lambda=0\), \(H_{\rm eff}({\bf k},\Delta)\) and \(H_{\rm eff}({\bf k},\Delta e^{\frac{2\pi i}{3}})\) have the same eigenvalues because they are related by a gauge transformation \({\cal U}={\rm diag}\{1,\omega,\omega^{*},1,\omega^{*},\omega\}\) with \(\omega=e^{\frac{2\pi i}{3}}\). Hence when \(\lambda=0\) the free energy \(F\) is invariant under \(\Delta\rightarrow\Delta e^{\frac{2\pi i}{3}}\), and \(F\) has degenerate minima at \(\Delta=-D\) and \(\Delta=De^{\pm\frac{2\pi i}{3}}\), corresponding to CBO\({}^{-}\) and LCBO respectively. The eigenvalues of \(H_{\rm eff}-\mu\) at both minima are the same and are given by: \[E_{1}=\epsilon_{2}-\mu-4|b|^{2}D,\ E_{2}=E_{3}=\epsilon_{1}-\mu-2|b^{\prime}|^{2}D,\] \[E_{4}=E_{5}=\epsilon_{2}-\mu+2|b|^{2}D,\ E_{6}=\epsilon_{1}-\mu+4|b^{\prime}|^{2}D \tag{3}\] When the energy separation between vH1 and vH2 is small, the sign of each eigenvalue is determined by the \(D\) term, hence the negative eigenvalues are \(E_{1},E_{2},E_{3}\).
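Two statements above can be checked directly on the \(3\times 3\) blocks, since the two vHS decouple at the \(M\) points where the \(\lambda\) term drops out: the \(\lambda=0\) spectrum is unchanged under \(\Delta\rightarrow\Delta e^{2\pi i/3}\), and diagonalizing the blocks at the CBO\({}^{-}\) point \(\Delta=-D\) reproduces Eq. (3), whose three negative eigenvalues enter the low-temperature free energy discussed next. The snippet below is a small numerical check with illustrative parameter values of our own.

```python
import numpy as np

b, bp = 0.52, 0.96
s1, s2 = -2.0 * bp ** 2, 2.0 * b ** 2
w = np.exp(2j * np.pi / 3)

def P1(delta, eps1=0.0):
    """vH1 block of Eq. (2) at k = 0 (the lambda coupling vanishes there)."""
    d, dc = delta, np.conj(delta)
    return eps1 * np.eye(3) + s1 * np.array([[0, d, dc], [dc, 0, d], [d, dc, 0]])

def P2(delta, eps2=0.05):
    """vH2 block of Eq. (2) at k = 0."""
    d, dc = delta, np.conj(delta)
    return eps2 * np.eye(3) + s2 * np.array([[0, dc, d], [d, 0, dc], [dc, d, 0]])

D, mu, eps1, eps2 = 0.04, 0.02, 0.0, 0.05

# (i) lambda = 0 gauge invariance: the spectrum only depends on Delta up to exp(2*pi*i/3)
for delta in (-D + 0j, D * np.exp(0.3j)):
    for block in (P1, P2):
        assert np.allclose(np.linalg.eigvalsh(block(delta)),
                           np.linalg.eigvalsh(block(delta * w)))

# (ii) Eq. (3) reproduced at the CBO- point Delta = -D
ev = np.sort(np.concatenate([np.linalg.eigvalsh(P1(-D + 0j, eps1)),
                             np.linalg.eigvalsh(P2(-D + 0j, eps2))]) - mu)
eq3 = np.sort([eps2 - mu - 4 * b**2 * D,
               eps1 - mu - 2 * bp**2 * D, eps1 - mu - 2 * bp**2 * D,
               eps2 - mu + 2 * b**2 * D, eps2 - mu + 2 * b**2 * D,
               eps1 - mu + 4 * bp**2 * D])
assert np.allclose(ev, eq3)
```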
In the low-temperature limit the sum of them determines the free energy. When \(\lambda\) becomes finite, the degenerate minima of \(F\) at \(\Delta=-D\) and \(De^{\pm\frac{2\pi i}{3}}\) corresponding to CBO\({}^{-}\) and LCBO splits. The amount of splitting can be computed by degenerate perturbation theory that captures the evolution of \(E_{1-3}\) with \(\lambda\). Define \(\delta\epsilon\equiv\epsilon_{2}-\epsilon_{1}\) as the separation between vH1 and vH2 and denote \(A\) as the system area. We find that the difference in free energy density \(f=F/A\) between CBO\({}^{-}\) and LCBO is given by: \[f_{\text{CBO}^{-}}-f_{\text{LCBO}}=\] \[\sum_{|{\bf k}|\leq k_{cut}}\frac{-2\lambda^{2}(k_{1}k_{2}+k_{2} k_{3}+k_{1}k_{3})D(|b|^{2}+|b^{\prime}|^{2})}{A(2D(|b|^{2}+|b^{\prime}|^{2})+ \delta\epsilon)(4D(|b|^{2}+|b^{\prime}|^{2})-\delta\epsilon)}\] \[=\frac{3}{16\pi}\frac{\lambda^{2}k_{cut}^{4}D(|b|^{2}+|b^{\prime} |^{2})}{(2D(|b|^{2}+|b^{\prime}|^{2})+\delta\epsilon)(4D(|b|^{2}+|b^{\prime}| ^{2})-\delta\epsilon)}>0. \tag{4}\] Eq.(4) shows that for small energy separation \(\delta\epsilon<4(|b|^{2}+|b^{\prime}|^{2})D\), a finite coupling \(\lambda\) between the two vHS will make LCBO have lower energy and be more favorable than the competing phase CBO\({}^{-}\). Note that the mirror symmetries discussed above are crucial for the validity of Eq.(4). The different mirror eigenvalues between vH1 and vH2 lead to the wave function structure in Table.1, which results in the form of effective Hamiltonian in Eq.(2) with \(s_{1}\) and \(s_{2}\) having opposite signs and \(Q\) being a diagonal matrix. Then Eq.(4) becomes valid, leading to an LCBO ground state. This is the mechanism to generate LCBO in kagome systems. _Application to AV\({}_{3}\)Sb\({}^{-}\)_ We apply the above analysis to AV\({}_{3}\)Sb\({}^{-}\) and explicitly construct the effective Hamiltonian \(H_{\rm eff}\). We start from a tight binding model that captures multiple vHS near the Fermi level. The bands close to the Fermi level in AV\({}_{3}\)Sb\({}^{-}\) are mainly made of \(d\) orbitals at V sites and \(p\) orbitals at Sb sites. We consider the tight binding model introduced in Ref. [65]. This model includes three \(p\) orbitals at each out-of-plane Sb site and one \(d\) orbital at each V site, and this \(d\) orbital is made of a specific linear combination of \(d_{xz},d_{yz}\) orbitals as indicated by the colored orbitals in Fig.1(a) which is odd (even) under \(m^{\prime}\) (\(m^{\prime\prime}\)), denoted as \(\vec{d}\) orbitals. Hence there are three \(\vec{d}\) orbitals and six \(p\) orbitals in each unit cell, leading to a 9-band model \(H_{TB}({\bf k})\). This model considers various hopping processes including \(\vec{d}-\vec{d}\), \(p-\vec{d}\) and \(p-p\) hopping, and the hopping parameters and onsite potentials are obtained by comparing with DFT band structure [65]. The band structure of \(H_{TB}({\bf k})\) is shown in Fig.2(a). Compared with the DFT band structure in Fig.2(b), the 9-band model reproduces two vHS at momentum \(M\) denoted by vH1 and vH2. vH1 is odd (even) under \(m^{\prime}\) (\(m^{\prime\prime}\)) and is mainly made of \(\vec{d}\) orbitals. vH2 is even under both \(m^{\prime}\) and \(m^{\prime\prime}\) and is a superposition of \(\vec{d}\) and \(p\) orbitals. 
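Stepping back to the splitting result for a moment, the closed-form expression in Eq. (4) can be evaluated directly to see where the LCBO solution wins. The snippet below (units treated schematically, parameter values ours) shows that the splitting is positive for \(\delta\epsilon<4(|b|^{2}+|b^{\prime}|^{2})D\) and changes sign beyond that window.

```python
import numpy as np

def splitting(lam, D, d_eps, b=0.52, bp=0.96, k_cut=0.1):
    """Closed-form f_CBO- - f_LCBO from Eq. (4); positive values favor LCBO."""
    w = (abs(b) ** 2 + abs(bp) ** 2) * D
    return 3.0 / (16.0 * np.pi) * lam**2 * k_cut**4 * w / ((2 * w + d_eps) * (4 * w - d_eps))

D, lam = 0.05, 0.35
threshold = 4 * (0.52**2 + 0.96**2) * D          # LCBO requires d_eps below this value
for d_eps in (0.0, 0.5 * threshold, 0.9 * threshold, 1.2 * threshold):
    print(f"d_eps = {d_eps:.3f}  splitting = {splitting(lam, D, d_eps):+.2e}")
```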
Compared with commonly used three-band models in kagome systems which can only reproduce vH1, this 9-band model has the advantage in capturing the dispersion and wave function composition at both vH1 and vH2, hence it provides a useful platform to study the interplay between different vHS. Next we consider the NN electron interaction given by \[H_{V}=V\sum_{\langle{\bf R}\alpha;{\bf R}^{\prime}\beta\rangle}c^{\dagger}_{{ \bf R},\alpha}c^{\dagger}_{{\bf R},\alpha}c^{\dagger}_{{\bf R}^{\prime},\beta }c^{\dagger}_{{\bf R}^{\prime},\beta}, \tag{5}\] where \(\langle{\bf R}\alpha;{\bf R}^{\prime}\beta\rangle\) denotes NN bonds. With the order parameter \(\Delta_{\alpha\beta}\) defined in Eq.(1), the NN interaction can be mean-field decoupled as [65]: \[H_{V}^{MF} = -\sum_{\bf k}\left(\Delta_{\alpha\beta}(1-e^{i{\bf k}\cdot{\bf d} _{\alpha\beta}})c^{\dagger}_{{\bf k}-{\bf Q}_{\alpha\beta},\beta}c_{{\bf k}, \alpha}+h.c.\right) \tag{6}\] \[+2N_{c}\frac{|\Delta_{\alpha\beta}|^{2}}{V},\] We can write down a mean field Hamiltonian that includes all bands in Fig.2(a) and the CDW order parameter in Eq.(6) with \(\Delta_{AB}=\Delta_{BC}=\Delta_{CA}\equiv\Delta\). To construct the effective patch model \(H_{\rm eff}\), we focus on momenta near the \(M\) points and perform a unitary transformation into the band basis in which the basis functions at \(M\) points are eigenfunctions of the tight binding model. Then we keep only the matrix elements corresponding to the energies and couplings between vH1 and vH2. This leads to a \(6\times 6\) matrix \(H_{\rm eff}({\bf k})\) corresponding Figure 2: (a): Band structure of the \(9\times 9\) tight-binding model \(H_{TB}({\bf k})\) that can reproduce vH1 and vH2. The red color represents the weight of \(\vec{d}\) orbitals in the wave function. (b): Band structure obtained from DFT with vH1 and vH2 highlighted. The figure is adapted from Ref.[65]. to the six patches at vH1 and vH2 near the three \(M\) points. By performing a Taylor expansion in \(\mathbf{k}\) and keeping leading order terms, we obtain \(H_{\text{eff}}\) in Eq.(2) with parameters \(\epsilon_{1}=6.16eV,\epsilon_{2}=6.40eV,b=0.52,b^{\prime}=0.96,\lambda=0.35eV\cdot a\), where \(a=5.48\AA\) is the lattice constant. Because the wave functions at both vHS have significant weight on \(\tilde{d}\) orbitals, the coupling \(\lambda\) between the two vHS receives major contribution from the hopping amplitude \(t_{dd}\) between nearest-neighbor \(\tilde{d}\) orbitals hence \(\lambda\) is generally nonzero. With a finite \(\lambda\), the above theory for LCBO is applicable to \(\text{AV}_{3}\text{Sb}_{5}\), indicating that LCBO is a natural ground state stabilized by NN interaction. _Phase diagram of CDW orders--_ The phase diagram of \(H_{\text{eff}}\) obtained by minimizing the free energy with respect to \(\Delta\) at different chemical potential and interaction strength is shown in Fig.3(a). The LCBO phase is more pronounced near vH2 due to the difference in wave function structures at vH1 and vH2. Eq.(4) requires the eigenvalues \(E_{1-3}\) be negative and \(E_{4-6}\) be positive. Based on Eq.(3), these conditions lead to \(4|b^{\prime}|^{2}D>\delta\epsilon\) when \(\mu\sim\epsilon_{2}\), while when \(\mu\sim\epsilon_{1}\) they lead to \(4|b|^{2}D>\delta\epsilon\). Since \(|b^{\prime}|>|b|\) due to the larger weight of \(\tilde{d}\) orbital at vH1, when \(\mu\sim\epsilon_{2}\) it requires smaller \(D\) and smaller interaction to realize LCBO. 
This leads to the smaller critical interaction strength near vH2 as shown in the phase diagram. The competition between CBO\({}^{-}\) and LCBO depends on the strength of \(\lambda\). The free energy of the CBO\({}^{-}\) and LCBO phases at \(\mu=\epsilon_{2},V=1.3eV\) as a function of coupling strength \(\lambda\) is shown in Fig.3(b). It shows LCBO and CBO\({}^{-}\) are degenerate when \(\lambda=0\), and a finite \(\lambda\) makes the free energy of LCBO lower than CBO\({}^{-}\), consistent with Eq.(4). _Effects of the other bands--_ In \(\text{AV}_{3}\text{Sb}_{5}\) there are other bands near the Fermi level and their effects need to be investigated. For this purpose, we consider an effective patch model obtained by adding one more band below vH1 (denoted as \(\epsilon_{3}\)) in Fig.2(a) to \(H_{\text{eff}}\), which expands it to a \(9\times 9\) matrix near the \(M\) points. This model includes vH1, vH2 and \(\epsilon_{3}\), and its phase diagram is shown in Fig.4(a). Compared with Fig.3(a) which only includes vH1 and vH2, the main difference in Fig.4(a) arises near vH1, whereas near vH2 which is further away from \(\epsilon_{3}\) the two phase diagrams are similar with LCBO emerging in both cases. We further demonstrate that the emergence of LCBO inferred from the patch model remains valid when all the bands in the tight-binding model are considered and the momentum cutoff is removed. The phase diagram obtained with all bands in \(H_{TB}(\mathbf{k})\) included is shown in Fig.4(b). The summation of momentum in computing the free energy is taken over the Brillouin zone rather than a small patch near the \(M\) point. The LCBO phase exists near vH2, whereas near vH1 the ground state are CBO due to the effect of band structure away from \(M\) points and the other bands that are not taken into account in the patch models. This comparison suggests despite the quantitative difference in these phase diagrams, our main finding of LCBO survives in the full-band model as long as the chemical potential is near vH2. _Discussion--_ We provide a mechanism to realize LCBO in kagome systems based on the coupling between multiple vHS with different symmetry representations. This mechanism is not only applicable to kagome metal \(\text{AV}_{3}\text{Sb}_{5}\), but also applicable to other systems as long as the vHS satisfy the required symmetry conditions such that the effective Hamiltonian takes the form of Eq.(2). In addition to the LCO phase corresponding to the imaginary part of LCBO, the real part of LCBO order parameter can induce lattice distortion with star of David or tri-hexagonal patterns. Experiments in \(\text{AV}_{3}\text{Sb}_{5}\) have observed staggered patterns of lattice distortion among different kagome layers [24]. If the ground state is described by LCBO, we expect the loop current order to be staggered along the c axis as well. Our theory shows LCBO is more favorable when the energy difference \(\delta\epsilon\) between vHS is small. Experiments and first-principle computations suggest pressure can lead to an increase of \(\delta\epsilon\)[40; 68], hence we expect LCBO to disappear under high pressure, which is consistent with the disappearance of CDW under high pressure observed in experiments [67; 28; 43; 62]. The phase diagram of \(\text{AV}_{3}\text{Sb}_{5}\) in Fig.4(b) suggests LCBO emerges when the chemical potential is close to vH2. Thus we predict that electron-doping the material is more likely to induce the LCBO phase. 
Figure 3: (a): Phase diagram of \(H_{\text{eff}}\) at different interaction strengths and chemical potentials with parameters \(\epsilon_{1}=6.16eV,\epsilon_{2}=6.40eV,b=0.52,b^{\prime}=0.96,\lambda k_{cut}=0.1eV\) and temperature 90 K. \(PM\) refers to a pristine metal without any CDW order. (b): Free energy of LCBO and CBO\({}^{-}\) as a function of the coupling \(\lambda\) at a fixed interaction strength, showing that LCBO is favored at finite \(\lambda\).

Figure 4: (a): Phase diagram of the effective patch model obtained by including vH1, vH2 and \(\epsilon_{3}\). \(PM\) refers to a pristine metal without any CDW order. (b): Phase diagram that takes into account all bands in \(H_{TB}\), with the momentum summation over the full Brillouin zone. The LCBO phase still exists near vH2.

_Acknowledgment--_ This work is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Center for Quantum Materials at the University of Toronto. H.Y.K. acknowledges support by the Canadian Institute for Advanced Research (CIFAR) and the Canada Research Chairs Program. Y.B.K. is supported by the Simons Fellowship from the Simons Foundation and the Guggenheim Fellowship from the John Simon Guggenheim Memorial Foundation. Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium. SciNet is funded by: the Canada Foundation for Innovation under the auspices of Compute Canada; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto.
2301.00148
Mapping effective connectivity by virtually perturbing a surrogate brain
Effective connectivity (EC), indicative of the causal interactions between brain regions, is fundamental to understanding information processing in the brain. Traditional approaches, which infer EC from neural responses to stimulations, are not suited for mapping whole-brain EC in humans due to being invasive and having limited spatial coverage of stimulations. To address this gap, we present Neural Perturbational Inference (NPI), a data-driven framework designed to map EC across the entire brain. NPI employs an artificial neural network trained to learn large-scale neural dynamics as a computational surrogate of the brain. NPI maps EC by perturbing each region of the surrogate brain and observing the resulting responses in all other regions. NPI captures the directionality, strength, and excitatory/inhibitory properties of brain-wide EC. Our validation of NPI, using models having ground-truth EC, shows its superiority over Granger causality and dynamic causal modeling. Applying NPI to resting-state fMRI data from diverse datasets reveals consistent and structurally supported EC. Further validation using a cortico-cortical evoked potentials dataset reveals a significant correlation between NPI-inferred EC and real stimulation propagation pathways. By transitioning from correlational to causal understandings of brain functionality, NPI marks a stride in decoding the brain's functional architecture and facilitating both neuroscience research and clinical applications.
Zixiang Luo, Kaining Peng, Zhichao Liang, Shengyuan Cai, Chenyu Xu, Dan Li, Yu Hu, Changsong Zhou, Quanying Liu
2022-12-31T08:09:13Z
http://arxiv.org/abs/2301.00148v4
# Mapping the whole-brain effective connectome with excitatory-inhibitory causal relationship ###### Abstract Understanding the large-scale causal relationship among brain regions is crucial for elucidating the information flow that the brain integrates external stimuli and generates behaviors. Despite the availability of neurostimulation and computational methods to infer causal relationships among a limited number of regions, these approaches are not capable of mapping the causal network of the entire brain, also known as the effective brain connectome (EBC). To address this gap, we propose a data-driven framework called Neural Perturbational Inference (NPI) and map the human EBC for the first time. NPI uses an artificial neural network trained to learn large-scale neural dynamics as a surrogate brain. By perturbing each region of the surrogate brain and observing the resulting responses in all other regions, the human EBC is obtained. This connectome captures the directionality, strength, and excitatory-inhibitory distinction of brain-wide causal relationships, offering mechanistic insights into cognitive processes. EBC provides a complete picture of information flow both within and across brain functional networks as well as reveals the large-scale hierarchy of the organization of excitatory and inhibitory ECs. As EBC captures the neurostimulation transmission pathways in the brain, it has great potential to guide the target selection in personalized neurostimulation of neurological disorders. ## 1 Introduction The brain is a complex network of interconnected regions that work in concert to integrate information from the environment with internal dynamic states and generate a wide range of behaviors [1]. Understanding the flow of information among brain regions is essential for comprehending the connection between stimuli and responses. However, current measures of macroscopic inter-region connections, such as structural connectivity (SC) and functional connectivity (FC), fall short of providing information flow within the brain and thus limit the mechanistic understanding of brain functions. SC provides a static representation of physical connections but fails to capture the dynamic nature of brain function [2]. FC examines statistical associations among regional neural signals but is still not a causal relationship [3]. Therefore, it is necessary to measure the effective connectivity (EC), which captures the positive or negative causal impact a given region can have on its downstream regions, thereby depicting the flow of information [4]. Despite the availability of methods that infer EC among a few regions, a whole-brain EC, which we call the effective brain connectome (EBC), is still lacking. In order to understand the complete information flow from receiving external information to multi-sensory integration and behavior generation, an accurate EBC is desperately in need. Experimental manipulation is a straightforward and widely used approach to examine the input-output causality, which is also the EC, among brain regions [5]. By manipulating a specific brain region and simultaneously observing the induced effects at other regions, it provides direct evidence of causality [6, 7]. Several manipulation techniques, such as electrical stimulations and optogenetics, and observation techniques, including electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI), have been applied for mapping causal relationships [8, 9]. 
However, there are technical gaps to map EBC by experimental manipulations. The invasive neurostimulation techniques is hard to be performed at many brain regions. Concurrent stimulation and neural responses observation is also not feasible at the whole-brain scale. Computational approaches provide alternative solutions, which infer EC from regional neural signals with both model-based and model-free methods [10, 11]. However, existing computational methods still have limited ability to offer an accurate EBC. Model-based methods, such as dynamic causal modeling (DCM), rely heavily on model assumptions and thus suffer from inference biases due to the model mismatch. Model-free methods that are typically based on statistics, such as Granger Causality, characterize the direction of EC but is often hard to determine the strength and excitatory-inhibitory distinction of EC [12]. Moreover, as the meaning of EC varies across computational frameworks, how they relate to EC obtained by experimental neuromodulation is still unclear. As a result, human EBC is still lacking. With the proliferation of big data in neuroscience, such as brain imaging or electrophysiological recordings, an increasing number of studies are employing artificial neural networks (ANNs) to unveil hidden information from these vast neural data [13]. Applications encompass neural decoding, neuroimaging reconstruction, and the diagnosis and control of neurological disorders [14, 15, 16]. Despite the remarkable fitting capabilities of ANNs, for example, a recurrent neural network (RNN) well-tracking the nonlinear dynamics of a few nodes [17], training an ANN to capture the large-scale brain dynamics and to represent its underlying connectivity structure remains a challenge. Previous studies have shown that perturbing the input variables of ANN and monitoring alterations in output enables the identification of the causal relationship between a specific input and the output [18]. Such perturbation-based procedure in ANN is analogous to deriving causal relationships through experimental intervention [6, 7]. Enlightened by this analogy, we incorporate the perturbation-based experiments into a data-driven framework to study the causality in the brain. Once the ANN is trained to capture the brain-wide neural dynamics, systematically perturbing this ANN can yield a map of causal relationships among all brain regions, which is the human EBC. Here, we present a novel approach called Neural Perturbational Inference (NPI). NPI trains an ANN to accurately model brain dynamics and then uses it as a surrogate brain to be perturbed. The whole-brain EC is then obtained by perturbing the surrogate brain region by region while simultaneously observing the resulting neural responses in all other regions. We applied NPI to human resting-state fMRI (rsfMRI) signals and obtained the human EBC for the first time. The inferred EBC exhibits whole-brain causal interactions with directionality, strength, and the excitatory-inhibitory distinction, providing a comprehensive view of macroscopic resting-state information flow within and across functional networks. It uncovers the neural mechanism under cognitive processes and deepens our understanding of brain structure-function relationship. In addition, since EBC reflects the transmission pathway of neurostimulation, it has great potential in guiding the target selection in neuromodulation such as in personalized treatment of neurological and psychiatric diseases. 
## 2 Results ### The NPI framework NPI is a framework that non-invasively infers EBC from neural signals (Fig. 1). From brain imaging or electrophysiological recordings, the collective neural activities of multiple brain regions are easily available, but how these regions interact to process information is unclear (Fig. 1 a). NPI aims to infer EC among regions in the entire brain, which are directed causal connections. Experimental perturbations such as electrical and magnetic stimulations have widely been used to map EC among a few brain regions [8]. To enable brain-wide perturbation and avoid physically stimulating the real brain, NPI uses a Figure 1: **The NPI framework for inferring EBC.** (**a**) Schematic of the brain network and the neural signal of each brain region. The EBC among regions is unknown and to be inferred. (**b**) A surrogate brain, an ANN, is trained to replace the real brain to be perturbed. The ANN model is trained to capture brain dynamics in terms of predicting the next brain state \(\mathbf{x}(t+1)\) given the current brain state \(\mathbf{x}(t)\). (**c**) After training, ANN is systematically perturbed to infer EC. Perturbation is applied as a selective increase of neural signal at one region. The red background indicates an increase in neural signal compared with not applying perturbation, while the blue indicates a decrease. After perturbing one region, the magnitude of the one-step responses refers to a one-to-all EC. (**d**) The all-to-all EC (EBC) can be inferred by perturbing the ANN region by region. This EBC is a brain-wide map of causal influences that has directionality, strength, and excitatory-inhibitory distinction. (**e**) ANN trained on individual data accurately models BOLD dynamics. The \(r^{2}\) of one-step prediction using ANN is near 1.0 on both the training (0.991) and testing (0.988) data. (**f**) Recurrently feeding the result of one-step prediction as input to ANN produces the generated neural signals. The model FC and empirical FC are respectively calculated from generated BOLD signals and empirical BOLD signals and are averaged across 800 subjects. They are highly correlated with a correlation coefficient of 0.97. (**g**) After perturbing region \(b\), the increased signal in the region \(a\) and decreased signal in the region \(c\) indicate an excitatory EC from \(b\) to \(a\), and an inhibitory EC from \(b\) to \(c\). (**h**) The response to a perturbation is state-dependent. We perturb the ANN at different brain states and take the averaged response to be EC. The distribution of three ECs of one subject is demonstrated. Green bars show the response of left V1 after perturbing left OFC (\(mean=-0.31,std=0.25\)), brown one shows the response of left V3 after perturbing left V2 (\(mean=0.30,std=0.10\)), and the blue one shows the response of right V1 after perturbing left V1 (\(mean=0.76,std=0.20\)). data-driven approach to infer EC. Conceptually, NPI is similar to perturbing the real brain through neurostimulation, but it uses an ANN as a surrogate brain to replace the real brain, which enables efficient whole-brain perturbation and observation (Fig. 1b). This study implements the ANN as a multi-layer perceptron. The ANN is trained to predict brain state at the next time step based on the brain state at the current step by minimizing the one-step-ahead prediction error. After training, the ANN is treated as a surrogate brain. It is then systematically perturbed to extract the underlying EC (Fig. 1c). 
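As a sketch of the training-and-perturbation loop described here, the surrogate can be a small PyTorch MLP fitted by one-step-ahead prediction and then probed region by region. The network width, optimizer settings, and function names below are illustrative choices of ours, not the authors' implementation.

```python
import torch
import torch.nn as nn

def fit_surrogate(X, hidden=256, epochs=300, lr=1e-3):
    """Fit an MLP surrogate x(t) -> x(t+1) on a (timepoints, regions) signal array."""
    X = torch.as_tensor(X, dtype=torch.float32)
    xt, xt1 = X[:-1], X[1:]                      # one-step-ahead training pairs
    n = X.shape[1]
    net = nn.Sequential(nn.Linear(n, hidden), nn.ReLU(),
                        nn.Linear(hidden, hidden), nn.ReLU(),
                        nn.Linear(hidden, n))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):                      # simple full-batch training loop
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(xt), xt1)
        loss.backward()
        opt.step()
    return net

def perturb_ec(net, states, amp=1.0):
    """EC[i, j]: predicted change of region j at t+1 after adding `amp` to region i
    at time t, averaged over the supplied initial states (one state per row)."""
    states = torch.as_tensor(states, dtype=torch.float32)
    n = states.shape[1]
    ec = torch.zeros(n, n)
    with torch.no_grad():
        base = net(states)                       # unperturbed one-step prediction
        for i in range(n):
            perturbed = states.clone()
            perturbed[:, i] += amp               # selectively stimulate source region i
            ec[i] = (net(perturbed) - base).mean(dim=0)
    return ec.numpy()

# usage sketch: X is a (timepoints, regions) BOLD array for one subject
# net = fit_surrogate(X)
# EC  = perturb_ec(net, X)                       # row i = output EC of region i
```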
By perturbing a source region and observing the response of target region at the next time step, the EC from the source region to the target region is inferred based on the change in predicted neural activity with and without perturbation. Systematically perturbing each node in ANN reveals the EBC (Fig. 1d), which characterizes the directionality, strength, and excitatory-inhibitory distinction of the causal influences among all brain regions. It represents the extent to which one brain region can positively or negatively influence others. The ANN in NPI exhibits a high ability to model fMRI dynamics, using rsfMRI data from 800 subjects in the Human Connectome Project (HCP) dataset [19]. After separate ANN training for each subject, the ANN in NPI accurately learns the mapping between two consecutive BOLD signals, as indicated by the coefficient of determination (\(r^{2}\)) that are close to 1 for both the training (\(r^{2}=0.991\)) and the testing data (\(r^{2}=0.988\)) (Fig. 1e). Besides the accurate one-step prediction, the ANN model also captures the interaction relationships among brain regions. We recursively feed the predicted signals back into the ANN model and generate the synthetic BOLD signals over 200 time steps (Fig. 1f). The FC calculated from the generated BOLD signals (model FC) and FC calculated from the empirical BOLD signals (empirical FC) are compared, both of which are averaged across 800 subjects. The model FC and empirical FC are strongly correlated (\(r=0.98\), \(p\leq 10^{-4}\)), suggesting ANN captures the dynamic relationship among brain regions. Together, the evidence from accurate one-step prediction and FC recovery suggest that the trained ANN model is valid to serve as a surrogate brain for virtual perturbation experiments. The parameters in ANN are fixed after training. Perturbations are then applied to explore the underlying causal relationships among nodes. In the ANN, the perturbation is implemented as an increase in the neural activity of a particular source node, while keeping the neural signals of other target nodes unchanged. Both the perturbed and unperturbed signals are input to the ANN, which maps the corresponding next state of the neural signals. The difference in the next state of target regions given the perturbed and unperturbed current state is measured as the EC from the source node to target nodes. Compared with the next state without perturbation, an increased activity of a target region after perturbation indicates an excitatory EC from the source node to the target node, while a decrease indicates an inhibitory EC (Fig. 1f). Since the nonlinear nature of brain dynamics, the response to perturbation varies with the initial states. This is similar to the state-dependent response happened in real stimulation [20]. We thus calculate the averaged response after applying perturbations at multiple initial states as the final EC (Fig. 1g). ### Validation of NPI using an RNN model To show the effectiveness of NPI framework in recovering the underlying EC, NPI is applied to recover the EC from the dynamics generated by an RNN model with known ground-truth SC. A single-layer RNN is used with the weight matrix (the SC of RNN) randomly sampled from a Gaussian distribution with zero mean value (Fig. 2 a). The FC of neural dynamics is calculated as the Pearson's correlation among regional signals. The obtained FC is correlated with the ground-truth SC with the coefficient 0.54 (Fig. 2b,d). 
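A self-contained version of this synthetic benchmark can be set up in a few lines. The paper's exact RNN update rule and noise level are not reproduced here, so the sketch below assumes a standard noisy tanh rate update with a zero-mean Gaussian weight matrix; the ground-truth EC is obtained by perturbing the generative model directly, as elaborated next.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 10, 20000
SC = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))   # ground-truth coupling, zero mean

def step(x, noise=0.1):
    """One update of the assumed dynamics x(t+1) = tanh(SC @ x(t)) + noise."""
    return np.tanh(SC @ x) + noise * rng.normal(size=n)

X = np.zeros((T, n))                                   # synthetic "neural" signals
for t in range(1, T):
    X[t] = step(X[t - 1])
FC = np.corrcoef(X.T)                                  # functional connectivity

def direct_ec(amp=1.0, n_init=1000):
    """Ground-truth EC: perturb node i, propagate one noiseless step, and average
    the response of every node over randomly chosen initial states."""
    ec = np.zeros((n, n))
    for i in range(n):
        for t in rng.integers(0, T, size=n_init):
            x = X[t]
            xp = x.copy()
            xp[i] += amp
            ec[i] += (np.tanh(SC @ xp) - np.tanh(SC @ x)) / n_init
    return ec

EC = direct_ec()
off = ~np.eye(n, dtype=bool)
# EC from i to j is mediated by the weight SC[j, i], hence the transpose below
print("corr(FC, SC):", np.corrcoef(FC[off], SC.T[off])[0, 1])
print("corr(EC, SC):", np.corrcoef(EC[off], SC.T[off])[0, 1])
```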
The EC of the RNN model can be obtained by directly perturbing the model (Fig. 2c). Perturbing a region is implemented as an increase in the neural signal of this region. After a time step, the averaged changes in neural signals of other regions are calculated as the strengths of EC from the perturbed region to other regions, with the maximum EC scaled to 1 (Fig. S1). Since the state-dependence of perturbation-induced response, the averaged response after perturbing at 1000 randomly chosen initial states is calculated as the final EC. The resulting EC is strongly correlated with SC (\(r=0.94\)), which is reasonable since EC is constrained by SC (Fig. 2d). Due to the modulation of nonlinear brain dynamics and the noise in the neural signals, EC does not exactly equal to SC. Despite that, the correlation between EC and SC is much higher than that between FC and SC, which may result from the spurious connections and the lack of directionality in FC. We next examine whether NPI-inferred EC recovers the EC from direct perturbing RNN model and the ground-truth SC. NPI is applied to the RNN generated neural signals to infer EC (Fig. 2e). The NPI-inferred EC is highly correlated with the EC obtained by direct perturbation and is thus also highly correlated with the SC (Fig. 2f). This suggests that the ANN in NPI accurately predicts the response to a perturbation given an particular initial state. This high correlation also validate the effectiveness of NPI for inferring EC from neural dynamics. To show the direct evidence of the prediction ability of ANN, we examine the ability of ANN in predicting the neural signals at the next step for both training and testing set. The training set consists of consecutive data pairs sampled from generated neural dynamics, as in Fig. 2b. The testing set is constructed by applying perturbations to a region and mapping the next step by the ground-truth RNN model. The signals in the testing set are not sampled from the RNN generated neural dynamics and are thus out-of-distribution (OOD) data. The result shows that the ANN trained on the observed neural signals can also generalize to the OOD testing set (Fig. 2g), which is the foundation of the successful EC inference. The robustness of NPI is also tested with the RNN model. When perturbing the ANN, various perturbation magnitudes are tested and the result shows that the inference performance is robust Figure 2: **Validation of NPI using an RNN model** To validate the effectiveness of NPI, we test its ability to recover ground-truth SC using synthetic data generated from an RNN model with known SC. (**a**) The RNN model with SC sampled from a Gaussian distribution with zero mean value. The maximum weight in SC is scaled to 1. (**b**) The synthetic neural dynamics are generated by the RNN model. FC is then calculated from the generated signals. (**c**) The output EC of a source node is obtained as the magnitude of response at target nodes after perturbing this source node. Perturbing all nodes in turn offers an all-to-all EC. The maximum weight is scaled to 1. (**d**) Both EC (\(r=0.94,p\leq 10^{-3}\)) and FC (\(r=0.54,p\leq 10^{-3}\)) of RNN are strongly correlated with SC of RNN. EC better reconstructs SC than FC. (**e**) NPI infers the EC of RNN from generated neural signals by training and perturbing a surrogate ANN trained to learn the neural dynamics in RNN. (**f**) The inferred EC is strongly correlated with EC obtained by direction perturbing the RNN model and the ground-truth SC in RNN. 
(**g**) The ability of ANN in predicting the RNN signal at the next time step is assessed by the coefficient of determination. The training set is constructed using consecutive pairs in generated neural signals as in (**b**). The testing set is constructed by the resulting neural activities produced by perturbing the signal of a region and mapping the next step using the RNN model. (**h**) The inference performance is robust against the magnitude of the perturbation (compared with the standard deviation of the signal). (**i**) The inference performance is robust against the standard deviation of the system noise. (**j**) With the increasing number of regions, NPI needs more data to achieve a good inference performance. against the perturbation magnitudes (Fig. 2h). The performance under different levels of system noise is also tested. Despite some decrease in the inference performance with increasing noise, the performance is overall robust (Fig. 2i). When testing NPI on data with different lengths and different numbers of regions, result shows more data is needed to infer EC from the network with a larger number of regions (Fig. 2j). To validate the ability of NPI on BOLD signals, we apply NPI on a publicly available dataset by Sanchez-Romero et al. [21], which has been used to validate many EC inference algorithms. The data generation process involves neural firing rate dynamics followed by a hemodynamic response function (HRF) that transforms the neural signals into BOLD signals (Fig. S2). The entries in SC are all binary values (either 0 or 1). We binarize the NPI-inferred EC and compare it with the underlying SC, evaluating its ability to correctly classify the presence or absence of connections. The performance of classification is measured using the area under the receiver operating characteristic curve (AUC). The result shows that the AUC of NPI is very close to 1.0 and NPI performs significantly better than the two baseline EC inference methods (Granger causality and DCM) (Fig. S2e). ### The human EBC inferred by NPI We apply NPI to rs-fMRI data from 800 subjects in the HCP dataset. The obtained resting-state EC originating from the left hemisphere is plotted in Fig. 3a with the maximum response scaled to 1 (EC for the entire brain is shown in Fig. S3). The positive entries indicate excitatory EC and negative entries indicate inhibitory EC. The excitatory EC has a maximum strength of 1, while inhibitory EC has a maximum strength of 0.22. The brain regions are assigned to seven functional networks (visual network (VIS), somatomotor network (SOM), dorsal attention network (DAN), ventral attention network (VAN), frontoparietal network (FPN), and default mode network (DMN)) according to Yeo et al. [22] (Fig. 3b). Among all the EC entries, 78% of the ECs are significantly different from zero (Fig. S4, \(p\leq 0.05\), FDR corrected). Seed-based EC is then analyzed to examine the topographic organization of functional networks in EBC. The top 15% excitatory and top 15% inhibitory output ECs from seeds in six functional brain networks are plotted, showing a similar structure as networks defined by FC and revealing more information on how the seed regions inhibit other regions (Fig. 3c). The majority of ECs have small and near-zero strengths, with a few having very large strengths. The distribution shows a long-tail property. We fit the strengths to four hypothesized distributions: log-normal, normal, exponential, and inverse Gaussian. 
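One way to carry out this comparison is with maximum-likelihood fits ranked by the Akaike information criterion, as sketched below; the use of scipy's generic fits (which include a location shift) is our own implementation choice rather than the authors' procedure.

```python
import numpy as np
from scipy import stats

def aic_ranking(strengths):
    """Rank candidate distributions for positive EC strengths by AIC (smaller = better)."""
    x = np.asarray(strengths)
    x = x[x > 0]
    candidates = {"log-normal": stats.lognorm, "normal": stats.norm,
                  "exponential": stats.expon, "inverse gaussian": stats.invgauss}
    aic = {}
    for name, dist in candidates.items():
        params = dist.fit(x)                              # maximum-likelihood fit
        loglik = np.sum(dist.logpdf(x, *params))
        aic[name] = 2 * len(params) - 2 * loglik
    return sorted(aic.items(), key=lambda kv: kv[1])

# usage sketch: a synthetic log-normal sample standing in for the EC strengths
sample = np.random.default_rng(1).lognormal(mean=-3.0, sigma=1.0, size=5000)
for name, value in aic_ranking(sample):
    print(f"{name:16s} AIC = {value:.1f}")               # log-normal should rank first here
```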
According to the Akaike information criterion (AIC), the log-normal distribution is the best fit for both excitatory and inhibitory ECs, as well as for the combination of absolute strength of them (Fig. 3d,f, S1). It is consistent with the distribution of SC found in experimental studies using tract-tracing techniques involving mice and macaques [23, 24]. The log-normal distributions of excitatory and inhibitory ECs are reproducible under the Automated Anatomical Labeling (AAL) parcellation (Fig. S5). The 50 strongest excitatory and inhibitory ECs are respectively plotted in Fig. 3e,g, and the top 20 strongest excitatory and inhibitory ECs are highlighted in Fig. S6a,b. The strongest excitatory ECs are mostly intra-network connections (41 out of 50), either intra-hemisphere or inter-hemisphere. The strongest inhibitory ECs are mostly inter-network connections (36 out of 50), and are all inter-hemisphere connections. In a network, the degree of a node refers to the number of connections it has with other nodes in the network and can be used to measure the centrality or importance of that node in the network. We binarize the EBC at a threshold that larger than 80% of absolute EC strengths (0.063) (Fig. S3b). The ECs with absolute strengths below the threshold are set to 0, while the rest are set to 1. The excitatory and inhibitory ECs are not differentiated in binarized EBC. Since EC is directed and thus asymmetric, the in-degree of a node is different from the out-degree. In binarized EBC, most of the ECs are Figure 3: **The human EBC**. (**a**) The averaged EBC of 800 subjects, with regions organized according to functional networks. Each row represents the EC from a source region in the left hemisphere to the entire cortex. (**b**) Cortical areas are assigned to seven functional resting-state networks: visual network (VIS), somatomotor network (SOM), dorsal attention network (DAN), ventral attention network (VAN), limbic network (LIM), frontoparietal network (FPN), and default mode network (DMN). (**c**) Topological maps of EC from a seed region in seven cortical networks respectively. The seed region is indicated with a black dot on each map. (**d,f**) The strength of both excitatory (d) and inhibitory (f) EC follows a log-normal distribution, as demonstrated by the fitting curve of log-transformed EC strengths by a Gaussian distribution. (**e,g**) The 50 strongest excitatory (e) and inhibitory (g) EC. (**h**) The degree distribution of regions in the binarized EBC. EBC is binarized by a threshold larger than 80% of ECs. The degree is calculated as the mean of the in-degree and out-degree of each region. The in-degree represents the number of input ECs to that region and the out-degree represents the number of output ECs from that region. (**i**) 30 brain regions with the largest degree after binarizing EBC. bidirectional (73%), consistent with previous findings on SC [25]. Regions with the largest averaged in-out degrees are plotted in Fig. 3i. They are dispersed across the cortex in several functional networks (Fig. S6c). ### The organization of excitatory and inhibitory ECs across functional networks EBC distinguishes the excitatory and inhibitory causal influences in large-scale connections for the first time, since previous measures including SC and FC fails to distinguish them. This distinction offers the mechanism under cognitive processes with more details as well as guides the choosing of neurostimulation targets that excite or inhibit the desired regions. 
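The binarized-EBC summary used above (thresholding at the 80th percentile of absolute strength, in/out-degrees, and the fraction of reciprocated links) might be computed as in the following sketch; the handling of the diagonal and the percentile convention are our assumptions.

```python
import numpy as np

def degree_summary(ebc, pct=80):
    """Binarize an (N, N) EC matrix at the pct-th percentile of |EC| (diagonal dropped)
    and return in-degree, out-degree and the fraction of reciprocated links."""
    A = np.abs(np.asarray(ebc, dtype=float))
    np.fill_diagonal(A, 0.0)
    thr = np.percentile(A[A > 0], pct)
    B = (A > thr).astype(int)                    # B[i, j] = 1 keeps the EC from i to j
    out_deg = B.sum(axis=1)
    in_deg = B.sum(axis=0)
    frac_bidir = (B * B.T).sum() / max(B.sum(), 1)
    return in_deg, out_deg, frac_bidir

# usage sketch on a random stand-in for the EBC (379 regions as in the MMP parcellation)
ebc = np.random.default_rng(2).normal(scale=0.05, size=(379, 379))
in_deg, out_deg, frac = degree_summary(ebc)
top = np.argsort(-(in_deg + out_deg) / 2)[:30]   # the 30 highest mean-degree regions
```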
Recent discoveries in both the decomposition of large-scale FC [26] and local measurements across multiple brain areas [27] have revealed a principal hierarchy of functional differences across cortical areas, spanning from primary Figure 4: **The organization of excitatory and inhibitory ECs across functional networks (a,b) The excitatory (a) and inhibitory (b) parts of EC from the left hemisphere to the whole cortex. (c,d) The averaged excitatory (c) and inhibitory (d) EC strength across seven functional brain networks. (e,f) The averaged excitatory EC strength (e) is higher for ipsi-lateral and intra-network ECs, while the averaged inhibitory EC strength (f) is higher for contra-lateral and inter-network ECs. (g,h) The averaged excitatory (g) and inhibitory (h) EC strength within and across seven functional brain networks. Networks are ordered by the averaged intra-network excitatory EC strength.** sensorimotor networks to higher-order association networks. Here we examine how excitatory (Fig. 3(a)) and inhibitory (Fig. 3(b)) ECs are organized across functional brain networks. The averaged excitatory/inhibitory EC strength from network \(X\) to network \(Y\) is calculated as the total strengths of excitatory/inhibitory EC from regions in \(X\) to regions in \(Y\) divided by the total number of ECs from regions in \(X\) to regions in \(Y\). The averaged inter-network EC strength for a particular network is calculated as the average strength from or to that network. The average strength and the maximum strength of excitatory ECs are both higher than that of inhibitory EC. In addition, the excitatory ECs have a higher density for ipsi-lateral and intra-network connections, while inhibitory ECs have a higher density for contra-lateral and inter-network connections (Fig. 3(c),d). Within seven networks, the average strength of excitatory and inhibitory ECs are in two opposite hierarchies. The excitatory ECs have higher strengths in primary sensory networks (VIS and SOM) than in association networks (DAN, VAN, FPN, DMN, and LIM), with the highest strengths in the SOM (Fig. 3(e),f). The inhibitory ECs, on the other hand, have higher strengths in association networks than in primary sensory networks, with the highest strength in the LIM. The excitatory inter-network ECs have lower strengths compared with intra-network ECs and have similar strengths in seven networks, while inhibitory inter-network ECs have higher strengths and follow a similar hierarchy as intra-network ECs. ### Information flow within DMN and between DMN and cortex DMN is a functional network that is generally more active during rest or spontaneous thought than during task performing. It is thought to be involved in a wide range of cognition, such as memory consolidation, social cognition, and the integration of information from different brain regions [26, 28, 29]. However, the mechanisms by which the DMN performs these functions are elusive, particularly in terms of macroscopic information integration. Our inferred EBC reveals that DMN has the highest inter-network inhibitory density (Fig. 3(h)), thus playing an inhibitory role in cortical dynamics. We examine the information flows within and across DMN and focuse on the four core regions of the DMN: the medial prefrontal cortex (mPFC), left inferior parietal cortex (LIPC), right inferior parietal cortex (RIPC), and posterior cingulate cortex (PCC) (Fig. 4(a)). 
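The network-averaged excitatory and inhibitory strengths defined above can be obtained from the EC matrix and a vector of network labels, for example as in the sketch below (our own variable names; self-connections are excluded from the within-network averages).

```python
import numpy as np

def network_averaged_ec(ebc, labels):
    """Average excitatory and inhibitory EC strength between functional networks.

    ebc    : (N, N) array, entry [i, j] = EC from region i to region j
    labels : length-N integer array of network assignments (e.g. the 7 Yeo networks)
    """
    E = np.asarray(ebc, dtype=float).copy()
    np.fill_diagonal(E, 0.0)                               # drop self-connections
    nets = np.unique(labels)
    exc = np.zeros((len(nets), len(nets)))
    inh = np.zeros((len(nets), len(nets)))
    for a, X in enumerate(nets):
        for c, Y in enumerate(nets):
            block = E[np.ix_(labels == X, labels == Y)]
            n_pairs = max(block.size - (np.sum(labels == X) if X == Y else 0), 1)
            exc[a, c] = np.clip(block, 0, None).sum() / n_pairs
            inh[a, c] = -np.clip(block, None, 0).sum() / n_pairs   # reported as a magnitude
    return exc, inh

# usage sketch: EC from NPI and a label vector assigning each region to a network
# exc, inh = network_averaged_ec(EC, yeo_labels)
```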
Since the MMP atlas does not explicitly contain all four core DMN regions, the BOLD signals from core DMN regions are separately extracted (Table S3) and combined with 379-dimensional signals from MMP, yielding 383-dimensional signals. The NPI applied to the 383-dimensional signals to get the EC within and across DMN. The inferred EC within the core DMN regions and their inter-individual variability are shown in Fig. 4(b),c. All twelve ECs among core DMN regions are significantly existing. There are weak inhibitory ECs from mPFC to the other three regions, suggesting an inhibitory role of mPFC within the DMN. All other ECs within core DMN are excitatory. The DMN's inflow from the cortex and its outflow to the cortex are shown in Fig. 5d,e. Although there are overlaps between the corresponding inflow and outflow, they are not identical. Despite the inhibitory ECs from mPFC to other three DMN regions, mPFC outputs excitatory ECs to a wide range of cortical regions, suggesting the role of the mPFC in integrating information from the DMN and spreading to other parts of the cortex, which is possibly linked to memory consolidation and consciousness formation [30, 31]. ### EBC explains SC and FC To better understand how brain structure supports its rich functionality, we examine the relationship among whole-brain SC, EC, and FC. We find that EC is strongly correlated with both SC and FC, and all three connectomes demonstrate a modular structure with higher connections within functional networks (Fig. 6a-d). The correlation between EC and SC is higher than that between FC and SC, indicating that EC better explains SC than FC (Fig. 6b, Fig. S7c). For a strong EC, there is usually a strong SC, suggesting the SC provides a structural basis for effective information flow. Figure 5: **Information flow within DMN and between DMN and cortex** (**a**) The core DMN regions include the medial prefrontal cortex (mPFC), left inferior parietal cortex (LIPC), right inferior parietal cortex (RIPC), and posterior cingulate cortex (PCC). (**b**) The EC among core DMN regions is depicted, with excitatory EC represented in red and inhibitory EC represented in blue. The thickness of the arrows reflects the strength of the EC. (**c**) The distribution of the strengths of EC within DMN across subjects. The distribution is colored as the color of the source regions. Twelve ECs within DMN are all significantly existing (\(p\leq 0.05\), FDR corrected). (**d,e**) The input (d) and output (e) EC between the core DMN regions and the cortex are depicted on a topographical map. FC has widely been used to understand the important inter-regional interactions in diseases, cognitive tasks, and behaviors. However, the incomplete understanding of the formation mechanism of FC limits the study of the neural basis underlying FC differences. Most connections in SC and EC have very small strengths, while FC has a larger average strength, which may result from spurious connections caused by confounds (Fig. 6d). The entries with strong EC tend to have strong FC, while entries with small EC can have either strong or weak FC (Fig. 6d), which also indicates vast indirect connections and confounds in FC. To measure the role of indirect EC in the formation of FC, we examine the similarity of input and output EC of every pair of regions and see how they relate to FC. The input EC correlation between regions \(X\) and \(Y\) is calculated as the correlation between the \(X\)-\(th\) column and the \(Y\)-\(th\) column of the EBC (Fig. 6e). 
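The column-wise similarity just defined can be computed directly from the EC matrix (rows would give the corresponding output-side measure); the sketch below uses our own naming, zeroes the diagonal before correlating, and shows how the off-diagonal entries would be compared against FC.

```python
import numpy as np

def input_ec_correlation(ebc):
    """Similarity of the inputs two regions receive: the correlation between
    columns i and j of the EC matrix (diagonal zeroed first)."""
    E = np.asarray(ebc, dtype=float).copy()
    np.fill_diagonal(E, 0.0)
    return np.corrcoef(E.T)        # np.corrcoef correlates rows, so pass the columns as rows

def offdiag(M):
    return M[~np.eye(M.shape[0], dtype=bool)]

# usage sketch with stand-in matrices; with real data EC comes from NPI and FC from rsfMRI
rng = np.random.default_rng(3)
EC = rng.normal(scale=0.05, size=(60, 60))
FC = np.corrcoef(rng.normal(size=(60, 400)))
print(np.corrcoef(offdiag(input_ec_correlation(EC)), offdiag(FC))[0, 1])
```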
A high input EC correlation indicates that two regions have many shared inputs, which may contribute to enhanced FC between them. Similarly, output EC correlation between regions \(X\) and \(Y\) is calculated as the correlation between the \(X\)-\(th\) row and the \(Y\)-\(th\) row of the EBC (Fig. 6g). A high output EC correlation suggests that two regions have many shared Figure 6: **The structural, functional, and effective connectome of human brain** (**a**) The SC of the left hemisphere is obtained from diffusion tensor imaging (DTI) data and averaged across 800 subjects. (**b**) The correlation between the strength of the same connections in SC and EC (\(r=0.36\), \(p<10^{-4}\)). (**c**) The FC of the left hemisphere for 800 subjects is calculated as Pearson’s correlation coefficient of fMRI signals among cortical regions. (**d**) The correlation between the strength of the same connections in FC and EC (\(r=0.66\), \(p<10^{-4}\)). (**e**) We define the input EC correlation of the region \(X\) and \(Y\) as the correlation between the \(X\)-\(th\) and the \(Y\)-\(th\) column of EBC, which measures the similarity of inputs to region \(X\) and region \(Y\). (**f**) The correlation between the strength of the FC input EC correlation (\(r=0.85\), \(p<10^{-4}\)). (**g**) We define the output EC correlation of the region \(X\) and \(Y\) as the correlation between the \(X\)-\(th\) and the \(Y\)-\(th\) row of EBC, which measures the similarity of outputs from region \(X\) and region \(Y\). (**h**) The correlation between the strength of the FC and output EC correlation (\(r=0.76\), \(p<10^{-4}\)). outputs. Our results showed that input EC correlation is a better predictor of FC than output EC correlation and EC itself (Fig. 6f,h), suggesting that FC is determined by both direct connections and shared inputs. ## 3 Discussion Our brain is a distributed and interactive network that allows for receiving stimulus inputs and generating response outputs [32, 33]. Understanding this input-output causal relationship among the entire brain requires mapping the EBC. In this study, we presented a data-driven framework called NPI to infer the EBC with directionality, strength, and excitatory-inhibitory distinction of causal relationships. We applied NPI to resting-state human fMRI data and obtained the human EBC, which uncovered the whole-brain organization of excitatory and inhibitory ECs within and across functional networks. ### NPI is a general data-driven framework to infer causal relationship Despite EC as a common terminology in the neuroscience literature [10, 11, 34], the definition of EC is ambiguous: its meaning and connotation vary across different methods. For example, EC from region \(X\) to region \(Y\) in the context of Granger causality refers to the importance of the history of \(X\) in predicting the activity of \(Y\), while EC in DCM is defined as the coupling parameters in a mechanistic state-space model. There is an ongoing debate about how these definitions relate to the underlying flow of information in the brain. In this study, we define EC as the magnitude of the neural response induced by a perturbation to a specific brain region. This definition aligns with the statistical definition of causality: \(X\) has a causal effect on \(Y\) if an externally applied perturbation of \(X\) can result in a significant change in \(Y\)[5, 35]. 
This definition is consistent with the experimental methods that infer EC by actually perturbing a region and observing the remote neural response, which is believed to recruit the underlying physical connections and reflect the information flow in the brain. The performance of EC inference relies on the ability of the ANN to predict the neural response after perturbations are applied. After training through one-step-ahead prediction, the ANN successfully learned the mapping between two consecutive signals in both the simulated dataset and real BOLD signals. More importantly, the trained ANN generalizes to perturbed, unseen inputs (out-of-distribution samples) (Fig. 2g). This excellent fitting ability comes from the great expressive power of over-parameterized ANN models, which has been extensively studied in the field of deep learning [36, 37]. Therefore, the perturbation-induced signal change predicted by the ANN is believed to be equivalent to the signal change in the real brain under a similar stimulation. In the future, the prediction ability of the ANN can be validated using concurrent stimulation and observation, such as concurrent TMS-fMRI.

Predicting the next state using only the current state requires the system to be nearly Markovian. In fMRI, the repetition time is usually long, so the current fMRI state contains nearly all the information needed to predict the next state, suggesting that fMRI signals are almost Markovian [38]. Thus, the present BOLD signal is sufficient to predict the one-step-ahead BOLD response. The prediction of EEG data may require many history steps, as the duration of one step is much shorter in EEG [38]. To adapt NPI to other data modalities, such as EEG data, the ANN architecture and the way perturbations are applied may need to be adjusted to utilize past information over more time steps.

The NPI framework inherits a major limitation of data-driven methods: it is data-hungry (Fig. 2j). Since small models usually have limited fitting and generalization ability, an ANN with a large number of parameters is needed to ensure strong generalization, which is the key to accurate inference. However, a large model also needs more data to be trained. It is necessary to find a trade-off in model size, such that the model has enough expressive power yet can be trained with a moderate amount of data. Developing inference methods that require less data is a future direction. While the MLP used in this study is one option for the prediction model, any model that is able to accurately describe the underlying brain network dynamics could potentially be used in the NPI framework [16, 39]. Prediction models with better data efficiency are needed. On the other hand, more prior knowledge from neuroscience may be incorporated to build surrogate models with fewer parameters that thus need less data.

### Insights from the human resting-state EBC

Macroscopic information flow plays a crucial role in processing sensory inputs and mediating cognitive functions such as attention, memory, and behavior. This information flow involves both feedforward processing and feedback signaling, with sensory information being transmitted to and further processed by higher-level brain regions before flowing back [40, 41]. Multimodal feedforward and feedback information flow and higher-level information integration occur in parallel. The NPI-inferred EBC provides a complete picture of this parallel information flow in the brain.
In the human EBC, the strengths of both excitatory and inhibitory ECs follow a log-normal distribution, indicating that most ECs are weak, with only a few being strong. This is consistent with the distribution of SC in many species, as EC is constrained by SC [23, 24]. In network science, this log-normal distribution is considered the result of a trade-off between minimizing wiring costs and energy consumption while maintaining efficient inter-regional communication [42].

The EBC has the merit of discriminating between excitatory and inhibitory ECs between brain regions. Our results show that the majority of ECs in the EBC are excitatory, with a larger maximum strength and averaged strength compared with inhibitory ECs. Additionally, the spatial distributions of excitatory and inhibitory ECs differ, with excitatory ECs being concentrated in local communities, such as within functional networks and within a hemisphere, while inhibitory ECs have a higher averaged strength across communities. The averaged strengths of ECs within and across functional networks also vary: excitatory ECs have a higher averaged strength within unimodal networks such as the visual and somatomotor networks, and inhibitory ECs have a higher averaged strength within transmodal networks, particularly within the frontoparietal and default mode networks. Moreover, the hierarchy of averaged excitatory and inhibitory connection strengths is similar to the large-scale cortical hierarchy, which runs from unimodal networks that process information from a single modality, such as the visual and motor networks, to transmodal networks that integrate information from various sources, such as the DMN [43]. This suggests that unimodal networks require extensive excitatory ECs to recurrently process primary information within the networks. Transmodal networks, by contrast, need inhibitory ECs to modulate and integrate information across various networks [44, 45, 46].

The DMN is located at one end of the cortical hierarchy and is known to process transmodal information unrelated to immediate sensory inputs [26, 29]. How the DMN receives information from, and transmits information to, other cortical regions is unclear. Our results reveal the EC within the core DMN regions with excitatory-inhibitory distinction, and they are partially consistent with previous DCM-based findings [47]. The mPFC receives excitatory ECs from the other three DMN regions but outputs inhibitory ECs to all of them. On the other hand, the mPFC sends excitatory ECs to a wide range of cortical regions, suggesting that it integrates information from other DMN regions and broadcasts it across the cortex, in line with the global broadcasting of information proposed in the global neuronal workspace theory of consciousness formation [30, 31].

### The relationship among SC, FC, and EBC

How static brain structure supports rich brain functionality is a central question in neuroscience. However, the structure-function relationship at the macroscopic scale remains poorly understood, due to the difficulty of observing the brain at high spatiotemporal resolution and the lack of suitable tools for EC inference. Traditional measures of brain networks, including SC and FC, are insufficient to describe the property that we are interested in: the actual inter-regional interactions, which should ideally be characterized as causal interactions with directionality, strength, and excitatory-inhibitory distinction.
So far, the SC derived from MRI cannot capture the directionality of connections, nor can it distinguish excitatory from inhibitory connections [2]. Likewise, the FC derived from statistical correlation does not describe information flow in the brain, since correlation is affected by input signals from other nearby regions and does not reflect the directionality of connections [48, 49].

NPI-inferred EC represents how one brain region causally influences another. This influence is a composite effect that depends not only on SC but also on brain dynamics and regional heterogeneity. After a perturbation, the response is expected to propagate along physical connections (SC), while the magnitude of the propagated response is modulated by the nonlinear brain dynamics as well as the current brain state.

The interpretation of NPI-inferred EC relies on the specific temporal and spatial scale of the observed data, as EC at a large spatiotemporal scale can be the integration of finer-scale connections. Regions at a large spatial scale may contain many sub-populations, with excitatory and inhibitory ECs among the sub-populations. The EC between two large regions is thus a composite effect after the cancellation of excitatory and inhibitory ECs among sub-populations. On a large temporal scale, EC can be the composite effect of indirect information flow that passes through other regions at a finer timescale. Therefore, EC should be considered at a specific spatiotemporal scale, as it changes with the scale of observation. In the NPI framework, the ANN learns brain dynamics from neural signals. The EC obtained by perturbing the ANN is thus the EC at the spatiotemporal scale at which the neural signals are sampled. For the RNN model, we observed the signals at the finest spatial scale and at a short timescale; the inferred EC is thus highly correlated with SC. The EBC inferred from BOLD signals, on the other hand, represents the EC at a large spatiotemporal scale, which may integrate the effects of the underlying SC, excitatory-inhibitory balance, neural dynamics, hemodynamics, and regional heterogeneity. Nevertheless, the obtained EBC is strongly correlated with whole-brain SC, suggesting a shared network topology between brain structure and function. A potential future direction is to investigate how SC and macroscopic brain dynamics interact to support EC.

Like EC, FC is also a composite effect that integrates various factors, but it does not reflect a directed influence. We calculate the input EC correlation and the output EC correlation and find that the input EC correlation explains FC better than EC itself. This deepens the interpretation of the FC network and suggests that FC results from both shared EC inputs and direct EC. It motivates a shift of focus from FC to EC when studying functional interactions.

### Future applications of the NPI framework

Although we apply NPI to resting-state fMRI data in this study, NPI is a versatile framework. Adjusting the ANN architecture and the paradigm of virtual perturbation allows NPI to be adapted to various data modalities at multiple spatiotemporal scales. For example, NPI can be applied to neural data spanning from individual neurons to neural populations and to large-scale EEG and fMRI. Integrating EC across multiple scales could potentially deepen our understanding of structure-function relationships and reveal the mechanisms through which high-level intelligence emerges from multi-scale connections, offering a more comprehensive understanding of brain function.
Besides neural signals, NPI also has the potential to be extended to other types of data, such as traffic flow and social network data, as long as they can be represented as time series. In clinical applications, NPI has the potential to uncover biomarkers for neurological diseases and guide the target selection in neurostimulation. Applying NPI to the neural signals of patients with neurological diseases can reveal the patients' EBC. Comparing the EBC of patients and healthy people helps in identifying reorganizations of information flow in the disease and providing mechanical biomarkers. As neural stimulation is transmitted along the information flow, NPI also aids in the selection of control nodes for personalized neurostimulation. Neurostimulation to specific brain regions has been increasingly used to treat brain diseases, such as subthalamic nucleus for Parkinson's disease [50] and ventral capsule/ventral striatum (VC/VS) region for depression [51]. The desired control node in specific neural disorders is often located in deep brain regions, which are hard to access for direct stimulation. In clinical practice, regions on the brain surface are often chosen as control regions to indirectly influence target regions in the deep brain. However, due to inter-individual differences in brain connectivity, selecting the optimal control region remains a challenge. NPI provides personalized EC and is thus useful in guiding the selection of the stimulation region. NPI also serves as a framework to test the effect of different neurostimulation paradigms. For the purpose of EC inference, we only perturbed one node at a time. Real neurostimulation may achieve a better performance when stimulating multiple brain regions at a time. NPI offers a convenient framework for predicting the effect of multi-region perturbation, as well as the effect of repeated stimulation and stimulation response under different brain states. ## 4 Methods ### The NPI framework The NPI framework consists of two steps: i) training an ANN to predict the whole-brain neural dynamics as a surrogate brain, and ii) applying perturbations to each input node of the trained ANN as virtual neurostimulation to brain regions. First, an ANN \(f(\cdot)\) is applied to model the regional neural dynamics, \[\hat{\mathbf{x}}_{t+1}=f(\mathbf{x}_{t},\theta),\] where \(\theta\) is the vector of all unknown parameters in the ANN model. \(\mathbf{x}_{t}\) is the vectorized fMRI data across the cortical areas at time step \(t\), and \(\hat{\mathbf{x}}_{t+1}\) is the predicted fMRI data at time step \(t+1\) by the ANN model. Notably, the ANN can be realized with a variety of network architectures as long as it has the sufficient expressive power to fit the whole-brain neural dynamics. In this study, we use a multi-layer perceptron (MLP) architecture to realize the ANN model. The number of units in the first layer and the last layer of ANN is set to the number of brain regions. The number of units in hidden layers can vary with the complexity of the neural data applied for inferring EC. In this study, a five-layer MLP with the number of units \(379,800,1000,1000,800,379\), and the \(ReLU\) activation function is used. The ANN model is trained by minimizing the one-step-ahead prediction error (predicting the BOLD signal at the next time step given the signal at the current time step). The fMRI data are organized as pairs of two consecutive time points, \(\mathbf{x}_{t}\) and \(\mathbf{x}_{t+1}\). 
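To make the surrogate-model step concrete, the following is a minimal sketch of the five-layer MLP and its one-step-ahead training, assuming PyTorch. The class and function names, learning rate, and number of epochs are illustrative and not from the original code, and omitting an activation after the output layer is an assumption; only the layer widths, ReLU, Adam, and the one-step-ahead loss follow the description in the text. The explicit training objective is formalized next.

```python
import torch
import torch.nn as nn

class SurrogateMLP(nn.Module):
    """Hypothetical sketch of the ANN surrogate f(x_t; theta) ~ x_{t+1}.

    Layer widths 379-800-1000-1000-800-379 and ReLU follow the text;
    leaving the output layer without an activation is an assumption.
    """
    def __init__(self, n_regions=379, hidden=(800, 1000, 1000, 800)):
        super().__init__()
        widths = (n_regions, *hidden, n_regions)
        layers = []
        for w_in, w_out in zip(widths[:-1], widths[1:]):
            layers += [nn.Linear(w_in, w_out), nn.ReLU()]
        self.net = nn.Sequential(*layers[:-1])  # drop the trailing ReLU

    def forward(self, x_t):
        return self.net(x_t)

def train_one_step_ahead(model, bold, epochs=100, lr=1e-4):
    """bold: (T, n_regions) tensor of parcellated BOLD signals for one subject."""
    x_t, x_next = bold[:-1], bold[1:]            # consecutive-time-point pairs
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.norm(model(x_t) - x_next)   # one-step-ahead prediction error
        loss.backward()
        opt.step()
    return model
```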
The objective function \(\mathcal{L}(\theta)\) can be explicitly formulated as: \[\mathcal{L}(\theta)=\|f(\mathbf{x}_{t},\theta)-\mathbf{x}_{t+1}\|.\] The Adam optimizer is used to learn the network parameters \(\theta\). The ANN model is trained separately for each subject using all four runs of rs-fMRI data. The data from each run are divided into numerous input-output pairs, where the input is the signal at a particular time point and the output is the signal at the next time point. The training pairs from all four runs are combined, and 5% of these pairs are randomly extracted as the testing set for that specific subject.

Once the ANN is trained, we apply virtual perturbations to each input node of the ANN to infer the EC of brain regions. The perturbation is implemented as a selective increase of the BOLD signal at one specific region. By observing the induced response at all other regions, the one-to-all EC is inferred. The EC from region \(A\) to region \(B\) can thus be expressed as: \[\delta_{A\to B}=\mathbb{E}[(B_{t+1}\mid A_{t}=A_{t}+\Delta)-(B_{t+1}\mid A_{t}=A_{t})],\] where \(\Delta\) is the strength of the perturbation applied to region \(A\). The mapping from the signal at time \(t\) to the signal at time \(t+1\) is realized by the trained ANN model. The perturbation is applied to only one region at a time, while the signals of all other regions remain unchanged. When applying NPI to the real BOLD signals, we set \(\Delta\) to half of the standard deviation of the BOLD activity. We also test different perturbation strengths and find that the inference performance is robust to the perturbation strength.

### Synthetic data from RNN model

To validate the performance of NPI, we generate synthetic data using RNN models with fixed ground-truth SC. We denote the state of the \(i\)th neuron as \(x_{i}\), and \(x=(x_{1},...,x_{n})\) is an \(n\)-dimensional vector that represents the states of all \(n\) neurons in the network. The dynamics of \(x\) are given by the following equation: \[\frac{d}{dt}x=-x+Wh(x)+\sigma v,\] where \(W\) is the connectivity matrix (SC) and \(h(\cdot)\) is the tanh activation function. The entries of the weight matrix \(W\) are independent, identically distributed centered Gaussians \(\mathcal{N}(0,n^{-1/2})\). The initial state is sampled from a Gaussian distribution \(\mathcal{N}(0,1)\). The parameter \(\sigma\) is the standard deviation of the Gaussian noise term, with \(v\sim\mathcal{N}(0,1)\). The neural dynamics are simulated with the Euler method with \(dt=0.01\). The perturbations applied to the model are implemented in the same way as in NPI: only the signal of the source region is increased by a particular value, while the signals of all other regions remain the same as in the unperturbed state. When testing the prediction ability of the ANN, the input-output pairs in the training data are constructed from consecutive pairs of the generated neural signals. The inputs in the test dataset are constructed by applying a perturbation to signals randomly chosen from the generated neural signals, and the corresponding outputs are then obtained by applying the RNN dynamics. These test inputs are out-of-distribution samples that do not lie in the distribution of the training data, which is consistent with the situation when applying NPI to real BOLD signals.

### Empirical data and the brain atlas

The Human Connectome Project (HCP) S1200 release [19], which includes resting-state fMRI (rs-fMRI) data from 800 subjects, is used in this study.
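As a sketch of the virtual-perturbation step defined above, the following snippet (again assuming PyTorch and a trained surrogate such as the one sketched earlier; the function name and defaults are illustrative) perturbs one region at a time by half of its BOLD standard deviation and averages the induced one-step response over the observed states, giving an estimate of \(\delta_{A\to B}\) for every pair of regions.

```python
import torch

@torch.no_grad()
def infer_ec(model, bold, delta_scale=0.5):
    """Estimate delta_{A->B} for all region pairs with a trained surrogate.

    bold: (T, n) tensor of observed signals; delta_scale * std gives the
    perturbation strength for each region (half the std, as in the text).
    """
    T, n = bold.shape
    delta = delta_scale * bold.std(dim=0)        # per-region perturbation strength
    baseline = model(bold)                       # unperturbed one-step predictions
    ec = torch.zeros(n, n)
    for a in range(n):                           # perturb one region at a time
        perturbed = bold.clone()
        perturbed[:, a] += delta[a]
        response = model(perturbed) - baseline   # induced change at every region
        ec[a] = response.mean(dim=0)             # average over observed states
    return ec                                    # ec[a, b] ~ EC from region a to region b
```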
The preprocessing of fMRI is based on the multi-modal inter-subject registration (MSMAll) [52]. The rs-fMRI data are recorded with a TR of 0.72 s. Each subject's data comprise two sessions on separate days, and each day contains two runs of 15-min rs-fMRI, resulting in four runs in total. Every pair of consecutive time points is organized as an input-output pair. The pairs from the four runs are mixed to train the model. When testing prediction performance, the testing set comprises 5% of the shuffled input-output pairs, sampled at random, and the remaining data are used to train the model. The brain is parcellated into 379 regions according to the Multi-Modal Parcellation atlas (MMP 1.0) [53], consisting of 180 cortical regions in each hemisphere and 19 subcortical regions. The analysis is based on the EC among the 360 cortical regions, while the subcortical regions are included in the training to reduce the bias in EC inference caused by unobserved regions. The parcellation is performed by averaging the BOLD signals across the voxels in each cortical region. To validate the robustness of the NPI framework in inferring EC across spatial scales of parcellation, we replicate the EBC results with the AAL atlas, a parcellation with 116 regions [54].

### Construction of the whole-brain SC, FC, and EC

The resting-state fMRI time series is preprocessed according to the HCP minimal preprocessing pipeline [55]. Denoising is performed using ICA-FIX, which removes structured noise by combining independent component analysis with the FSL tool FIX. The denoised data are further processed using the Python package nilearn [56], which band-pass filters the data between 0.01 and 0.1 Hz. FC is calculated as Pearson's correlation coefficient between the time series of each pair of brain regions. The FC of one subject is obtained by averaging the FC of the four runs. The structural connectivity constructed by Demirtas et al. [57] is used. It is derived using FSL's bedpostx and probtrackx2 analysis workflows, which count the number of streamlines intersecting the white matter and gray matter. The SC matrix is scaled to the range from 0 to 1 and then log-transformed. The EC for each subject is obtained from the NPI framework trained using the four runs of fMRI of that subject. It is then averaged across the 800 subjects and scaled so that the strongest connection has strength one.

### Parcellation of functional networks

The parcellated 360 cortical regions are assigned to seven functional networks, according to the resting-state networks defined in Yeo et al. [22]. The seven functional networks are the visual network (VIS), the somatomotor network (SOM), the dorsal attention network (DAN), the ventral attention network (VAN), the limbic network (LIM), the frontoparietal control network (FPN), and the default mode network (DMN). Each region is assigned to the functional network to which the largest number of its voxels belong. We place the seed in the left-hemisphere core brain region of each of the seven functional networks (seeds are shown in Table S2). We then calculate the seed-based FC using Pearson's correlation between the seed region and all other regions.

### EC between DMN and other cortical regions

We analyze the EC within the core DMN regions and the EC between a DMN region and the rest of the brain. Since the MMP atlas does not contain all the core DMN regions, we extract the BOLD signals in four DMN regions according to the MSDL parcellation [58]: mPFC, LIPC, RIPC, and PCC, with coordinates in Table S3.
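For completeness, a small NumPy sketch of the FC computation and of the input/output EC correlations used in the Results (Fig. 6) is given below. It assumes already band-passed, denoised regional signals and an EBC matrix whose entry \((i,j)\) is the EC from region \(i\) to region \(j\); the function names are illustrative and not from the original code.

```python
import numpy as np

def functional_connectivity(bold):
    """bold: (T, n) array of band-passed regional signals; FC is Pearson correlation."""
    return np.corrcoef(bold.T)

def ec_correlations(ebc):
    """Input (column-wise) and output (row-wise) EC correlations of the EBC.

    ebc[i, j] is assumed to be the EC from region i to region j, so columns
    collect the inputs to a region and rows collect its outputs.
    """
    input_corr = np.corrcoef(ebc.T)   # similarity of inputs to each pair of regions
    output_corr = np.corrcoef(ebc)    # similarity of outputs from each pair of regions
    return input_corr, output_corr
```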
These four signals are combined with the BOLD signals from the 379 regions of the MMP parcellation, resulting in 383-dimensional signals. The ANN is then trained on the combined signals. After the model training, the EC within the DMN and the EC between the DMN and other regions are extracted for further investigation.

Acknowledgments. This work is supported by the National Natural Science Foundation of China (62001205), National Key R&D Program of China (2021YFF1200804), Shenzhen Science and Technology Innovation Committee (20200925155957004, KCXFZ2020122117340001, SGDX2020110309280100), Shenzhen Key Laboratory of Smart Healthcare Engineering (ZDSYS20200811144003009), Guangdong Provincial Key Laboratory of Advanced Biomaterials (2022B1212010003). We thank Prof. Changfeng Wu, Prof. Jing Jiang, Prof. Kai Du, Prof. Yu Mu, Prof. Dayong Jin, Prof. Haiyan Wu, Chen Wei, Shengyuan Cai, Kaining Peng, and Xin Xu for stimulating discussions and advice.
2309.05612
Flag-Shaped Blockers of 123-Avoiding Permutation Matrices
A blocker of $123$-avoiding permutation matrices refers to the set of zeros contained within an $n\times n$ $123$-forcing matrix. Recently, Brualdi and Cao provided a characterization of all minimal blockers, which are blockers with a cardinality of $n$. Building upon their work, a new type of blocker, flag-shaped blockers, which can be seen as a generalization of the $L$-shaped blockers defined by Brualdi and Cao, are introduced. It is demonstrated that all flag-shaped blockers are minimum blockers. The possible cardinalities of flag-shaped blockers are also determined, and the dimensions of subpolytopes that are defined by flag-shaped blockers are examined.
Megan Bennett, Lei Cao
2023-09-11T16:53:25Z
http://arxiv.org/abs/2309.05612v1
# Flag-shaped blockers of 123-avoiding permutation matrices

###### Abstract.

A blocker of 123-avoiding permutation matrices refers to the set of zeros contained within an \(n\times n\) 123-forcing matrix. Recently, Brualdi and Cao provided a characterization of all minimal blockers, which are blockers with a cardinality of \(n\). Building upon their work, a new type of blocker, flag-shaped blockers, which can be seen as a generalization of the \(L\)-shaped blockers defined by Brualdi and Cao, are introduced. It is demonstrated that all flag-shaped blockers are minimum blockers. The possible cardinalities of flag-shaped blockers are also determined, and the dimensions of subpolytopes that are defined by flag-shaped blockers are examined.

\({}^{*}\) Corresponding Author, Department of Mathematics, Halmos College of Arts & Sciences and Farquhar Honors College, Nova Southeastern University, Fort Lauderdale, FL 33328, USA ([email protected]). Supported by the NSU PanSGA grant, Farquhar Honors College Travel Funding, and NSU prdg-334925. \({}^{**}\) Department of Mathematics, Halmos College of Arts & Sciences, Nova Southeastern University, Fort Lauderdale, FL 33328, USA ([email protected]). Supported by NSU pfrdg-334925.

The six cyclic-Hankel permutation matrices of a \(6\times 6\) matrix are illustrated using letters \(a,b,c,d,e,f\) below:

\[\left[\begin{array}{c|c|c|c|c|c}a&b&c&d&e&f\\ \hline b&c&d&e&f&a\\ \hline c&d&e&f&a&b\\ \hline d&e&f&a&b&c\\ \hline e&f&a&b&c&d\\ \hline f&a&b&c&d&e\\ \end{array}\right].\]

**Lemma 1.1** (Lemma 2.2 in [3]).: _The number of \(0\)'s in an \(n\times n\) \(123\)-forcing \((0,1)\)-matrix is at least \(n\). An \(n\times n\) \(123\)-forcing \((0,1)\)-matrix with exactly \(n\) \(0\)'s contains exactly one \(0\) from each cyclic-Hankel permutation matrix._

A particularly important family of blockers with exactly \(n\) zeros is the \(L\)-shaped blockers, sets of \(n\) adjacent positions denoted \(L_{n}(s,r)\), where \(s\) corresponds to the width of the blocker, \(r\) corresponds to the height, and \(r+s=n+1\). Clearly, an \(n\times n\) \((0,1)\)-matrix \(A\) with a row or column of all \(0\)'s is \(123\)-forcing, since there do not exist any permutation matrices \(P\leq A\). If the zeros are not contained in one row or one column, an \(L\)-shaped blocker must contain either the \((1,n)\) or the \((n,1)\) position. In [3], it was shown that all blockers with exactly \(n\) zeros can be obtained from \(L\)-shaped blockers by shifting some zeros along the cyclic-Hankel diagonals. The following examples illustrate these results.

**Example 1.2**.: For \(n=6\), one possible \(L\)-shaped blocker is \[L_{6}(4,3)=\left[\begin{array}{c|c|c|c|c|c}1&1&0&0&0&0\\ \hline 1&1&1&1&1&0\\ \hline 1&1&1&1&1&0\\ \hline 1&1&1&1&1&1\\ \hline 1&1&1&1&1&1\\ \hline 1&1&1&1&1&1\\ \end{array}\right].\] Every permutation matrix \(P\leq L_{6}(4,3)\) contains one of the two \(1\)'s from row \(1\), one of the three \(1\)'s from column \(6\), and then necessarily one of the \(1\)'s from the \(2\times 3\) submatrix formed by rows \(2\) and \(3\) and columns \(3,4\), and \(5\), as given by the Frobenius-Konig theorem [5], thereby resulting in a \(123\)-pattern. Thus \(A\) is a \(123\)-forcing matrix; equivalently, \(A\) blocks all \(6\times 6\) \(123\)-avoiding permutation matrices.
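For small \(n\), the blocking property in Example 1.2 can be verified by brute force. The following short Python sketch (not part of the paper; written here only for illustration) encodes a permutation matrix by its one-line notation, tests \(123\)-avoidance as the absence of an increasing subsequence of length three, and checks that every \(123\)-avoiding \(6\times 6\) permutation matrix meets the zero set of \(L_{6}(4,3)\) at least once.

```python
from itertools import permutations

def is_123_avoiding(perm):
    # perm[i] = column of the 1 in row i (0-indexed); a 123-pattern corresponds
    # to an increasing subsequence of length 3 in this one-line notation
    n = len(perm)
    return not any(perm[i] < perm[j] < perm[k]
                   for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n))

# 123-avoiding permutations are counted by the Catalan numbers: C_6 = 132
assert sum(is_123_avoiding(p) for p in permutations(range(6))) == 132

# zero positions of L_6(4,3), 0-indexed: (row 0, columns 2..5) and (rows 1-2, column 5)
blocker = {(0, 2), (0, 3), (0, 4), (0, 5), (1, 5), (2, 5)}

# every 123-avoiding permutation matrix intersects the blocker, so L_6(4,3) is 123-forcing
assert all(any((i, p[i]) in blocker for i in range(6))
           for p in permutations(range(6)) if is_123_avoiding(p))
```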
Another example of a \(123\)-forcing matrix with \(6\)\(0\)'s obtained from \(L_{6}(4,3)\) by shifting some zeros along the cyclic-Hankel diagonals is \[\left[\begin{array}{c|c|c|c|c|c}1&1&1&0&0&0\\ \hline 1&0&1&1&1&0\\ \hline 1&1&1&1&1&1\\ \hline 1&1&1&1&0&1\\ \hline 1&1&1&1&1&1\\ \hline 1&1&1&1&1&1\\ \end{array}\right].\] The main purpose of this paper is to characterize \(123\)-forcing matrices (equivalently, blockers of \(123\)-avoiding permutation matrices) \(A\) with \(0\)'s in certain shapes. We now briefly summarize the content of this paper. In Section 2, we define flag-shaped blockers of \(123\)-avoiding permutation matrices and show that they are minimum blockers. In Section 3, we give all possible cardinalities (number of zeros) of an \(n\times n\) flag-shaped blocker of 123-avoiding permutation matrices and propose a conjecture that any \(n\times n\) minimum blocker of 123-avoiding permutation matrices contains at least \(n\) zeros and at most \(rs\) zeros, where \(|r-s|\leq 1\) and \(r+s=n+1.\) In Section 4, we explore the dimensions of the subpolytopes of a 123-avoiding polytope determined by a flag-shaped blocker. ## 2. Defining Flag-Shaped Blockers A flag-shaped blocker of an \(n\times n\) matrix is a set of adjacent positions, denoted \(B_{n}(m,t)\), in the following form. \[\begin{array}{c}\includegraphics[width=142.26378pt]{images/123-avoiding}\\ \includegraphics[width=142.26378pt]{images/123-avoiding}\\ \includegraphics[width=142. **Lemma 2.1**.: _Let \(Q\) be a set of positions of an \(n\times n\) matrix. If \(Q\) is a flag-shaped blocker \(B_{n}(m,t)\) with cardinality \(n+t(n-m)\), where \(0\leq t\leq m-1\) and \(1\leq m\leq n\), then \(Q\) is a blocker of all \(n\times n\)\(123\)-avoiding permutation matrices._ Proof.: Without loss of generality, we consider a \(10\times 10\) matrix with the flag-shaped blocker \(B_{n}(7,3)\) for illustrative purposes, though the general proof follows in the same manner. We know that if there exists a \(123\)-avoiding permutation matrix that does not intersect \(Q\), then it must contain one of the positions in yellow submatrix in the below matrix. \[\begin{array}{|c|c|c|c|c|c|c|c|c|c|}\left[\begin{array}{c|c|c|c|c|c|c|c|c|}a&b&c &d&e&f&g&h&i&j\\ \hline b&c&d&e&f&g&h&i&j&a\\ \hline c&d&e&f&g&h&i&j&a&b\\ \hline d&e&f&g&h&i&j&a&b&c\\ \hline e&f&g&h&i&j&a&b&c&d\\ \hline f&g&h&i&j&a&b&c&d&e\\ \hline g&h&i&j&a&b&e&d&e&f\\ \hline h&i&j&a&b&c&d&e&f&g\\ \hline i&j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h&i\\ \end{array}\right]\] We also know that at least one of the green positions in the above figure must be used to construct the permutation matrix, as the set of green and red positions forms a blocker given by the Frobenius-Konig theorem, which intersects every permutation matrix. The yellow submatrix in the upper right corner shown in the figure below will always contain more rows, \(n-m+1\), than columns, \(n-m\). Thus, it is necessary that we use at least one position from the green submatrix in the upper left corner in the figure below to construct the permutation matrix. 
\[\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}\left[\begin{array}{cccc|c|c|c|c|c|c|c|}a&b&c &d&e&f&g&h&i&j\\ b&c&d&e&f&g&h&i&j&a\\ c&d&e&f&g&h&i&j&a&b\\ \hline e&f&g&h&i&j&a&b&c&d\\ \hline f&g&h&i&j&a&b&c&d&e\\ \hline g&h&i&j&a&b&c&d&e&f\\ \hline h&i&j&a&b&c&d&e&f&g\\ \hline i&j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h&i\\ \end{array}\right]\] However, this means the permutation matrix constructed cannot be \(123\)-avoiding without intersecting the blocker, as at least one element from each of the yellow submatrices below must be utilized when constructing a permutation matrix that does not intersect \(Q\). Thus, \(Q\) is a blocker. \[\left[\begin{array}{c|cccc|cccc \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) \\ \hline \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) \\ \hline \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) \\ \hline \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) \\ \hline \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ \hline \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) \\ \hline \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) \\ \hline \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) \\ \hline \end{tabular} Exactly \(n-t-1\) out of \(n\) columns have been accounted for, and elements in these columns that we use to construct the \(123\)-avoiding permutation matrix will never intersect the blocker since there are no positions of the blocker in these columns. This implies that there are exactly \(n-t-1\) rows in the matrix that pose no issues when constructing the permutation matrix. Now, all that is left to consider is row \(1\) and the last \(t\) rows of the matrix, for a total of \(t+1\) rows. By definition, the last \(t\) rows contain no positions of the blocker, so it is always possible to use \(t\) positions from northeast to southwest for the portion of the \(123\)-avoiding permutation matrix in these last \(t\) rows. So far, the \(123\)-avoiding permutation matrix being constructed is disjoint from the blocker. However, the last element of the permutation matrix must come from row \(1\). This element will intersect the blocker, so there exists a permutation matrix that intersects the blocker at most once, as shown in the following examples, where the yellow positions represent the intersections of the blocker and the \(123\)-avoiding permutation matrix. 
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) \\ \hline \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) \\ \hline \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) \\ \hline \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) \\ \hline \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ \hline \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) \\ \hline \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) \\ \hline \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) \\ \hline \end{tabular} Thus, for any element \(b\) in the flag-shaped blocker that resides in the first row of an \(n\times n\) matrix, there exists at least one \(123\)-avoiding permutation matrix that intersects the blocker at most once at \(b\). By definition, this means that every element of the blocker in the first row of the matrix is a necessary position of the blocker. **Theorem 2.3**.: _Let \(Q\) be a set of positions of an \(n\times n\) matrix. If \(Q\) is a flag-shaped blocker \(B_{n}(m,t)\) with cardinality \(n+t(n-m)\), where \(0\leq t\leq m-1\) and \(1\leq m\leq n\), then \(Q\) is a minimum blocker of all \(n\times n\)\(123\)-avoiding permutation matrices._ Proof.: Without loss of generality, we consider a \(10\times 10\) matrix with the flag-shaped blocker \(B_{n}(7,3)\) for illustrative purposes, though the general proof follows in the same manner. By Lemma 2.2, every element of \(Q\) in the first row of the matrix is necessary for the minimum blocker, so we can consider the following matrix. \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) \\ \hline \hline \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) \\ \hline \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) \\ \hline \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) \\ \hline \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ \hline \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) \\ \hline \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) \\ \hline \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) \\ \hline \end{tabular} We can use the green \(j\) to attempt to construct a 123-avoiding permutation matrix, allowing us to focus on the \(9\times 9\) submatrix at the lower left corner. In this submatrix, we have another flag-shaped blocker. By repeating the above steps, we can show that all the blocker elements in row 2 must be included in the minimum blocker. This process of reducing the size of the flag-shaped blocker can be repeated until we are left with an \(L\)-shaped minimal blocker, all positions of which are necessary. Thus, all positions of \(Q\) are necessary. Since removing any element of \(Q\) makes it no longer a blocker, \(Q\) is a minimum blocker. ## 3. 
The Cardinality of Flag-Shaped Blockers Flag-shaped blockers have a cardinality of \(n\) only when \(m=n\) or \(m=1\). More generally, the cardinality of a flag-shaped minimum blocker is given by the expression \(n+t(n-m)\), where \(0\leq t\leq m-1\) and \(1\leq m\leq n\). With this expression, we are able to determine which cardinalities can and cannot be achieved by a flag-shaped blocker. **Theorem 3.1**.: _Let \(n,m,\) and \(t\) be nonnegative integers, such that \(0\leq t\leq m-1\) and \(1\leq m\leq n\), and let \(p:=n+t(n-m)\), where \(n\leq p\leq n+(\lceil\frac{n}{2}\rceil-1)(n-\lceil\frac{n}{2}\rceil)\). There exists a flag-shaped blocker \(B_{n}(m,t)\) with cardinality \(p\) if and only if \(p-n\leq m-1\) or if \(p-n\) is a composite number._ Proof.: \(\Rightarrow\) Suppose there exists a flag-shaped blocker with cardinality \(p\). If \(p-n\) is a prime number greater than \(m-1\), then this implies that \(t(n-m)\) is also prime and greater than \(m-1\). However, \(t(n-m)\) can only be prime if either \(t=1\) or \(n-m=1\). The requirement that \(t(n-m)>m-1\) prevents either case from occurring. Thus, \(p-n=t(n-m)\) cannot be prime, a contradiction. This implies that if there exists a flag-shaped blocker with cardinality \(p\), then either \(p-n\leq m-1\) is true or \(p-n\) is a composite number. \(\Leftarrow\) For \(n-m=1\), \(p=n+t\) and \(0\leq t\leq m-1\), so \[n\leq p\leq n+m-1\] \[0\leq p-n\leq m-1,\] and all conditions on \(p,n,m\), and \(t\) for a flag-shaped blocker, as defined in Section 2, are met. This means that when \(p-n\leq m-1\), there does exist a flag-shaped blocker with cardinality \(p\). Now, suppose that \(p-n\) is a composite number. If \(n-m\geq 2\) and \(t\geq 2\), then \(p-n=t(n-m)\) must be a composite number such that \(1\leq m\leq n\) and \(0\leq t\leq m-1\). All conditions on \(p,n,m\), and \(t\) for a flag-shaped blocker are satisfied, so there does exist a flag-shaped blocker with cardinality \(p\). Since we know the range of cardinalities for flag-shaped blockers, we briefly expand our view to consider all minimum blockers, not simply flag-shaped blockers. Because all minimal blockers are minimum blockers, the lower bound for the cardinality of a minimum blocker of an \(n\times n\) matrix is \(n\), as each letter of the Hankel-cyclic decomposition must appear at least once in the blocker. Determining the upper bound is more complicated, since the shapes of all minimum blockers have not yet been fully characterized. Nonetheless, we provide the following conjecture for the upper bound of the cardinality of all minimum blockers and show that flag-shaped blockers can achieve this upper bound. **Conjecture 3.2**.: _The upper bound for the cardinality of a minimum blocker of all \(n\times n\)\(123\)-avoiding permutation matrices is \(r\times s\), where \(|r-s|\leq 1\) and \(r+s=n+1\)._ There exist minimum flag-shaped blockers with exactly \(r\times s\) positions. These can occur when an adjacent set of positions forming a blocker given by the Frobenius-Konig theorem, such that \(|r-s|\leq 1\), is placed at the northwest or southeast corner of a matrix, since these blockers are special cases of flag-shaped blockers, as described in Section 2. The upper bound can also be expressed in terms of only \(m\) and \(n\) by denoting \(m=\lceil\frac{n}{2}\rceil\) and \(t=m-1=\lceil\frac{n}{2}\rceil-1\) for a maximum cardinality of \(n+(\lceil\frac{n}{2}\rceil-1)(n-\lceil\frac{n}{2}\rceil)\). 
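The admissible cardinalities can also be enumerated directly from the formula \(n+t(n-m)\). The following small Python sketch (illustrative only, not from the paper) lists them for a given \(n\) and checks that the largest value matches the bound \(n+(\lceil\frac{n}{2}\rceil-1)(n-\lceil\frac{n}{2}\rceil)\) discussed above.

```python
def flag_cardinalities(n):
    # all values n + t*(n - m) over the admissible range 1 <= m <= n, 0 <= t <= m - 1
    return sorted({n + t * (n - m) for m in range(1, n + 1) for t in range(m)})

def conjectured_max(n):
    # n + (ceil(n/2) - 1) * (n - ceil(n/2)); equals r*s with |r - s| <= 1, r + s = n + 1
    m = -(-n // 2)  # ceil(n/2)
    return n + (m - 1) * (n - m)

for n in (6, 10):
    cards = flag_cardinalities(n)
    print(n, cards)
    assert max(cards) == conjectured_max(n)  # 12 for n = 6, 30 for n = 10
```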
**Example 3.3**.: _Minimum blockers of \(10\times 10\) matrices with cardinality \(r\times s=30\)._ \[\begin{array}{|c|cccc|c|c|c|c|c|c|c|c|}\hline a&b&c&d&e&f&g&h&i&j\\ \hline b&c&d&e&f&g&h&i&j&a\\ \hline c&d&e&f&g&\bar{h}&i&j&a&b\\ \hline d&e&f&g&h&i&j&a&b&c\\ \hline e&f&g&h&i&j&a&b&c&d\\ \hline f&g&h&i&j&a&b&c&d&e\\ \hline g&h&i&j&a&b&c&d&e&f\\ \hline h&i&j&a&b&c&d&e&f&g\\ \hline i&j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h&i\\ \hline\end{array}\quad\mbox{and}\quad\begin{bmatrix}a&b&c&d&e&f&g&h&i&j\\ \hline b&c&d&e&f&g&h&i&j&a\\ \hline c&d&e&f&g&h&i&j&a&b\\ \hline d&e&f&g&h&i&j&a&b&c&d\\ \hline e&f&g&h&i&j&a&b&c&d&e\\ \hline f&g&h&i&j&a&b&c&d&e&f\\ \hline g&h&i&j&a&b&c&d&e&f&g\\ \hline i&j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h&i\\ \hline\end{bmatrix}\] Furthermore, we are able to determine which cardinalities are unable to be obtained by flag-shaped blockers by using the sieve of Eratosthenes to find all values of \(p-n\) that are prime, where \(m-1<p-n\leq rs-n\) and \(|r-s|\leq 1\). The last thing we will prove in this section is that Conjecture 3.2 is true for the \(n=3\) case. That is, we can show that the upper bound of a minimum flag-shaped blocker of a \(3\times 3\) matrix is \(r\times s=2\times 2=4\). Proof.: Let \(A\) be a \(3\times 3\)\((0,1)\)-matrix avoiding \(123\)-permutation. If the permanent of \(A\) is \(0\), then \(A\) contains a rectangular \(r\times s\) zero submatrix with \(r+s=4.\) According to Frobenius-Konig theorem, any zero not in the \(r\times s\) submatrix is not necessary, so a minimum blocker of \(A\) at most contains four zeros when \(r=s=2.\) If the permanent of \(A\) is nonzero, then \(A\) contains three \(1\)'s on the diagonal. \[\left[\begin{array}{c|c}1&&\\ \hline&1&\\ \hline&&1\end{array}\right]\] If there is a pair of \(1^{\prime}\)s symmetric with respect to the diagonal, we can construct a \(123\)-avoiding permutation matrix by replacing the two \(1^{\prime}\)s on the diagonal by the pair of \(1^{\prime}\)s. \[\left[\begin{array}{c|c}1&&\\ \hline&1&\\ \hline&1&\\ \hline\end{array}\right]\Rightarrow\left[\begin{array}{c|c}1&&\\ \hline&1&\\ \hline 1&\\ \hline\end{array}\right]\] There are three pairs of elements and we just need to block one element in each pair, so a minimum blocker has cardinality \(3\). ## 4. The Polytope Generated by \(123\)-Avoiding Permutation Matrices One of our motivations for studying blockers of \(123\)-avoiding permutation matrices is to gain a better understanding of the polytope generated by \(123\)-avoiding permutation matrices, denoted \(\Omega_{n}(\overline{123})\), whose dimension is \((n-1)^{2}\). In [4], Brualdi and Cao show that each minimal blocker of \(n\times n\)\(123\)-avoiding permutation matrices determines a facet of \(\Omega_{n}(\overline{123})\), and a face of \(\Omega_{n}(\overline{123})\) lives in dimension \((n-1)^{2}-1\). We seek to learn more about the dimension of the faces of \(\Omega_{n}(\overline{123})\) determined by flag-shaped blocker. The dimension of a face of \(\Omega_{n}(\overline{123})\) determined by a minimum blocker is equivalent to the number of linearly independent \(123\)-avoiding permutation matrices that intersect the blocker exactly once, and no more. We present an inductive argument to find a more precise upper bound for the dimension. 
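The rank interpretation above can be checked computationally for small cases before turning to the inductive argument. The sketch below (Python with NumPy, illustrative only) enumerates the \(123\)-avoiding permutation matrices that meet a given blocker exactly once and computes the dimension of their span; for the \(L\)-shaped blocker \(L_{6}(4,3)\) of Example 1.2, Corollary 4.8 below predicts \((n-1)^{2}=25\) linearly independent such matrices.

```python
import numpy as np
from itertools import permutations

def face_dimension(n, blocker):
    """Rank of the span of 123-avoiding permutation matrices meeting `blocker` exactly once."""
    vecs = []
    for perm in permutations(range(n)):
        has_123 = any(perm[i] < perm[j] < perm[k] for i in range(n)
                      for j in range(i + 1, n) for k in range(j + 1, n))
        if has_123:
            continue
        if sum((i, perm[i]) in blocker for i in range(n)) == 1:
            P = np.zeros((n, n))
            for i, j in enumerate(perm):
                P[i, j] = 1
            vecs.append(P.ravel())
    return np.linalg.matrix_rank(np.array(vecs))

# zero positions of the L-shaped blocker L_6(4,3), 0-indexed
L = {(0, 2), (0, 3), (0, 4), (0, 5), (1, 5), (2, 5)}
print(face_dimension(6, L))  # expected to equal (6 - 1)**2 = 25 by Corollary 4.8
```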
**Lemma 4.1**.: _Given a flag-shaped minimum blocker \(B_{n}(m,t)\) of all \(n\times n\)\(123\)-avoiding permutation matrices with \(m=n-1\), there is a \(t\times 1\) submatrix of adjacent positions containing the \((n,n)\) position that cannot be used to construct a \(123\)-avoiding permutation matrix that intersects the blocker exactly once._ Proof.: Consider an \(n\times n\) matrix containing a flag-shaped blocker such that \(m=n-1\). Our claim is that the elements at the intersection of column \(n\) and the last \(t\) rows cannot be used to construct any \(123\)-avoiding permutation matrix that intersects the blocker exactly once. Suppose we use any position from the intersection of column \(n\) and the last \(t\) rows to attempt to construct a \(123\)-avoiding permutation matrix that intersects the blocker exactly once. Now, consider the submatrix formed by deleting the column and row in which this position resides. We must utilize \(t-1\) additional positions from the last \(t-1\) rows of the submatrix to attempt to construct a \(123\)-avoiding permutation matrix. The specific positions we use are not important in this case, so long as they do not form a \(123\)-pattern with the first position we chose from column \(n\). We can now delete the last \(t-1\) rows from the submatrix. Notice that the new submatrix is not square, as it has \(t-1\) more columns than rows. No matter which \(t-1\) columns we delete to form an \((n-t)\times(n-t)\) submatrix, we can never construct a \(12\)-avoiding permutation matrix in this submatrix without intersecting the blocker more than once. This is due to the fact that the remaining flag portion of the blocker will always form at least an \((n-m+1)\times 2\) submatrix with at least two blocker positions on each full diagonal of the \((n-t)\times(n-1)\) submatrix. Thus, this \((n-t)\times(n-1)\) submatrix must contain a \(12\)-pattern. However, this \(12\)-pattern paired with the position from column \(n\) forms a \(123\)-pattern. Thus, we cannot construct a \(123\)-avoiding permutation matrix using any position from the intersection of column \(n\) and the last \(t\) rows. **Theorem 4.2**.: _Given a flag-shaped minimum blocker \(B_{n}(m,t)\) of all \(n\times n\)\(123\)-avoiding permutation matrices, there is a \(t\times(n-m)\) submatrix of adjacent positions containing the \((n,n)\) position that cannot be used to construct a \(123\)-avoiding permutation matrix that intersects the blocker exactly once._ Proof.: Define \(p:=n-m\). Lemma 4.1 describes the case where \(p=1\). Now, suppose that when \(2\leq p\leq k\), there are \(pt\) positions that cannot be used to construct a \(123\)-avoiding permutation matrix that intersects the blocker exactly once. Suppose \(p=k+1\). Consider any of the \(t\) positions of an \(n\times n\) matrix in the intersection of the \(n\)th column and the last \(t\) rows. In order to construct a \(123\)-avoiding permutation matrix using any of these positions, a necessary condition is that we can construct a \(123\)-avoiding permutation matrix in the submatrix obtained by deleting the row and column the position that intersects the blocker only once resides in. However, the \((n-1)\times(n-1)\) submatrix we obtain contains a flag-shaped blocker (perhaps with more blocker positions than necessary) with \(p=k\). Then by the induction hypothesis, there are \(kt\) positions we cannot use to construct a \(123\)-avoiding permutation matrix in the \((n-1)\times(n-1)\) submatrix that intersects the blocker only once. 
We also know there are \(t\) positions from the intersection of the last column and the last \(t\) rows that we cannot use to construct a \(123\)-avoiding permutation matrix. Thus, there are a total of \(kt+t=(k+1)t\) positions we cannot use to construct a \(123\)-avoiding permutation matrix that intersects the blocker only once. Then by mathematical induction, we have shown that there are \(pt=(n-m)t\) positions of an \(n\times n\) matrix that we cannot use. Thus, there is a \(t\times(n-m)\) submatrix at the lower right corner of the matrix from which we cannot use any positions to construct a \(123\)-avoiding permutation matrix that intersects the blocker exactly once. **Example 4.3**.: _To illustrate Theorem 4.2, consider the flag-shaped blocker \(B_{10}(8,3)\). We are unable to construct a \(123\)-avoiding permutation matrix that intersects the blocker once, and no more, using any of the yellow positions._ _The permutation matrix consisting of all \(i\)'s is the only \(123\)-avoiding permutation matrix using the yellow \(i\). Clearly, this intersects the blocker twice. Thus, we can focus on just the yellow \(f\) or one of the yellow \(g\)'s or \(h\)'s to illustrate the issue that arises when attempting to use any of these positions to form a \(123\)-avoiding permutation matrix. Without loss of generality, we will consider the \(h\) in the last column._ _We can delete column \(10\) and row \(9\) since we do not have to use another position from either in our construction of a \(123\)-avoiding permutation matrix. We also know that we will use one position from row \(8\) and one from row \(10\) for the permutation matrix, so we can focus on the \(7\times 9\) submatrix formed by deleting the last \(t=3\) rows and the last column._ \[\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}\left[\begin{array}{c|c **Theorem 4.4**.: _A flag-shaped minimum blocker \(B_{n}(m,t)\) of all \(n\times n\)\(123\)-avoiding permutation matrices determines a face of \(\Omega_{n}(\overline{123})\) with dimensions at most \((n-1)^{2}+1-t(n-m)\)._ Proof.: Suppose there are \(a\) linearly independent permutation matrices that intersect the blocker exactly once and that do not use any positions from the \(t\times(n-m)\) submatrix at the lower right corner below and to the right of the flag-shaped blocker. It is possible to find \(t(n-m)\) linearly independent permutation matrices (which do not intersect the blocker exactly once according to Theorem 4.2), each of which uses a unique position of the \(t\times(n-1)\) submatrix. These \(t(n-m)\) permutation matrices are linearly independent from one another and from the \(a\) linearly independent permutation matrices found earlier. Thus, we have a total of \(a+t(n-m)\) linearly independent permutation matrices, and this total cannot exceed the total number of linearly independent permutation matrices of an \(n\times n\) matrix, which is \((n-1)^{2}+1\). Therefore, the number of linearly independent permutation matrices that intersects the blocker exactly once, given by \(a\), is \[a\leq(n-1)^{2}+1-t(n-m).\] We have shown that there are at least \(t(n-m)\) positions of an \(n\times n\) matrix that we cannot use to construct a \(123\)-avoiding permutation matrix that intersects a minimum flag-shaped blocker exactly once. It is important to note that not all flag-shaped blockers will define a face of \(\Omega_{n}(\overline{123})\) with dimension \((n-1)^{2}+1-t(n-m)\). However, there exist flag-shaped blockers that do define a face with precisely this dimension. 
Before describing these blockers, we first note an important aspect of of row and column blockers. **Lemma 4.5**.: _For each minimum blocker of all \(n\times n\)\(123\)-avoiding permutation matrices that are composed of an entire row or column of an \(n\times n\) matrix, there exist \((n-1)^{2}+1\) linearly independent \(123\)-avoiding permutation matrices that intersect the blocker exactly once each._ Proof.: Every \(n\times n\) permutation matrix must intersect a row or column blocker exactly once. The polytope \(\Omega_{n}(\overline{123})\) lives in dimension \((n-1)^{2}+1\), so there must exist the same number of linearly independent \(123\)-avoiding permutation matrices. **Theorem 4.6**.: _A rectangular flag-shaped blocker \(B_{n}(m,m-1)\) of all \(n\times n\)\(123\)-avoiding permutation matrices determines a face of \(\Omega_{n}(\overline{123})\) with the maximum dimension \((n-1)^{2}+1-t(n-m)\)._ Proof.: We use induction on \(p:=n-m\), starting by showing that when \(p=1\), it is possible to find \((n-1)^{2}+1-t\cdot 1\) linearly independent \(123\)-avoiding permutation matrices that intersect the blocker exactly once. For illustrative purposes, we consider a \(10\times 10\) matrix with the flag-shaped blocker \(B_{n}(9,8)\) while describing the general proof. Additionally, notice that \(t=m-1\) for all rectangular flag-shaped blockers, and \(m=n-1\) when \(p=1\), so we are looking to construct \((n-1)^{2}+1-(n-2)\cdot 1\) linearly independent \(123\)-avoiding permutation matrices. Using the \((1,n)\) position, there are \((n-2)^{2}+1\) linearly independent \(123\)-avoiding permutation matrices that intersect the blocker exactly once. This is because the \((n-1)\times(n-1)\) submatrix obtained by deleting the first row and last column contains \([(n-1)-1]^{2}+1\) such permutation matrices according to Lemma 4.5. \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) \\ \hline \hline \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) \\ \hline \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) \\ \hline \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) \\ \hline \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ \hline \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) \\ \hline \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) \\ \hline \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) \\ \hline \end{tabular} Using the \((2,n)\) position, there are an additional \(n-1\) linearly independent \(123\)-avoiding permutation matrices that intersect the blocker exactly once, which are obtained using the \(n-1\) blocker positions in the first row. The \(123\)-avoiding permutation matrices corresponding to each position will contain a unique position of the matrix, making them linearly independent from one another. For example, consider the \(123\)-avoiding permutation matrix obtained by using the yellow \(e\) from the blocker in the example below. 
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) \\ \hline \hline \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) \\ \hline \hline \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) \\ \hline \hline \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) \\ \hline \hline \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ \hline \hline \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \hline \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) \\ \hline \hline \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) \\ \hline \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) \\ \hline \end{tabular} Thus, in total we have \[(n-2)^{2}+1+n-1=(n-1)^{2}+1-(n-2)\cdot 1\] linearly independent \(123\)-avoiding permutation matrices that intersect the blocker exactly once, as desired. We move on to the induction step. Suppose that when \(p=k\), where \(2\leq k\leq n-3\), the corresponding rectangular flag-shaped blocker achieves the maximum dimension \[(n-1)^{2}+1-(m-1)(k)=(n-1)^{2}+1-(m-1)(n-m-1),\] since \(t=m-1\) and \(k=n-m-1\) for these blockers. We will show that if \(p=k+1\), the rectangular flag-shaped blocker achieves the maximum dimension \[(n-1)^{2}+1-(m-1)(k+1)=(n-1)^{2}+1-(m-1)(n-m).\] For illustrative purposes, we consider a \(10\times 10\) matrix with the flag-shaped blocker \(B_{n}(5,4)\) while describing the general proof. Using the inductive hypothesis, it is possible to find \[[(n-1)-1]^{2}+1-(m-1)[(n-1)-m]=(n-2)^{2}+1-(m-1)(n-m-1)\] linearly independent 123-avoiding permutation matrices that intersect the blocker exactly once using the \((1,n)\) position. \[\begin{array}{|c|cccc|c|c|c|c|c|c|c|c|}\hline a&b&c&d&e&f&g&h&i&j\\ \hline b&c&d&e&f&g&h&i&j&a\\ c&d&e&f&g&h&i&j&a&b\\ d&e&f&g&h&i&j&a&b&c\\ e&f&g&h&i&j&a&b&c&d\\ f&g&h&i&j&a&b&c&d&e\\ \hline g&h&i&j&a&b&c&d&e&f\\ \hline h&i&j&a&b&c&d&e&f&g\\ \hline i&j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h&i\\ \hline\end{array}\] Using the \((2,n)\) position and the blocker positions in the first row, we can find \(m\) additional linearly independent 123-avoiding permutation matrices that intersect the blocker exactly once and that use a unique position of the matrix. For example, consider such a permutation matrix utilizing the yellow \(d\) from the first row of the below matrix. \[\begin{array}{|c|cccc|c|c|c|c|c|c|c|c|}\hline a&b&c&d&e&f&g&h&i&j\\ b&c&d&e&f&g&h&i&j&a\\ \hline c&d&e&f&g&h&i&j&a&b\\ d&e&f&g&h&i&j&a&b&c\\ e&f&g&h&i&j&a&b&c&d\\ \hline g&h&i&j&a&b&c&d&e&f\\ \hline h&i&j&a&b&c&d&e&f&g\\ \hline i&j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h\\ \hline\end{array}\] Furthermore, the \(n-m-1\) unused positions to the right of the blocker in the first row can also be used to construct linearly independent 123-avoiding permutation matrices that intersect the blocker exactly once and that use a unique position of the matrix. For example, consider such a permutation matrix utilizing the green \(g\) from the first row below, where the yellow represents the intersection of the blocker and the 123-avoiding permutation matrix. 
\[\left[\begin{array}{c c c c c c c c c c c c}a&b&c&d&e&f&g&h&i&j\\ b&c&d&e&f&g&h&i&j&a\\ c&d&e&f&g&h&i&j&a&b\\ d&e&f&g&h&i&j&a&b&c\\ e&f&g&h&i&j&a&b&c&d\\ \hline f&g&h&i&j&a&b&c&d&e\\ \hline g&h&i&j&a&b&c&d&e&f\\ \hline h&i&j&a&b&c&d&e&f&g\\ \hline i&j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h&i\\ \end{array}\right]\] Similarly, there are \(n-t-2=n-m-1\) positions in the intersection of the last column and row 3 through row \(n-m+1\) that can be used to construct linearly independent \(123\)-avoiding permutation matrices that intersect the blocker exactly once and that use a unique position of the matrix. For example, consider such a permutation matrix utilizing the green \(c\) from the last column below. \[\left[\begin{array}{c c c c c c c c c c c c}a&b&c&d&e&f&g&h&i&j\\ \hline b&c&d&e&f&g&h&i&j&a\\ \hline c&d&e&f&g&h&i&j&a&b\\ \hline d&e&f&g&h&i&j&a&b&c\\ \hline e&f&g&h&i&j&a&b&c&d\\ \hline f&g&h&i&j&a&b&c&d&e\\ \hline h&i&j&a&b&c&d&e&f&g\\ \hline i&j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h&i\\ \end{array}\right]\] Thus, in total we have found \[(n-2)^{2}+1-(m-1)(n-m-1)+m+2(n-m-1)=(n-1)^{2}+1-(m-1)(n-m)\] linearly independent \(123\)-avoiding permutation matrices that intersect the blocker exactly once, as desired. While Theorem 4.4 presents the upper bound of the dimension of a face of \(\Omega_{n}(\overline{123})\) as determined by a flag-shaped minimum blocker, it does not establish the lower bound. We propose one lower bound, though we first consider a theorem regarding \(L\)-shaped blockers, a special type of flag-shaped blocker. **Theorem 4.7** (Theorem 3.7 in [4]).: _A minimum blocker of all \(n\times n\)\(123\)-avoiding permutation matrices determines a facet of the polytope \(\Omega_{n}(\overline{123})\) whose extreme points are the \(n\times n\)\(123\)-avoiding permutation matrices that intersect the blocker exactly once._ **Corollary 4.8**.: _For \(L\)-shaped minimum blockers of all \(n\times n\)\(123\)-avoiding permutation matrices, there exist \((n-1)^{2}\) linearly independent \(n\times n\)\(123\)-avoiding permutation matrices that each contain exactly one blocker position._ We can utilize Corollary 4.8 to determine a lower bound for the dimension of a face of \(\Omega_{n}(\overline{123})\). **Theorem 4.9**.: _A flag-shaped minimum blocker \(B_{n}(m,t)\) of all \(n\times n\)\(123\)-avoiding permutation matrices determines a face of \(\Omega_{n}(\overline{123})\) with dimension at least \((n-1)^{2}+1-(t+2)(n-m)\)._ Proof.: Inducting on \(p:=n-m\), we first show that when \(p=1\), it is possible to find \((n-1)^{2}+1-(t+2)\) linearly independent \(123\)-avoiding permutation matrices that intersect the blocker exactly once. For illustrative purposes, we consider a \(10\times 10\) matrix with the flag-shaped blocker \(B_{n}(9,3)\) while describing the general proof. Using the \((1,n)\) position, there are \((n-2)^{2}\) linearly independent \(123\)-avoiding permutation matrices that intersect the blocker exactly once. This is because the \((n-1)\times(n-1)\) submatrix obtained by deleting the first row and last column contains \([(n-1)-1]^{2}\) such permutation matrices according to Lemma 4.8. 
\[\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}\hline a&b&c&d&e&f&g&h&i&j\\ \hline\hline b&c&d&e&f&g&h&i&j&a&b\\ \hline c&d&e&f&g&h&i&j&a&b&c\\ \hline e&f&g&h&i&j&a&b&c&d\\ \hline f&g&h&i&j&a&b&c&d&e\\ \hline g&h&i&j&a&b&c&d&e&\ell\\ \hline h&i&j&a&b&c&d&e&f&g\\ \hline i&j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h&i\\ \hline\end{array}\] Using \((2,n)\) in combination with the first \(m-t-1\) positions from row \(1\), in addition to \((n-t,n)\) in combination with the blocker positions in row \(1\), we have an additional \(m=n-1\) linearly independent \(123\)-avoiding permutation matrices that intersect the blocker once and that use a unique position of the matrix. Consider the following two matrices as examples, where yellow positions represent the intersection of the permutation matrix with the blocker. \[\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}\hline a&b&c&d&e&f&g&h&i&j\\ \hline b&c&d&e&f&g&h&i&j&a&b\\ \hline d&e&f&g&h&i&j&a&b&c\\ \hline e&f&g&h&i&j&a&b&c&d\\ \hline f&g&h&i&j&a&b&c&d&e\\ \hline g&h&i&j&a&b&c&d&e&f\\ \hline i&j&a&b&c&d&e&f&g\\ \hline j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h\\ \hline\end{array}\] and \[\begin{array}{|c|c|c|c|c|c|c|c|}\hline a&b&c&d&e&f&g&h&i&j\\ \hline b&c&d&e&f&g&h&i&j&a&b\\ \hline c&f&g&h&i&j&a&b&c&d\\ \hline f&g&h&i&j&a&b&c&d&e\\ \hline g&h&i&j&a&b&c&d&e&f\\ \hline h&i&j&a&b&c&d&e&f&g\\ \hline i&j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h\\ \hline\end{array}\] Additionally, we can construct \(n-t-3\) more linearly independent \(123\)-avoiding permutation matrices that intersect the blocker only once by utilizing the positions at the intersection of column \(n\) and rows \(3\) through \(n-t-1\). Consider the following example. \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) \\ \hline \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) \\ \hline \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(b\) \\ \hline \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) \\ \hline \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ \hline \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) \\ \hline \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) \\ \hline \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) \\ \hline \end{tabular} In total we have \[(n-2)^{2}+n-1+n-t-3=(n-1)^{2}+1-(t+2)\] linearly independent 123-avoiding permutation matrices that intersect the blocker exactly once, as desired. Moving onto the induction step, suppose that when \(p=k\), where \(2\leq k\leq n-3\), the corresponding flag-shaped blocker lives in dimension at least \[(n-1)^{2}+1-(t+2)(k).\] We will show that if \(p=k+1\), the flag-shaped blocker at least achieves the minimum dimension \[(n-1)^{2}+1-(t+2)(k+1).\] For illustrative purposes, we consider a \(10\times 10\) matrix with the flag-shaped blocker \(B_{n}(7,2)\) while describing the general proof. Using the inductive hypothesis, it is possible to find \[[(n-1)-1)]^{2}+1-(t+2)(k)=(n-2)^{2}+1-(t+2)(n-m)\] linearly independent 123-avoiding permutation matrices that intersect the blocker exactly once using the \((1,n)\) position. 
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) \\ \hline \hline \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) \\ \hline \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) \\ \hline \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) \\ \hline \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ \hline \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) \\ \hline \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) \\ \hline \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) \\ \hline \end{tabular} Utilizing the first \(m-t-1\) positions of row 1 in conjunction with the \((2,n)\) position, we can construct \(m-t-1\) additional linearly independent 123-avoiding permutation matrices intersecting the blocker only once and containing a unique position of the matrix. Consider the below example using \(c\) from row 1 along with \(a\) from row 2. \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) \\ \hline \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) \\ \hline \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) \\ \hline \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) \\ \hline \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ \hline \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) \\ \hline \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) \\ \hline \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) \\ \hline \end{tabular} Furthermore, we can use the row 1 blocker positions along with the \((m-t+1,n)\) position to obtain \(t+1\) additional linearly independent 123-avoiding permutation matrices that intersect the blocker once each. Consider such a permutation matrix using the \(f\) from the first row and the \(e\) from the last column below. 
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) \\ \hline \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) \\ \hline \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) \\ \hline \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) \\ \hline \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ \hline \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) \\ \hline \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) \\ \hline \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) \\ \hline \end{tabular} Using the \(m+1\) through \(n-2\) positions from row one, we can obtain \(k-2=n-m-2\) additional linearly independent 123-avoiding permutation matrices that intersect the blocker once and that use a unique position. Consider the below example. \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) \\ \hline \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) \\ \hline \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) \\ \hline \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) \\ \hline \(e\) & \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) \\ \hline \(f\) & \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ \hline \(g\) & \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) \\ \hline \(h\) & \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) \\ \hline \(i\) & \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) \\ \hline \(j\) & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) & \(f\) & \(g\) & \(h\) & \(i\) \\ \hline \end{tabular} Thus far, we have used 3 positions from column \(n\) in our constructions. There are \(n-t-3\) additional positions in this column that we may utilize, each of which will result in a permutation matrix that contains a unique position of the matrix. Consider two such examples below. \[\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}\hline a&b&c&d&e&f&g&h&i&j\\ \hline b&c&d&e&f&g&h&i&j&a\\ \hline c&d&e&f&g&h&i&j&a&b\\ \hline d&e&f&g&h&i&j&a&b&c\\ \hline e&f&g&h&i&j&a&b&c&d\\ \hline f&g&h&i&j&a&b&c&d&e\\ \hline g&h&i&j&a&b&c&d&e&f\\ \hline h&i&j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h&i\\ \hline\end{array}\quad\text{and}\quad\begin{bmatrix}a&b&c&d&e&f&g&h&i&j\\ \hline b&c&d&e&f&g&h&i&j&a\\ \hline c&d&e&f&g&h&i&j&a&b&e\\ \hline e&f&g&h&i&j&a&b&c&d\\ \hline f&g&h&i&j&a&b&c&d&e\\ \hline g&h&i&j&a&b&c&d&e&f\\ \hline h&i&j&a&b&c&d&e&f&g&h\\ \hline j&a&b&c&d&e&f&g&h&i\\ \hline\end{bmatrix}\] After adding and rewriting, in total we have constructed \((n-1)^{2}+1-(t+2)(k+1)\) linearly independent \(123\)-avoiding permutation matrices that intersect the blocker exactly once, as desired. 
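To make the count concrete, evaluate it at the illustrative parameters used above, \(n=10\), \(m=7\), \(t=2\), so that \(k=n-m-1=2\). The inductive hypothesis supplies

\[(n-2)^{2}+1-(t+2)k=64+1-8=57\]

linearly independent \(123\)-avoiding permutation matrices through the \((1,n)\) position, and the induction step contributes \((m-t-1)+(t+1)+(n-m-2)+(n-t-3)=4+3+1+5=13\) further matrices, each using a unique position of the matrix. The total, \(57+13=70=(n-1)^{2}+1-(t+2)(n-m)\), agrees with the lower bound of Theorem 4.9.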
Our results on the upper and lower bounds for the dimension of a face of \(\Omega_{n}(\overline{123})\) determined by a flag-shaped minimum blocker provide insight into the geometric properties of the polytope, though work remains to determine the dimensions of the faces of \(\Omega_{n}(\overline{123})\) more precisely.
2309.11667
Understanding Pose and Appearance Disentanglement in 3D Human Pose Estimation
As 3D human pose estimation can now be achieved with very high accuracy in the supervised learning scenario, tackling the case where 3D pose annotations are not available has received increasing attention. In particular, several methods have proposed to learn image representations in a self-supervised fashion so as to disentangle the appearance information from the pose one. The methods then only need a small amount of supervised data to train a pose regressor using the pose-related latent vector as input, as it should be free of appearance information. In this paper, we carry out an in-depth analysis to understand to what degree the state-of-the-art disentangled representation learning methods truly separate the appearance information from the pose one. First, we study disentanglement from the perspective of the self-supervised network, via diverse image synthesis experiments. Second, we investigate disentanglement with respect to the 3D pose regressor following an adversarial attack perspective. Specifically, we design an adversarial strategy focusing on generating natural appearance changes of the subject, and against which we could expect a disentangled network to be robust. Altogether, our analyses show that disentanglement in the three state-of-the-art disentangled representation learning frameworks is far from complete, and that their pose codes contain significant appearance information. We believe that our approach provides a valuable testbed to evaluate the degree of disentanglement of pose from appearance in self-supervised 3D human pose estimation.
Krishna Kanth Nakka, Mathieu Salzmann
2023-09-20T22:22:21Z
http://arxiv.org/abs/2309.11667v1
# Understanding Pose and Appearance Disentanglement in 3D Human Pose Estimation ###### Abstract As 3D human pose estimation can now be achieved with very high accuracy in the supervised learning scenario, tackling the case where 3D pose annotations are not available has received increasing attention. In particular, several methods have proposed to learn image representations in a self-supervised fashion so as to disentangle the appearance information from the pose one. The methods then only need a small amount of supervised data to train a pose regressor using the pose-related latent vector as input, as it should be free of appearance information. In this paper, we carry out in-depth analysis to understand to what degree the state-of-the-art disentangled representation learning methods truly separate the appearance information from the pose one. First, we study disentanglement from the perspective of the self-supervised network, via diverse image synthesis experiments. Second, we investigate disentanglement with respect to the 3D pose regressor following an adversarial attack perspective. Specifically, we design an adversarial strategy focusing on generating natural appearance changes of the subject, and against which we could expect a disentangled network to be robust. Altogether, our analyses show that disentanglement in the three state-of-the-art disentangled representation learning frameworks if far from complete, and that their pose codes contain significant appearance information. We believe that our approach provides a valuable testbed to evaluate the degree of disentanglement of pose from appearance in self-supervised 3D human pose estimation. ## 1 Introduction Monocular 3D human pose estimation has been at the heart of computer vision research for decades, and tremendous results can now be achieved in the supervised learning setting [22, 14, 15, 27, 38, 29, 23, 37, 28, 33, 21]. Unfortunately, obtaining 3D pose annotations for real images remains very expensive, particularly in the wild. As such, self-supervised learning approaches have received an increasing attention in the past few years [32, 31, 12, 5]. One of the common factors across all these methods is their aim to learn a latent representation of the image that disentangles the person's pose from their appearance. In practice, as shown in Figure 1, this has been achieved by leveraging access to either multiple views [31, 32] or video sequences [5, 12] during training. In either case, one then only needs access to a small amount of supervised data to effectively train a pose regressor from the pose-related portion of the latent code to the actual 3D pose, because this portion of the latent code should in theory contain only pose-relevant information. Despite the impressive progress of these self-supervised 3D human pose estimation methods, several fundamental questions about their learnt representations remain unanswered. For example, to what extent are the pose and appearance latent vectors disentangled? Do these two representations contain truly complementary information, or do they share some signal? How do the different sources of self-supervision, i.e., multiple views or temporal information, affect the disentanglement of these representations? In this paper, we seek to provide a deeper understanding of such disentangled representations by analyzing the resulting latent spaces in two ways. 
First, we study the disentanglement of the latent pose and appearance vectors with respect to the self-supervised representation learning network. In this context, we analyze both the images synthesized by altering the appearance codes in different ways, and the influence on pose and appearance of different channels in the latent pose codes. Second, we investigate the disentanglement with respect to the supervised 3D pose regressor. To this end, we follow an adversarial attack strategy, aiming to modify the subject's appearance so as to affect the regressed 3D pose. However, instead of exploiting a standard adversarial attack technique [20, 18, 10], against which disentangled pose networks were never meant to be robust, we design a dedicated framework that should be much more favorable to such networks. Specifically, we seek to alter only the latent appearance vector so as to affect the 3D pose regressed from the latent pose vector extracted from the image synthesized using the modified appearance vector with the original pose one. Our experiments on the state-of-the-art disentangled representation learning frameworks, NSD [31], CSSL [12] and DRNet [5], evidence that, across the board, _disentanglement is not complete and the pose codes of these frameworks contain appearance information_. Our work provides the tool to study the effectiveness of different disentanglement-based training strategies and will serve as a valuable testbed to analyze the extent of disentanglement in future frameworks. **Contributions.** To summarize, our contributions are twofold. (1) We systematically analyze the latent pose and appearance representations in several representative disentangled networks. Our experiments lead to an interesting find Figure 1: **Disentanglement-based Representation Learning.** Given a reference frame and another frame from either a different view or a different time instant, an encoder learns a representation separated into two components, appearance and pose, in a self-supervised fashion. A pose regressor is then trained using limited annotated data to map the latent pose vector to a 3D human pose. ing that the latent pose vectors contain almost all of the subject's appearance information. (2) We introduce an adversarial strategy to understand the disentanglement of 3D pose from natural appearance changes. Our code and trained networks will be made publicly available upon acceptance. ## 2 Related Work #### 2.0.1 Disentanglement-based 3D Human Pose Estimation. Disentangling pose and appearance in 3D pose estimation was first proposed in DRNet [5], where a discriminator was employed to distinguish if the time-varying features from two images represented the same subject or not. Furthermore, the distance between the time-invariant, i.e., appearance, component of one subject at two different time instants was minimized, and the time-varying pose features were encouraged to be indistinguishable across subjects, thereby ensuring that appearance information did not leaked into the pose features. In [31, 32], disentanglement was achieved via the use of multiple views during training, leveraging the intuition that, for one subject, the pose features extracted from one view and rotated to a different view at the same time instant should be the same as those directly extracted from that view, and the appearance features at different time instants should be similar so as not to contain pose information. 
More recently, [12] designed a contrastive loss to force the latent appearance features in temporally-distant frames to remain close while encouraging the pose features to be different from each other. All these methods learn the disentangled representation from unsupervised data, and then train a shallow regressor to predict 3D pose from the pose latent vector using a limited amount of pose labels. In this work, we study how disentangled the appearance and pose features extracted by these methods truly are. To this end, we provide analyses based on diverse image synthesis experiments and on adversarial attacks. #### 2.0.2 Adversarial Attacks. Deep neural networks were first shown to be vulnerable to adversarial examples in [36]. Following this, several attacks have been proposed, using either gradient-based approaches [10, 18] or optimization-based techniques [3, 26, 25, 4, 7]. To study the disentanglement of pose and appearance in 3D human pose estimation, we seek to analyze if appearance changes can affect the regressed 3D pose. In principle, we could use any of the above-mentioned attack strategy to do this. However, they offer no control on the generated perturbations, and thus could potentially incorporate structures that truly suggest a different pose. In other words, the disentangled networks cannot be expected to be robust to such attacks. Therefore, we design an attack strategy to which they can be expected to be robust. Specifically, we synthesize an image by modifying only the appearance code of the network of interest, and show that the 3D pose regressed from that image will typically differ from the original one. Our attacks can be thought of as inconspicuous ones, as the generated image looks natural, with only appearance changes to the subject. Other works [41, 16, 30, 35, 2, 34] have designed strategies to generate realistic adversarial images, typically focusing on face recognition datasets and using GANs [9, 24, 1]. Our approach nonetheless fundamentally differs from those in both methodology and context; our main goal is not to attack disentangled 3D human pose networks but to study their level of disentanglement. Therefore, we design an attack strategy that is most favorable for these networks, and against which they can be expected to be naturally robust. **Measuring Disentanglement.** In other contexts than human pose estimation, several works have proposed metrics to quantify the degree of disentanglement of latent vectors [8, 6, 19]. These methods are of course also applicable to the self-supervised learning frameworks that we will analyze, and we will report these metrics in our experiments. However, these metrics do not provide any understanding of where disentanglement fails. This is what we achieve with our diverse analyses. ## 3 Disentangled Human Pose Estimation Networks Given an image as input, 3D human pose estimation aims to predict the 3D positions of \(J\) body joints, such as the wrists, elbows, and shoulders. When no annotations are available for the training images, an increasingly popular approach consists of learning a latent representation that disentangles appearance from pose in a self-supervised fashion. Here, we review disentanglement-based 3D human pose estimation frameworks that we will analyze in Sections 4 and 5. Existing disentanglement-based frameworks essentially all follow the same initial steps. 
The input image \(\mathbf{I}\) is first passed through a spatial transformer network \(\mathcal{S}\) to extract the bounding box corresponding to the human subject. An encoder \(E\) then takes the cropped bounding box \(\mathbf{I}_{c}\) as input and outputs a latent vector \(\mathbf{h}\) comprising two components, that is \(E:\mathbf{I}_{c}\rightarrow[\mathbf{h}_{a},\mathbf{h}_{p}]\). The first component, \(\mathbf{h}_{a}\), aims to encode the subject's appearance while the second, \(\mathbf{h}_{p}\), should represent the subject's pose. The networks are trained without any 3D pose annotations, and thus supervision is achieved via image reconstruction. Specifically, a decoder \(D\) takes the complete the latent vector \(\mathbf{h}\) as input and and outputs a reconstructed version of the cropped image \(\tilde{\mathbf{I}}_{c}\), with an additional mask \(\mathbf{M}\) corresponding to the subject's silhouette. The cropped image is further merged with a pre-computed background image \(\mathbf{B}\) to obtain the final reconstructed input image \(\tilde{\mathbf{I}}\). The main difference between existing frameworks lies in the way they encourage the disentanglement of pose and appearance. Specifically, the different frameworks train the encoder \(E\) and decoder \(D\) as follows: **NSD [31].** The neural scene decomposition (NSD) approach leverages the availability of multiple views during training. Given a pair of images from two views at time \(t\), NSD passes one image to the encoder to obtain an appearance vector \(\mathbf{h}_{a}^{t}\) and a pose vector \(\mathbf{h}_{p}^{t}\). The pose vector \(\mathbf{h}_{p}^{t}\), shaped as a 3D point cloud, is rotated to the second view using the ground-truth camera calibration between the two views to obtain a transformed pose vector \(\mathbf{h}_{p,r}^{t}\). Furthermore, to factor out appearance from pose, NSD replaces the appearance vector \(\mathbf{h}_{a}^{t}\) by an appearance vector \(\mathbf{h}_{a}^{t_{i}}\) of the same subject at a different time instant \(t_{1}\). The decoder \(D\) then takes as input \(\mathbf{h}=[\mathbf{h}_{p,r},\mathbf{h}_{a}^{t_{1}}]\) and aims to reconstruct the image from the second view at time \(t\). **CSSL [12].** Instead of using multiple views, contrastive self-supervised learning (CSSL) exploits temporal information from videos to learn a latent representation of pose and appearance. To achieve disentanglement, CSSL encourages the distance between the latent pose vectors \(\mathbf{h}_{p}^{t_{1}}\) and \(\mathbf{h}_{p}^{t_{2}}\) of two frames, \(t_{1}\) and \(t_{2}\), to reflect their temporal distance. Furthermore, similarly to NSD, CSSL swaps the appearance vectors \(\mathbf{h}_{a}^{t_{1}}\) and \(\mathbf{h}_{a}^{t_{2}}\) of the two video frames when performing image reconstruction so as to force them to learn time-invariant information, thus encoding appearance. **DRNet [5].** The disentangled representation network (DRNet) uses a similar strategy to that of CSSL, consisting of randomly choosing two temporal frames, \(t_{1}\) and \(t_{2}\), from a video. However, DRNet aims to achieve disentanglement in two ways: (1) By minimizing the distance between the two appearance vectors \(\mathbf{h}_{a}^{t_{1}}\) and \(\mathbf{h}_{a}^{t_{2}}\); and (2) by exploiting an adversarial network to make the pose vector \(\mathbf{h}_{p}\) independent of the subject's appearance. 
Specifically, this is achieved by training the additional discriminator to output the subject's identity given the pose vector as input, and training the encoder \(E\) in an adversarial fashion to fool the discriminator. Once trained on a large corpus of unannotated images in a self-supervised manner, the frameworks discussed above employ a 2 layer pose regressor \(\phi:\mathbf{h}_{p}\rightarrow\mathbf{q}\) to predict the 3D pose \(\mathbf{q}\) from the latent pose vector \(\mathbf{h}_{p}\). This pose regressor is trained with a small amount of supervised data, while freezing the weights of the encoder. Due to space limitations, we provide additional details about training in the supplementary material. ## 4 Disentanglement w.r.t. the Self-Supervised Network In this section, we study the disentanglement of pose and appearance within the self-supervised representation learning network itself. To this end, we first analyze the impact of the latent appearance vector on the images synthesized by the network's decoder. We then turn to investigating the influence on pose and appearance of different channels in the latent pose vector. ### Effect of the Appearance Vector on Synthesized Images Our first analysis consists of visualizing the images generated by the network's decoder. In particular, we leverage the intuition that, if the pose and appearance vectors were disentangled, altering the appearance vector while keeping the pose one fixed should yield images with a different subject's appearance but the same pose. We investigate this via the two strategies discussed below. First, we synthesize novel images by mixing the appearance and pose information from two subjects, S8 and S7. The top two rows of Figure 2 show the images synthesized with DRNet1_without mixing the appearance vectors; these images look similar to the original ones, depicting two clearly different subject's appearances_. By contrast, the images in the third to fifth row of the figure, obtained by using S7's pose vector and S8's appearance one, still contain appearance information of S7. This is particularly the case for the images synthesized using DRNet and NSD, in which the subject's shirt has taken the red color of that of S7, although we use only S7's pose code in the synthesis process. CSSL is less subject to such failures, but they nonetheless occur in some cases, such as in the third and fourth columns. As a second experiment, we replace the appearance vector with a zero vector. We then combine this zero appearance vector with the pose vector obtained from the original image shown in the first row of Figure 3. As can be seen from the second row, even though we use the same zero appearance vector to generate images of different subjects, the synthesized images retain almost all the appearance information of the original images, except near the head region. Both of these experiments evidence that the pose code contains a significant amount of appearance information and that the disentanglement is thus not complete. Nevertheless, both experiments also show that modifying the appear Figure 2: **Synthesizing novel images.** We take the pose information from S7 (first row) and the appearance information from S8 (second row) and synthesize novel images in the third, fourth and fifth rows using DRNet, CSSL and NSD. The synthesized images retain some appearance information (red shirt) of S7 although we only use S7’s pose code in the synthesis. ance code indeed does not impact the subject's pose in the synthesized image. 
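Both experiments follow the same encode, swap, and decode recipe, which the snippet below sketches. It is a minimal illustration assuming the interface of Section 3 (an encoder that returns an appearance code and a pose code for a cropped subject image, and a decoder that returns a cropped image and a foreground mask); the function names, the concatenation order of the latent codes, and the mask-based compositing are our assumptions, not the released implementations.

```python
import torch

@torch.no_grad()
def mix_appearance_and_pose(encoder, decoder, crop_s7, crop_s8, background):
    """Synthesize an image from S7's pose code and S8's appearance code."""
    _, h_p_s7 = encoder(crop_s7)        # pose code of subject S7
    h_a_s8, _ = encoder(crop_s8)        # appearance code of subject S8

    # Keep S7's pose code, swap in S8's appearance code.
    # For the zero-appearance experiment, use torch.zeros_like(h_a_s8) instead.
    mixed_latent = torch.cat([h_a_s8, h_p_s7], dim=-1)
    crop, mask = decoder(mixed_latent)

    # One simple way to composite the synthesized crop onto the background.
    return mask * crop + (1.0 - mask) * background
```

If pose and appearance were fully disentangled, the output would show S8's appearance (or a constant appearance in the zero-vector case) in S7's pose; the residual appearance of S7 visible in Figures 2 and 3 is what reveals the leakage.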
To further verify whether the appearance codes are truly free of pose information, we visualize the appearance codes of all images of a S7 using t-SNE in Figure 4. The resulting plot shows nicely-separated clusters, which can be observed to correspond to action categories. This suggests that, although modifying the appearance code does not visually change the subject's pose in the synthesized images, the appearance codes still contain information about the subject activity, and thus about their pose. ### Effect of the Pose Vector on Synthesized Images In this section, we study the impact of the pose vector on the synthesized images and further provide evidence of the presence of appearance information in the pose code. To this end, we identify channels encoding appearance information in the pose code. Our approach is based on the idea that two images depict Figure 4: **tSNE visualization of appearance codes.** The appearance codes of images from same subject S7 are clustered according to the action performed by the subject. This indicates that the appearance code still contains information about the pose. Best viewed in color and zoomed in. Figure 3: **Replacing the appearance code with a fixed zero vector.** In the first row, we show the original synthesized images for four subjects. In the second row, we set the values in the appearance vector to zero and use the same pose vectors as in the first row. Despite using a similar zero appearance vector, the outputs do not appear similar in content and instead retain almost all the appearance information of the original images. ing different subjects in similar poses should ideally have similar latent pose Figure 5: **Detecting appearance channels in the pose latent vector.** We take images depicting different subjects in a similar pose, for which we could expect the pose codes to be close. However, as shown on the right, the latent pose vectors obtained by NSD contain channels with large differences, likely to encode appearance information. Figure 6: **Influence of the pose code channels.** To synthesize the images in the middle portion of the figure, we take the appearance code corresponding to image A, and vary the pose code in two ways. Specifically, in the top (or bottom) portion of the figure, we replace the \(K\) channels with lowest (or highest) appearance probability with the corresponding ones from the pose code extracted from image B. (a) For NSD, replacing the \(K=500\) lowest appearance probability channels yields an image (highlighted with a red box) depicting B’s pose and A’s appearance. Similarly, replacing the \(K=200\) highest appearance probability channels produces B’s appearance and A’s pose. (b) We observe similar trends for DRNet, although the separation of appearance and pose inside the pose code is not as clear as for NSD. codes. The channels that have large differences therefore indicate the presence of appearance information. To illustrate this, we use the two images shown in Figure 5(a) and plot the absolute difference between the corresponding pose codes obtained by NSD2 in Figure 5(b), ordering the channels by the magnitude of the difference. The latent pose indeed disagree in many channels. We define the probability of a channel to encode appearance information to be proportional to the absolute pose vector difference for that channel. Below, we then analyze the effect of altering the \(K\) channels with highest or lowest appearance probability. 
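A minimal sketch of this ranking-and-swapping procedure is shown below. It assumes two one-dimensional pose codes extracted from images of different subjects in a similar pose; the helper names and the normalization into a probability are our choices.

```python
import torch

def appearance_channel_probability(pose_code_a, pose_code_b):
    """Score each channel by how likely it is to carry appearance information:
    channels that differ strongly between two images showing a similar pose
    (but different subjects) are the suspect ones."""
    diff = (pose_code_a - pose_code_b).abs()
    return diff / diff.sum()            # proportional to the absolute difference

def swap_channels(pose_code_a, pose_code_b, prob, k, highest=True):
    """Replace the K channels of A's pose code with the highest (or lowest)
    appearance probability by the corresponding values from B's pose code."""
    idx = torch.topk(prob, k, largest=highest).indices
    mixed = pose_code_a.clone()
    mixed[idx] = pose_code_b[idx]
    return mixed
```

Decoding the mixed pose code together with A's appearance code then shows whether the swapped channels carried pose or appearance, as illustrated in Figure 6.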
Footnote 2: Similar plots for the other networks are provided in the supplementary material. To this end, we take two images A and B, as shown in the left and right ends of Figure 6, fix the appearance code as that of A. We then replace the channels with either the \(K\) lowest or highest appearance probability in the pose code of A with the corresponding values from the pose code of B. Note that all disentangled networks have a pose code of dimension 600, and therefore \(K=600\) means replacing all the channels of the pose vector. As shown in Figure 6(a) for NSD, by replacing the \(K=500\) lowest appearance probability channels yields an image (highlighted with a red box) with A's appearance and B's pose. Furthermore, replacing the \(K=200\) highest appearance probability channels synthesizes an image with B's appearance and A's pose. Both these results indicate that the top 100-200 highest probability appearance channels in the pose code indeed encode the appearance information for NSD. It is worth noting that with \(K=600\) the image depicts both the pose and appearance of B, confirming our previous experiments in Figure 2. Figure 6(b) for DRNet shows the channels are not as clearly separated in pose and appearance ones in this method. Nevertheless, the pose codes still combines pose and appearance information. We present similar analysis and visualizations for CSSL in the supplementary material. ## 5 Disentanglement w.r.t. the 3D Pose Regressor The previous set of analyses have focused on the self-supervised representation learning networks themselves, evidencing that the latent pose vector is contaminated with appearance information. Here, we further investigate the disentanglement w.r.t. the supervised 3D human pose regressor, which takes the latent pose vector as input. Note that, since the 3D pose regressor is disassociated from the appearance vector at network level, studying the appearance and pose vector disentanglement in this context is not straightforward. Therefore, we consider the pose estimation network comprised of the self-supervised encoder and the supervised decoder as a standalone network and study the effects of the input image appearance on its 3D pose output. To this end, we introduce an adversarial perturbation strategy that explicitly focuses on modifying only the appearance information in the input image. Below, we first describe our attack framework, and then analyze its effects on the disentangled pose estimation networks. ### Appearance-only Attack Framework Our goal is to perturb only the subject's appearance in the input image; perturbing the image such that the subject's pose visually changes would of course make the pose regressor output a different pose but would not allow us to verify the disentanglement of pose and appearance. To enforce such a constraint on our perturbations, we follow a strategy that, intuitively, should constitute a weak attack and thus be favorable to the disentangled network. Specifically, we only perturb the latent appearance vector, which we combine with the _original_ pose one to generate an adversarial image. We then extract a new latent pose vector from this image and predict the 3D human pose from it. If the pose regressor could discard the appearance information, it would thus not be affected by this perturbation. As shown in Algorithm 1, we generate an adversarial image \(\mathbf{I}_{adv}\) as using a generator network \(G\). 
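The sketch below is a simplified rendering of Algorithm 1. It assumes differentiable modules following the interface of Section 3 (a generator split into a spatial transformer \(G_{s}\), an encoder \(G_{e}\) and a decoder \(G_{d}\), together with the target encoder \(E\) and pose regressor \(\phi\)); the optimizer, step count, learning rate and threshold are illustrative choices rather than those of the actual implementation, and the full algorithm additionally freezes the foreground mask to its initial value.

```python
import torch

def compose(G_d, h_a, h_p, background):
    """Decode the latent codes and paste the resulting crop onto the background."""
    crop, mask = G_d(torch.cat([h_a, h_p], dim=-1))
    return mask * crop + (1.0 - mask) * background

def appearance_only_attack(G_s, G_e, G_d, E, phi, image, background,
                           steps=100, lr=0.01, mpjpe_threshold=150.0):
    """Optimize only the latent appearance code and check whether the 3D pose
    regressed from the re-synthesized image drifts away from the reference."""
    with torch.no_grad():
        h_a0, h_p0 = G_e(G_s(image))                   # initial latent codes
        _, pose_code = E(compose(G_d, h_a0, h_p0, background))
        q_ref = phi(pose_code)                         # reference 3D pose

    h_a = h_a0.clone().requires_grad_(True)            # only appearance is attacked
    opt = torch.optim.Adam([h_a], lr=lr)
    adv = None
    for _ in range(steps):
        adv = compose(G_d, h_a, h_p0, background)      # pose code kept fixed
        _, pose_code = E(adv)
        q_adv = phi(pose_code)
        # MPJPE between attacked and reference poses (joints as rows of 3D points).
        mpjpe = (q_adv - q_ref).reshape(-1, 3).norm(dim=-1).mean()
        if mpjpe.item() > mpjpe_threshold:
            break                                      # prediction already far off
        opt.zero_grad()
        (-mpjpe).backward()                            # gradient ascent on the error
        opt.step()
    return adv.detach()
```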
In practice, we take \(G\) to be either the disentangled network of interest or another disentangled network, and we will report results with both strategies. First, we pass the original input image to the generator's spatial transformer \(G_{s}\) and extract the cropped image \(\mathbf{I}_{c}\) using the resulting bounding box. We then encode the cropped image \(\mathbf{I}_{c}\) into an initial latent pose vector \(\tilde{\mathbf{h}}_{p}^{0}\) and latent appearance vector \(\tilde{\mathbf{h}}_{a}^{0}\) using the generator's encoder \(G_{e}\). The combined latent vector \(\tilde{\mathbf{h}}=[\tilde{\mathbf{h}}_{a}^{0},\tilde{\mathbf{h}}_{p}^{0}]\) is then passed as input to the generator's decoder \(G_{d}\), which outputs the reconstructed image \(\tilde{\mathbf{I}}_{c}^{0}\) and a mask \(\mathbf{M}^{0}\). The cropped output \(\tilde{\mathbf{I}}_{c}^{0}\) is then combined with the pre-computed background image \(\mathbf{B}\) to resynthesize an image \(\mathbf{I}_{adv}^{0}\) at full resolution. This image then acts as input to the target pose estimation network, which encompasses an encoder \(E\), that may differ from the generator one \(G_{e}\), and a pose regressor. This forward pass produces an initial pose estimate \(\phi(\mathbf{h}_{p}^{0})\). Note that the output of the target network given \(\mathbf{I}_{adv}^{0}\) as input has empirically a small mean per-joint position error (MPJPE) of around 20 mm with respect to the prediction \(\mathbf{q}\) obtained from the original image \(\mathbf{I}\). This is because, at this point, no attack has been performed. To attack only the subject's appearance in the adversarial input, we fix the pose vector \(\tilde{\mathbf{h}}_{p}=\tilde{\mathbf{h}}_{p}^{0}\) to generate images of depicting the subject in their original pose. Furthermore, we also fix the mask to its initial value \(\mathbf{M}=\mathbf{M}^{0}\). We then compute an appearance-only perturbation by optimizing the latent appearance vector \(\tilde{\mathbf{h}}_{a}\) in an iterative manner until it either achieves an MPJPE error with respect to the original prediction \(\mathbf{q}^{0}\) higher than a threshold, or reaches a maximum number of iterations. Note that our previous set of experiments in Section 4 have evidenced that modifying the appearance vector does not change the observed subject's pose, which validates our use of the network's decoder to generate the appearance-modified image. ### Appearance-only Attack Results **Qualitative Results.** In Figure 7, we visualize the results of different models on the attacked images. For all disentangled representation frameworks, small changes in appearance produce wrong predictions. In particular, as shown in the third row, a small change in the shirt color leads to a completely different pose for all models. This demonstrates that the pose estimation network is dependent on the subject's appearance in the input image that its intermediate latent pose vector is not completely disentangled from appearance. #### 4.2.2 Quantitative Study. We provide the results of our appearance-only attacks in Table 1 using the network decoder as the generator. We report the MPJPE at the initial iteration and after the attack for each subject. Specifically, the initial Figure 7: **Appearance-only Attack Examples.** Given an input image **(a)** with ground-truth pose **(d)**, we first reconstruct **(b)** the images using a generator. 
By optimizing the latent appearance vector, we obtain an adversarial image **(c)** that aims to fool the pose regressor so that it outputs a 3D pose **(f)** that differs significantly from the original predictions **(e)**. error corresponds to the error between the predictions obtained from the original image \(\mathbf{I}\) and from the synthesized image \(\mathbf{I}_{adv}^{0}\), without any latent attack. It is around 21.8 mm on average. This shows that the generator faithfully reconstructs the input image and can therefore be employed to perform the attack. After the attack, the performance decrease across all the disentangled models In other words, all models are vulnerable to our appearance-based attacks and typically reach an MPJPE of at least 175 mm. This indicates that the latent pose vector \(\mathbf{h}_{p}\) is not invariant to appearance changes and therefore that the appearance-pose disentanglement is not complete. We provide ablative study using the same NSD decoder as the generator for all disentangled networks in the supplementary material. To further evaluate quantitatively the sensitivity of a disentangled network to our appearance-only attacks, we computed three image-based metrics, Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean Square Error (MSE), to compare the attacked images with those synthesized with the original framework. As shown in Table 2, the three metrics indicate that the images obtained by attacking DRNet are more similar to the original synthesized ones than those obtained by attacking NSD or CSSL. This suggests that DRNet can be attacked with smaller changes, and thus contains more appearance information in its pose vectors. Altogether, our experiments evidence that disentangling pose and appearance in an unsupervised manner for 3D human pose estimation remains far from being solved. Our attacks thus provide a valuable testbed to valuate the effectiveness of future disentanglement-based frameworks. ## 6 Discussion **Evaluating Disentanglement.** Several methods [8, 6, 19] have been proposed for assessing the degree of disentanglement of latent variables. In particular, we \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{**Subject**} & \multicolumn{2}{c}{**NSD**} & \multicolumn{2}{c}{**DRNet**} & \multicolumn{2}{c}{**CSSL**} & \multicolumn{2}{c}{**Average**} \\ \cline{2-7} & Initial & Final & Initial & Final & Initial & Final & Initial & Final \\ \hline **S1** & 21.0 & 179.7 & 23.9 & 169.7 & 21.5 & 176.9 & 21.6 & 174.2 \\ **S5** & 19.6 & 180.0 & 14.1 & 166.7 & 25.3 & 186.5 & 19.6 & 177.1 \\ **S6** & 22.3 & 179.8 & 23.5 & 177.9 & 26.8 & 196.7 & 23.4 & 184.7 \\ **S7** & 18.8 & 179.2 & 17.6 & 177.5 & 24.1 & 191.8 & 20.3 & 182.3 \\ **S8** & 16.8 & 178.6 & 21.7 & 198.9 & 30.5 & 186.9 & 23.0 & 187.8 \\ \hline \hline **Average** & **19.7** & **179.5** & **20.2** & **177.5** & **25.6** & **207.5** & **21.8** & **176.8** \\ \hline \hline \end{tabular} \end{table} Table 1: MPJPE before and after our appearance-based attacks. We report the results of three networks and observe that disentangled networks are vulnerable to our attacks. \begin{table} \begin{tabular}{c c c c} \hline \hline Metric & NSD & DRNet & CSSL \\ \hline SSIM\(\uparrow\) & 0.947 & 0.963 & 0.943 \\ PSNR\(\uparrow\) & 24.65 & 26.45 & 24.37 \\ MSE\(\downarrow\) & 0.012 & 0.007 & 0.013 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative comparison of adversarial images with the original synthesized images. 
These number show that the images obtained by attacking DRNet are closer to the original synthesized ones, and thus that the DRNet pose vectors tend to contain more appearance information. report the two complementary state-of-the-art metrics of [19], Distance Correlation (DC) and Information over Bias (IOB) to evaluate disentanglement. DC is bounded in [0,1] and measures the correlation between the two latent spaces; IoB measures the amount of information from the input image that is encoded in a given latent space. In Table 3, we provide these metrics, averaged over 400 images, for the pose (P) and appearance (A) latent spaces and for different disentanglement strategies. DC(A, P) contain large values indicating that the appearance and pose are correlated. Furthermore, the IOB(I, P) values are larger than the IOB(I, A), which suggests that the pose code encodes more input information than the appearance code. Note that DC(A, P) cannot be used as a standalone metric to interpret disentanglement because low values of DC can also indicate noise in one latent space. While DRNet achieves the best DC(A, P) score, its value of 0.90 IOB(I, A) suggests that the appearance code encodes minimal information. Although these metrics quantify disentanglement, they offer little understanding of the disentanglement issues, and IOB is difficult to interpret because it is unbounded and requires training an external decoder network whose optimal architecture is unknown. By contrast, our analyses enable a finer-grain understanding of the pose and appearance latent spaces of representation learning strategies for human pose estimation, and provide visual results that are easier to interpret. **Does data-augmentation help to learn appearance-invariant features?** Recently, powerful data augmentation (DA) strategies, such as AugMix [11], CutMix [39] and others [13, 40], have been proposed to improve the generalization power and robustness of neural networks. Furthermore, classical adversarial training [20, 17] can be viewed as a form of data augmentation with adversarial images. Here, we therefore study if data augmentation constitutes a promising direction towards more effectively disentangling self-supervised 3D human pose estimation networks. Since the network architectures we consider are much more complicated than the image recognition ones used in the above-mentioned DA works, we employ a simpler DA strategy consisting of augmenting the output of the spatial transformer with RGB jitter. We then re-run the analyses we presented before, focusing here on CSSL. Specifically, in Figure 8, we show the images synthesized when mixing S7's pose vector with S8's appearance. Note that, with DA, the images better retain the appearance of S8. Furthermore, in Figure 10, we show images obtained by making use of a zero appearance vector. With DA, all the synthesized images depict a similar subject appearance. Altogether, this suggests that DA helps the disentanglement process in CSSL, which is further confirmed by the DC(A, P) value that improves from 0.77 to 0.62. This value of 0.62 nonetheless still indicates a relatively high correlation between the latent spaces. To further analyze this, we computed a similar t-SNE plot as that of Figure 9, and observed that the actions are still clustered, evidencing that the appearance code still contains some pose information. 
Similarly, we also ran our appearance-only attacks on the CSSL model trained with DA, and observed the attacks to remain successful, suggesting that the pose vector remains contaminated by appearance information. To evaluate quantitatively whether DA nonetheless improved this, we report the PSNR, SSIM, and MSE metrics between the attacked images and the original synthesized ones in Table 4. The values indicate that the images obtained by attacking the network Figure 8: **Synthesizing novel images with CSSL (DA).** As in Figure 2, we take S7’s pose vector and S8’s appearance one and synthesize novel images with CSSL, either without (top) or with (bottom) DA during training. The image synthesized with CSSL (DA) retain S8’s appearance without residual red shirt color from S7. Figure 10: **Zero appearance vectors with CSSL (DA).** In first row, we show the original image synthesized with CSSL. While, without DA (middle), the synthesized images obtained with a zero appearance vector retain the original subject’s appearance, with DA (bottom), all the subjects have a similar the appearance. This suggests that DA helps to remove appearance information from the pose vectors. Figure 9: **tSNE visualization of CSSL (DA) appearance codes.** The appearance codes of images from same subject are stilll clustered according to the action performed by the subject. without DA are more similar to the original synthesized ones. In other words, CSSL (DA) requires larger changes in the input image to attack the 3D pose regressor. Altogether, these results indicate that DA constitutes a promising direction to improve disentanglement, and we leave the development of more effective DA strategies as future work. ## 7 Conclusion In this work, we have analyzed the latent vectors extracted by self-supervised disentangled networks for 3D human pose estimation. Specifically, we have studied the disentanglement of pose and appearance from the perspective of both the representation learning network, and the supervised 3D human pose regressor. In the former case, our analyses via diverse image synthesis strategies have evidenced that the state-of-the-art disentanglement-based representation learning networks do not truly disentangle pose from appearance, and in particular that the latent pose codes contain significant appearance information. In the latter, we have shown that disentanglement-based networks were not robust to appearance-only adversarial attacks, despite these attacks being designed to be as favorable as possible to the disentanglement-based frameworks. We believe that our analysis methodology and our semantic attacks will be beneficial to improve disentanglement-based representation learning in the future, and thus positively impact self-supervised 3D human pose estimation.
2309.16108
Channel Vision Transformers: An Image Is Worth 1 x 16 x 16 Words
Vision Transformer (ViT) has emerged as a powerful architecture in the realm of modern computer vision. However, its application in certain imaging fields, such as microscopy and satellite imaging, presents unique challenges. In these domains, images often contain multiple channels, each carrying semantically distinct and independent information. Furthermore, the model must demonstrate robustness to sparsity in input channels, as they may not be densely available during training or testing. In this paper, we propose a modification to the ViT architecture that enhances reasoning across the input channels and introduce Hierarchical Channel Sampling (HCS) as an additional regularization technique to ensure robustness when only partial channels are presented during test time. Our proposed model, ChannelViT, constructs patch tokens independently from each input channel and utilizes a learnable channel embedding that is added to the patch tokens, similar to positional embeddings. We evaluate the performance of ChannelViT on ImageNet, JUMP-CP (microscopy cell imaging), and So2Sat (satellite imaging). Our results show that ChannelViT outperforms ViT on classification tasks and generalizes well, even when a subset of input channels is used during testing. Across our experiments, HCS proves to be a powerful regularizer, independent of the architecture employed, suggesting itself as a straightforward technique for robust ViT training. Lastly, we find that ChannelViT generalizes effectively even when there is limited access to all channels during training, highlighting its potential for multi-channel imaging under real-world conditions with sparse sensors. Our code is available at https://github.com/insitro/ChannelViT.
Yujia Bao, Srinivasan Sivanandan, Theofanis Karaletsos
2023-09-28T02:20:59Z
http://arxiv.org/abs/2309.16108v4
# Channel Vision Transformers: ###### Abstract Vision Transformer (ViT) has emerged as a powerful architecture in the realm of modern computer vision. However, its application in certain imaging fields, such as microscopy and satellite imaging, presents unique challenges. In these domains, images often contain multiple channels, each carrying semantically distinct and independent information. Furthermore, the model must demonstrate robustness to sparsity in input channels, as they may not be densely available during training or testing. In this paper, we propose a modification to the ViT architecture that enhances reasoning across the input channels and introduce Hierarchical Channel Sampling (HCS) as an additional regularization technique to ensure robustness when only partial channels are presented during test time. Our proposed model, ChannelViT, constructs patch tokens independently from each input channel and utilizes a learnable channel embedding that is added to the patch tokens, similar to positional embeddings. We evaluate the performance of ChannelViT on ImageNet, JUMP-CP (microscopy cell imaging), and So2Sat (satellite imaging). Our results show that ChannelViT outperforms ViT on classification tasks and generalizes well, even when a subset of input channels is used during testing. Across our experiments, HCS proves to be a powerful regularizer, independent of the architecture employed, suggesting itself as a straightforward technique for robust ViT training. Lastly, we find that ChannelViT generalizes effectively even when there is limited access to all channels during training, highlighting its potential for multi-channel imaging under real-world conditions with sparse sensors. Our code is available at [https://github.com/insitro/ChannelViT](https://github.com/insitro/ChannelViT). ## 1 Introduction Vision Transformers (ViT) have emerged as a crucial architecture in contemporary computer vision, significantly enhancing image analysis. However, application to specific imaging domains, such as microscopy and satellite imaging, poses unique challenges. Images in these fields often comprise multiple channels, each carrying semantically distinct and independent information. The complexity is further compounded by the fact that these input channels may not always be densely available during training or testing, necessitating a model capable of handling such sparsity. In response to these challenges, we propose a modification to the ViT architecture that bolsters reasoning across the input channels. Our proposed model, ChannelViT, constructs patch tokens independently from each input channel and incorporates a learnable channel embedding that is added to the patch tokens, akin to positional embeddings. This simple modification enables the model to reason across both locations and channels. Furthermore, by treating the channel dimension as the patch sequence dimension, ChannelViT can seamlessly handle inputs with varying sets of channels. Despite these advancements, two main challenges persist. While ChannelViT can leverage existing efficient implementations of ViT with minimal modifications, the increase in sequence length introduces additional computational requirements. Moreover, if ChannelViT is consistently trained on the same set of channels, its ability to generalize to unseen channel combinations at test time may be compromised. To address these challenges, we introduce Hierarchical Channel Sampling (HCS), a new regularization technique designed to improve robustness. 
Unlike channel dropout, which drops out each input channel independently, HCS uses a two-step sampling procedure: it first samples the number of channels and then, conditional on this, samples the specific channel combination. While channel dropout concentrates probability mass on combinations with a particular number of channels, HCS assigns a uniform weight to every possible number of channels. HCS consistently improves robustness when different channels are utilized during testing in both ViT and ChannelViT. Notably, our evaluation on ImageNet shows that using only the red channel, HCS can increase the validation accuracy from 29.39 to 68.86. We further evaluate ChannelViT on two real-world multi-channel imaging applications: microscopy cell imaging (JUMP-CP) and satellite imaging (So2Sat). In these applications, different channels often correspond to independent information sources. ChannelViT significantly outperforms its ViT counterpart on these datasets, underscoring the importance of reasoning across different channels. Moreover, by treating different channels as distinct input tokens, we demonstrate that ChannelViT can effectively generalize even when there is limited access to all channels in the dataset during training. Lastly, we show that ChannelViT enables additional insights: the learned channel embeddings correspond to meaningful interpretations, and the attention visualization highlights relevant features across spatial and spectral resolution, enhancing interpretability. This highlights the potential of ChannelViT for wide-ranging applications in the field of multi-channel imaging.

## 2 Related work

**Vision transformer and its applications to multi-channel imaging.** Vision Transformer (ViT) has demonstrated state-of-the-art performance in various computer vision tasks Dosovitskiy et al.; Touvron et al. (2021); Carion et al. (2020); Zhu et al. (2020). Recently, researchers have started adopting ViT for multi-spectral imaging. For example, in satellite imaging, Kaselimi et al. (2022) showed that a ViT-based classifier outperforms CNN models, especially on imbalanced classes. Additionally, Tarasiou et al. (2023) proposed acquisition-time-specific temporal positional encodings to model satellite images over time, while Cong et al. (2022) demonstrated the benefits of using distinct spectral positional encodings with ViT. Moreover, Scheibenreif et al. (2022) found that ViT, when combined with self-supervised pre-training, performs on par with state-of-the-art benchmarks. In the field of cell biology, Sivanandan et al. (2023) utilized ViT with self-supervised pre-training to learn representations of cells across multiple fluorescence channels. Furthermore, Hatamizadeh et al. (2022a,b) leveraged ViT for segmenting 3D MRI images. Hussein et al. (2022) proposed to train multiple ViTs, one for each input channel, for epileptic seizure predictions. In contrast to previous work, we address a practical challenge in multi-channel imaging, where different datasets often have different available channels.1 To tackle this challenge, we propose ChannelViT, which unifies the modeling across data with different input channels and offers robust performance at test time, even when only a subset of the channels is available.

Figure 1: Illustration of Channel Vision Transformer (ChannelViT). The input for ChannelViT is a cell image from JUMP-CP, which comprises five fluorescence channels (colored differently) and three brightfield channels (colored in B&W). ChannelViT generates patch tokens for each individual channel, utilizing a learnable channel embedding chn to preserve channel-specific information. The positional embeddings pos and the linear projection \(W\) are shared across all channels.

**Robustness for Vision Transformer.** Robustness can be defined in different ways. One aspect is the vulnerability to adversarial attacks. Mahmood et al. (2021) found that ViTs are as susceptible to white-box adversarial attacks as CNNs. To improve robustness, Robust ViT incorporates more robust components like global pooling (Mao et al., 2022). Additionally, Chefer et al. (2022) propose regularization of the relevancy map of ViT to enhance robustness. Zhou et al. (2022); Zhang et al. (2021); Song et al. (2022) augment transformers with feature-wise attention to improve robustness and performance. Another approach focuses on generalization over distribution shifts Sagawa et al. (2019); Liu et al. (2021). Bao and Karaletsos (2023) introduce a context token inferred from ViT's hidden layers to encode group-specific information. In our work, we specifically focus on improving the generalization performance across different channel combinations, which is a common scenario in multi-channel imaging. We argue that the original ViT is sensitive to changes in input channels, as it computes a single patch token across all channels. In contrast, ChannelViT creates separate patch tokens for each channel, making it inherently more robust to variations in channel availability. To further enhance channel robustness, we introduce hierarchical channel sampling (HCS) during training. This methodology draws inspiration from prior studies on channel dropout Srivastava et al. (2014); Tompson et al. (2015); Hou and Wang (2019). However, instead of dropping out intermediate channels, our approach introduces a two-stage sampling algorithm designed to selectively mask out the input channels.

## 3 Method

ChannelViT is a modification of the original Vision Transformer (ViT) architecture proposed by Dosovitskiy et al. Unlike the original architecture, which condenses each multi-channel image patch into a single 'word' token, ChannelViT segregates channel-specific information into multiple tokens. This simple yet effective modification yields three key advantages:

1. ChannelViT facilitates reasoning across both positions and channels with the Transformer;
2. by transforming the channel dimension into the sequence length dimension, ChannelViT can seamlessly manage inputs with varying sets of channels;
3. ChannelViT can utilize existing efficient implementations of ViT.

In the following paragraphs, we explore the architecture and implementation of ChannelViT in detail. Figure 1 provides a visual overview of the model.

### Channel Vision Transformer (ChannelViT)

**Patch embeddings.** Consider an input image \(x\) with dimensions \(H\times W\times C\). Given a patch size of \(P\times P\), this image can be reshaped into a sequence of non-overlapping patches \[[x[c_{1},p_{1}],\dots,x[c_{1},p_{N}],\;x[c_{2},p_{1}],\dots,x[c_{2},p_{N}],\;\dots,\;x[c_{C},p_{1}],\dots,x[c_{C},p_{N}]],\] where \(x[c_{i},p_{n}]\) corresponds to the \(n\)-th \(P\times P\) image patch at channel \(c_{i}\) and \(N=HW/P^{2}\). As the Transformer encoder requires a sequence of one-dimensional vectors, each patch is flattened into a 1D vector.
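As a rough illustration of this channel-wise patchification, the following is a minimal sketch (ours, not the authors' released code) assuming a channels-last NumPy array; the learnable projection and the channel and position embeddings described next would then be applied to each of these tokens.

```python
import numpy as np

def channelwise_patchify(x, P):
    """Reshape an (H, W, C) image into a (C * N, P * P) array of flattened
    single-channel patches, where N = H * W / P**2.

    Tokens are ordered channel-major: all N patches of channel 1 first,
    then channel 2, and so on, matching the sequence written above.
    """
    H, W, C = x.shape
    assert H % P == 0 and W % P == 0, "image size must be divisible by the patch size"
    # (H, W, C) -> (H/P, P, W/P, P, C) -> (C, H/P, W/P, P, P)
    x = x.reshape(H // P, P, W // P, P, C).transpose(4, 0, 2, 1, 3)
    # flatten the spatial grid of patches and each P x P patch
    return x.reshape(C, -1, P * P).reshape(-1, P * P)

# toy example: an 8-channel 224x224 JUMP-CP-style crop with 16x16 patches
tokens = channelwise_patchify(np.zeros((224, 224, 8)), P=16)
print(tokens.shape)  # (8 * 196, 256)
```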
Unlike ViT, which generates a single token for a multi-channel image patch, ChannelViT produces one token from every single-channel image patch.

**Tied image filters.** We apply a learnable linear projection \(W\in\mathbb{R}^{P^{2}\times D}\) to the flattened patches. It is important to note that in a regular ViT, each channel has its own weights in the linear projection layer. In ChannelViT, our preliminary experiments suggest that tying the image filters across channels offers superior performance compared to untied image filters (Appendix D.2). Therefore, we tie the learnable projection \(W\) across channels. The intuition behind this is that the low-level image filters can be shared across channels (Ghiasi et al., 2022), and tying the parameters can improve the model's robustness across channels.

**Channel-aware and position-aware patch embeddings.** Despite tying the linear filter across channels, it remains essential to preserve channel-specific information, given the distinct characteristics of different channels (Appendix D.3). We introduce learnable channel embeddings \([\texttt{chn}_{1},\ldots,\texttt{chn}_{C}]\), where \(\texttt{chn}_{c}\in\mathbb{R}^{D}\). In line with the original ViT, we also incorporate learnable positional embeddings to maintain positional information of each patch. We denote the positional embeddings as \([\texttt{pos}_{1},\ldots,\texttt{pos}_{N}]\), where \(\texttt{pos}_{n}\in\mathbb{R}^{D}\). It is worth noting that these position embeddings are also shared across channels, enabling ChannelViT to recognize the same image patch across different channels. Finally, we prepend a learnable classifier token \(\texttt{CLS}\in\mathbb{R}^{D}\) to the sequence to encode global image features. The resulting input sequence can be written as \[\big[\texttt{CLS},\ \texttt{pos}_{1}+\texttt{chn}_{1}+W\cdot x[c_{1},p_{1}],\ \ldots,\ \texttt{pos}_{N}+\texttt{chn}_{1}+W\cdot x[c_{1},p_{N}],\ \ldots,\ \texttt{pos}_{1}+\texttt{chn}_{C}+W\cdot x[c_{C},p_{1}],\ \ldots,\ \texttt{pos}_{N}+\texttt{chn}_{C}+W\cdot x[c_{C},p_{N}]\big].\]

**Transformer encoder.** The above input sequence is fed into a Transformer encoder, which captures dependencies between image patches by embedding each patch based on its similarity to others Vaswani et al. (2017). Specifically, the Transformer encoder comprises alternating layers of multiheaded self-attention blocks and MLP blocks. Layer normalization, as proposed by Ba et al. (2016), is performed before each block, and residual connections He et al. (2016) are established after each block. We use the final layer representation of the CLS token to represent the input image. For classification tasks, a linear classifier is employed, followed by a Softmax function, to predict the corresponding label. We utilize the standard cross entropy loss as our training objective.

### Hierarchical channel sampling (HCS)

Training ChannelViT directly presents two challenges: 1) the sequence length becomes proportional to the number of channels, leading to a quadratic increase in the number of attention computations required; 2) training exclusively on all channels may result in the model not being prepared for partial channels at test time, thereby affecting its generalization capability. To mitigate these issues, we propose applying hierarchical channel sampling (HCS) during the training process. Specifically, for an image \(x\) with \(C\) channels, we proceed as follows:
1. First, we sample a random variable \(m\) uniformly from the set \(\{1,2,\ldots,C\}\). This \(m\) represents the number of channels that we will utilize during this training step;
2. Next, we sample a channel combination \(\mathcal{C}_{m}\) uniformly from all channel combinations that consist of \(m\) channels;
3. Finally, we return the image with only the sampled channels \(x[\mathcal{C}_{m}]\).

HCS is similar to channel dropout Tompson et al. (2015), but it differs in terms of the prior distribution imposed on the sampled channels. In channel dropout, each channel is dropped independently with a given probability. The probability of retaining \(m\) channels therefore varies drastically across different values of \(m\), which can negatively impact the final performance (Figure 4). In contrast, HCS ensures that the sampling procedure equally covers each \(m\). HCS can also be interpreted as simulating test-time distributions during training. Compared to group distributionally robust optimization (Sagawa et al., 2019), HCS minimizes the mean loss rather than the worst-case loss. This approach is logical when considering channel robustness, as having more channels will naturally enhance performance. We don't want the model to over-focus on the worst-case loss, which typically corresponds to situations when we sample very few channels.

## 4 Experiments

We evaluate ChannelViT across three image classification benchmarks: ImageNet Deng et al. (2009), JUMP-CP Chandrasekaran et al. (2022), and So2Sat Zhu et al. (2019). In Figure 2 (top), we illustrate the correlation among different input channels for each dataset. As observed, ImageNet exhibits a strong correlation among the three RGB channels. For JUMP-CP, while there is a strong correlation within the fluorescence channels and within the brightfield channels, there is minimal to no correlation between the brightfield and the fluorescence channels. A similar group structure among the channels is observed for So2Sat. Due to space constraints, our primary focus in the main paper is on the comparison between ViT and ChannelViT. For additional comparisons with MultiViT (Hussein et al., 2022), please refer to Appendix E.1. Comparisons with FANs (Zhou et al., 2022) can be found in Appendix E.2.

**JUMP-CP.** This is a microscopy imaging benchmark released by the JUMP-Cell Painting Consortium. The objective is to predict the applied perturbation based on the cell image. The dataset includes a total of 160 perturbations. We focused on a compound perturbation plate 'BR00116991', which contains 127k training images, 45k validation images, and 45k testing images. Each cell image contains 8 channels, comprising both fluorescence information (first five channels) and brightfield information (last three channels).

**So2Sat.** This satellite imaging benchmark encompasses half a million image patches from Sentinel-1 and Sentinel-2 satellites, distributed across 42 global urban agglomerations. Each image patch incorporates 18 channels, with 8 originating from Sentinel-1 and the remaining 10 from Sentinel-2. The primary objective of this dataset is to facilitate the prediction of the climate zone for each respective image patch, with a total of 17 distinct climate zones being represented.

**Implementation details.** We utilize the Vision Transformer (ViT) implementation provided by Facebook Research2. During training, we minimize the cross entropy loss. To ensure a fair comparison, both ViT and ChannelViT are subjected to identical optimization settings.
These settings encompass the use of the Adam optimizer, a learning rate scheduler featuring linear warmup and cosine decay, and a cosine scheduler for the weight decay parameter. **For a more detailed description of the hyper-parameter settings, we direct readers to the Appendix.** \begin{table} \begin{tabular}{l c c c c c} \hline \hline Backbone & \begin{tabular}{c} Use hierarchical \\ channel sampling? \\ \end{tabular} & \begin{tabular}{c} Val Acc. \\ on RGB \\ \end{tabular} & \begin{tabular}{c} Val Acc. \\ on R-only \\ \end{tabular} & \begin{tabular}{c} Val Acc. \\ on G-only \\ \end{tabular} & \begin{tabular}{c} Val Acc. \\ on B-only \\ \end{tabular} \\ \hline \multicolumn{6}{l}{_Models trained on three channels (RGB)_} \\ ViT-S/16 & ✗ & 71.49 & 29.39 & 33.79 & 21.18 \\ ViT-S/16 & ✓ & 73.01 & 68.86 & 69.78 & 67.59 \\ ChannelViT-S/16 & ✓ & **74.64** & **69.90** & **70.30** & **68.48** \\ \hline \multicolumn{6}{l}{_Expert models trained on only one channel_} \\ ViT-S/16 (R-only) & N/A & — & 70.04 & — & — \\ ViT-S/16 (G-only) & N/A & — & — & 70.61 & — \\ ViT-S/16 (B-only) & N/A & — & — & — & 69.47 \\ \hline \hline \end{tabular} \end{table} Table 1: Validation accuracy on ImageNet under different testing conditions (using all three channels or only one channel). We observe that 1) hierarchical channel sampling significantly boosts single-channel performance at test time; 2) ChannelViT consistently outperforms the ViT baseline. The expert models, trained using only one channel, represent the upper bound of potential performance. Figure 2: Correlation patterns among image channels (left) and the learned channel embeddings (right) for ImageNet, JUMPCP, and So2Sat. ImageNet displays a strong correlation among the three RGB input channels while JUMPCP and So2Sat show minimal correlation between different signal sources (Fluorescence vs. Brightfield, Sentinel 1 vs Sentinel 2). ### ImageNet Table 1 showcases our results on ImageNet, using ViT small as the representation backbone and a patch size of 16 by 16. We observe that without applying hierarchical channel sampling, ViT-S/16 achieves a validation accuracy of 71.49 using all three channels but fails to generalize when only one channel is provided at test time. Simulating this test-time channel drop during training via hierarchical channel sampling (HCS) significantly improves performance. For instance, the validation accuracy for using only the red channel improves from 29.39 to 68.86, demonstrating the effectiveness of HCS as a regularizer for enforcing channel robustness. Lastly, while there is limited room for improvement due to the strong correlations among the input RGB channels, ChannelViT still consistently outperforms the corresponding ViT baseline (by 1.2 on average), narrowing the gap (\(1.30\to 0.48\)) to the expert models that are trained using only one channel. Figure 4: HCS vs. input channel dropout on JUMP-CP (trained on all 8 channels). On the left, we present the accuracy of ViT-S/16 and ChannelViT-S/16 under varying input channel dropout rates and HCS. The accuracy is evaluated across all channel combinations, with the mean accuracy reported for combinations with an equal number of channels (represented on the horizontal axis). On the right, we illustrate the probability distribution of the sampled channel combinations during the training process. 
We observe 1) ViTs trained with input channel dropout tend to favor channel combinations that are sampled the most; 2) ChannelViT with input channel dropout outperforms ViT with input channel dropout; 3) HCS surpasses input channel dropout in terms of channel robustness. Figure 3: Relevance visualizations for ViT-S/16 and ChannelViT-S/16 trained on ImageNet. For each image, we generate the relevance heatmap for two distinct classes (espresso and wine for the top image, elephant and zebra for the bottom image) using the methodology described in Chefer et al. (2021). It’s observed that ChannelViT precisely allocates its attention to the relevant channel (red channel for predicting red wine). In the case of predicting a zebra, where the black and white contrast pattern is present across all channels, ChannelViT utilizes all channels for its prediction. ### JUMP-CP: microscopy cell imaging We present our results on the microscopy cell imaging benchmark, JUMP-CP, in Table 2. This benchmark involves a 160-way classification task. Due to computational constraints, we utilize ViT-S as our representation backbone. We consider both the standard resolution with a patch size of 16x16 and a high-resolution model with a patch size of 8x8. In the first part of our analysis, we train all models using only the five fluorescence channels and evaluate their performance on the test set under various input channel combinations. Our observations are as follows: 1) HCS significantly enhances the channel robustness for both ViT and ChannelViT; 2) High-resolution models consistently outperform their low-resolution counterparts; 3) With the exception of the 5-channel evaluation with a patch size of 8x8, ChannelViT consistently outperforms ViT. In the latter part of our analysis, we utilize all available channels for training, which includes three additional brightfield channels for each image. For ViT, the high-resolution ViT-S/8 model improves from 60.29 to 66.44, demonstrating the importance of the additional brightfield information, while the improvement for ViT-S/16 is marginal (from 55.51 to 56.87). When focusing on ChannelViT, we observe a significant performance boost over its ViT counterpart. ChannelViT-S/16 outperforms ViT-S/16 by 11.22 (68.09 vs 56.87) and ChannelViT-S/8 outperforms ViT-S/8 by 8.33 (74.77 vs. 66.44). These improvements are consistent across different channel combinations. As we have seen in Figure 2, fluorescence and brightfield channels provide distinct information. ChannelViT effectively reasons across channels, avoiding the need to collapse all information into a single token at the first layer, thereby enhancing performance. Lastly, we delve into a comparative analysis between input channel dropout and hierarchical channel sampling, as depicted in Figure 4. It is evident from our observations that the ViT model, when trained with HCS, consistently surpasses the performance of those trained with input channel dropout across all channel combinations. Furthermore, we discern a pronounced correlation between the performance of models trained with input channel dropout and the probability distribution of the number of channels sampled during training. 
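To make this contrast concrete, the following is a minimal sketch (ours, not the authors' implementation) of the two sampling schemes compared above; the fallback used when channel dropout would remove every channel is our own assumption.

```python
import random

def hcs_sample(channels):
    """Hierarchical channel sampling: first draw how many channels to keep
    (uniform over 1..C), then draw which channels (uniform over all subsets
    of that size).  Every channel count is equally likely."""
    m = random.randint(1, len(channels))
    return sorted(random.sample(channels, m))

def dropout_sample(channels, p=0.5):
    """Independent channel dropout: each channel is kept with probability 1 - p.
    The number of surviving channels is Binomial(C, 1 - p), so counts near
    C * (1 - p) dominate while very small or very large subsets are rarely seen."""
    kept = [c for c in channels if random.random() > p]
    return kept if kept else random.sample(channels, 1)  # never return an empty set

channels = list(range(8))  # e.g. the 8 JUMP-CP channels
print(hcs_sample(channels), dropout_sample(channels))
```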
\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline & & ViT-S/16 & ChannelViT-S/16 & ViT-S/16 & ChannelViT-S/16 & ViT-S/8 & ChannelViT-S/8 \\ \cline{3-8} \multicolumn{1}{c}{Use hierarchical} & \multirow{2}{*}{\(\mathbf{\chi}\)} & \multirow{2}{*}{\(\mathbf{\chi}\)} & \multirow{2}{*}{\(\mathbf{\chi}\)} & & \multirow{2}{*}{\(\mathbf{\check{\check{\check{\check{\check{\check{\check{\check{\check{\check{\check{\check{\check{\check{ }}}}}}}}}}}}}}\) & & \\ \multicolumn{1}{c}{channel sampling?} & & & & & & \\ \hline \multicolumn{8}{l}{_Training on 5 fluorescence channels_} \\ \multicolumn{8}{l}{} & 5 channels & 48.41 & 53.41 & 55.51 & 56.78 & **60.29** & 60.03 \\ \multicolumn{8}{l}{} & 4 channels & 0.85 & 15.13 & 43.59 & 45.94 & 48.80 & **49.34** \\ \multicolumn{8}{l}{} & 3 channels & 1.89 & 5.12 & 33.14 & 35.45 & 37.13 & **38.15** \\ \multicolumn{8}{l}{} & 2 channels & 1.46 & 1.22 & 25.24 & 26.57 & 27.40 & **27.99** \\ \multicolumn{8}{l}{} & 1 channel & 0.54 & 1.25 & 20.49 & 21.43 & 21.30 & **21.58** \\ \hline \multicolumn{8}{l}{_Training on all 8 channels (5 fluorescence channels \& 3 brightfield channels)_} \\ \multicolumn{8}{l}{} & 8 channels & 52.06 & 66.22 & 56.87 & 68.09 & 66.44 & **74.77** \\ \multicolumn{8}{l}{} & 7 channels & 5.91 & 41.03 & 49.35 & 61.02 & 59.01 & **68.42** \\ \multicolumn{8}{l}{} & 6 channels & 1.81 & 24.57 & 42.38 & 53.45 & 51.29 & **61.26** \\ \multicolumn{8}{l}{} & 5 channels & 2.46 & 14.20 & 35.78 & 45.50 & 43.39 & **53.05** \\ \multicolumn{8}{l}{} & 4 channels & 2.38 & 8.56 & 29.84 & 37.37 & 35.60 & **43.87** \\ \multicolumn{8}{l}{} & 2 channels & 2.70 & 5.65 & 24.94 & 29.68 & 28.59 & **34.19** \\ \multicolumn{8}{l}{} & 2 channels & 2.63 & 3.24 & 21.54 & 23.77 & 23.32 & **25.73** \\ \multicolumn{8}{l}{} & 1 channel & 3.00 & 2.08 & 19.92 & 20.84 & 20.41 & **21.20** \\ \hline \hline \end{tabular} \end{table} Table 2: Test accuracy of 160-way perturbed gene prediction on JUMP-CP. Two training settings are considered: one using only 5 fluorescence channels and the other incorporating all 8 channels, which includes 3 additional brightfield channels. During testing, all possible channel combinations are evaluated and we report the mean accuracies for combinations with the same number of channels (See Appendix E for detailed error analyses). We observe that cross channel reasoning is crucial when the inputs have independent information (fluorescence vs. brightfield). Data EfficiencyIn the realm of microscopy imaging, we often encounter situations where not all channels are available for every cell due to varying experiment guidelines and procedures. Despite this, the goal remains to develop a universal model capable of operating on inputs with differing channels. ChannelViT addresses this issue by treating different channels as distinct input tokens, making it particularly useful in scenarios where not all channels are available for all data. Table 3 presents a scenario where varying proportions (0%, 25%, 50%, 75%, 100%) of the training data have access to all eight channels, with the remaining data only having access to the five fluorescence channels. The performance of ViT and ChannelViT is evaluated at test time using both the five fluorescence channels (top section) and all eight channels (bottom section). 
Our observations are as follows: 1) When only a limited amount of 8-channel data (25%) is available, both ChannelViT and ViT show a decrease in performance when utilizing eight channels at test time compared to five channels; 2) As the availability of 8-channel data increases, the performance of the ViT baseline on the fluorescence evaluation steadily declines (from 55.51 to 45.75), while the performance of ChannelViT sees a slight improvement (from 56.78 to 57.60); 3) When evaluated on all eight channels, ChannelViT significantly outperforms ViT, with an average gap of 9.62. Channel-specific attention visualizationAttention heatmaps, generated by Vision Transformers (ViTs), have emerged as a valuable tool for interpreting model decisions. For instance, Chefer et al. (2021) introduced a relevance computation method, which assigns local relevance based on the Deep Taylor Decomposition principle and subsequently propagates these relevance scores through the layers. However, a limitation of ViTs is their tendency to amalgamate information across different channels. In the realm of microscopy imaging, discerning the contribution of each fluorescence channel to the predictions is vital due to their distinct biological implications. Figure 5 (right) presents the class-specific relevance visualizations for ViT-S/8 and ChannelViT-S/8. For the top cell labeled KCNH76, ChannelViT appears to utilize information from the Mito channel. For the bottom cell labeled KRAS, ChannelViT seems to utilize information from the ER and RNA channels for its prediction. Compared to ViT, ChannelViT facilitates the examination of contributions made by individual channels. In Figure 5 (left), we further compute the maximum attention score (averaged over 100 cells) for each cell label (perturbed gene) and each input channel. Our observations indicate that ChannelViT focuses on different channels for different labels (corresponding to perturbed genes), with the Mito channel emerging as the most significant information source. This heatmap, which describes the discriminability of different labels over different channels, can also aid in better understanding the relationships between different gene perturbations. ### So2Sat: Satellite Imaging Our results on the So2Sat satellite imaging benchmark are presented in Table 4. We evaluate two official splits: random split and city split, training both ViT-S/8 and ChannelViT-S/8 models using hierarchical channel sampling across all channels (Sentinel 1 & 2). \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \multicolumn{3}{c}{Combine fluorescence-only data and 8-channel data for training} \\ \cline{2-6} \% fluorescence-only data & 100\% & 75\% & 50\% & 25\% & 0\% \\ \% 8-channel data & 0\% & 25\% & 50\% & 75\% & 100\% \\ \hline \multicolumn{6}{c}{_Evaluating on 5 fluorescence channels_} \\ ViT-S/16 & 55.51 & 52.55 & 51.65 & 49.53 & 45.75 \\ ChannelViT-S/16 & **56.78** & **58.01** & **58.19** & **58.42** & **57.60** \\ \hline \multicolumn{6}{c}{_Evaluating on all 8 channels_} \\ ViT-S/16 & — & 50.29 & 52.47 & 54.64 & 56.87 \\ ChannelViT-S/16 & — & **57.97** & **61.88** & **64.80** & **68.09** \\ \hline \hline \end{tabular} \end{table} Table 3: ViT vs. ChannelViT when we have varying channel availability during training. Both models are trained using HCS. The accuracy is evaluated using five fluorescence channels (top) and all eight channels (bottom). 
ChannelViT consistently outperforms ViT across all settings, and the performance gap notably widens as access to more 8-channel data is provided. Upon evaluation, ChannelViT demonstrats superior performance over its ViT counterpart, with an improvement of 1.28 for the random split and 0.53 for the more challenging city split. In the realm of satellite imaging, Sentinel 1 channels are derived from a Synthetic Aperture Radar operating on the C-band, while Sentinel-2 is a multispectral high-resolution imaging mission. It's worth noting that Sentinel-2 data can be cloud-affected, underscoring the importance of models that can robustly operate under partial signals using only Sentinel 1. In both random and city splits, ChannelViT significantly outperforms ViT (59.75 vs. 50.62 in random split and 47.39 vs. 41.07 in city split). Lastly, we explore the efficiency of ChannelViT in combining satellite training data with different signals. As depicted in Figure 6, we consider varying proportions (10%, 25%, 50%, 75%, 100%) of the training data with access to both Sentinel 1 & 2 signals, while the remaining data only has access to Sentinel 1 signals. The models are evaluated using all Sentinel 1 & 2 signals. Our observations consistently show ChannelViT outperforming ViT. Interpreting the channel embeddings learned by ChannelViTFigure 2 presents the correlations between the input channels. It's noteworthy that the first four channels of Sentinel-1 correspond to: 1) the real part of the VH channel; 2) the imaginary part of the VH channel; 3) the real part of the VV channel; and 4) the imaginary part of the VV channel. These four input channels are uncorrelated, as evidenced by the bottom left corner of the So2Sat visualization heatmap. However, upon examining the correlations between the learned channel embeddings, we observe a high correlation between the real and imaginary parts of both VV and VH channels. This intuitively aligns with the fact that the real and imaginary parts are equivalent in terms of the information they pro \begin{table} \begin{tabular}{l l l} \hline \hline & Sentinel 1 & Sentinel 1 \& 2 \\ & (Channel 0-7) & (Channel 0-17) \\ \hline _Random split (Zhu, 2021)_ & & \\ ViT-S/8 & 50.62 & 97.82 \\ ChannelViT-S/8 & **59.75** & **99.10** \\ \hline _City split (Zhu et al., 2019)_ & & \\ ViT-S/8 & 41.07 & 62.48 \\ ChannelViT-S/8 & **47.39** & **63.01** \\ \hline \hline \end{tabular} \end{table} Table 4: Test accuracy of 17-way local climate zone classification on So2Sat. We consider two official splits: random split and city split. Both ViT and ChannelViT are trained on all channels with hierarchical channel sampling. We evaluate their performance on 18 channels (Sentinel 1 & 2) as well as partial channels (Sentinel 1). Figure 5: Left: Class-specific relevance attribution of ChannelViT-S/8 for each cell label (perturbed gene) on JUMP-CP. For each perturbed gene (y-axis) and each channel (x-axis), we calculate the maximum attention score, averaged over 100 cells from that specific cell label. This reveals that ChannelViT focuses on different input channels depending on the perturbed gene. Right: A visualization of the relevance heatmaps for both ViT-S/8 (8-channel view) and ChannelViT-S/8 (single-channel view). Both models are trained on JUMP-CP using HCS across all 8 channels. ChannelViT offers interpretability by highlighting the contributions made by each individual channel. vide. 
This demonstrates that ChannelViT learns meaningful channel embeddings, which can provide additional insights into the relationships between different input signals. ## 5 Conclusion In conclusion, our proposed model, ChannelViT, effectively addresses the unique challenges of multi-channel imaging domains. By enhancing reasoning across input channels and seamlessly handling inputs with varying sets of channels, ChannelViT has consistently outperformed its ViT counterpart in our evaluations on ImageNet and diverse applications such as medical, microscopy cell, and satellite imaging. The introduction of Hierarchical Channel Sampling (HCS) further bolsters the model's robustness when testing with different channel combinations. Moreover, ChannelViT not only improves data efficiency but also provides additional interpretability, underscoring its potential for broad applications in the field of multi-channel imaging.
2309.17216
The long-term impact of (un)conditional cash transfers on labour market outcomes in Ecuador
Despite the popularity of conditional cash transfers in low- and middle-income countries, evidence on their long-term effects remains scarce. This paper assesses the impact of the Ecuador's Human Development Grant on the formal sector labour market outcomes of children in eligible households. This grant -- one of the first of its kind -- is characterised by weak enforcement of its eligibility criteria. By means of a regression discontinuity design, we find that this programme increased formal employment rates and labour income around a decade after exposure, thereby curbing the intergenerational transmission of poverty. We discuss possible mediating mechanisms based on findings from previous literature and, in particular, provide evidence on how the programme contributed to persistence in school in the medium run.
Juan Ponce, José-Ignacio Antón, Mercedes Onofa, Roberto Castillo
2023-09-29T13:15:07Z
http://arxiv.org/abs/2309.17216v1
# The long-term impact of (un)conditional cash transfers on labour market outcomes in Ecuador+ ###### Abstract Despite the popularity of conditional cash transfers in low- and middle-income countries, evidence on their long-term effects remains scarce. This paper assesses the impact of the Ecuador's Human Development Grant on the formal sector labour market outcomes of children in eligible households. This grant--one of the first of its kind--is characterised by weak enforcement of its eligibility criteria. By means of a regression discontinuity design, we find that this programme increased formal employment rates and labour income around a decade after exposure, thereby curbing the intergenerational transmission of poverty. We discuss possible mediating mechanisms based on findings from previous literature and, in particular, provide evidence on how the programme contributed to persistence in school in the medium run. **Keywords:** conditional cash transfers, long-term effects, formal labour market, employment, labour income. **JEL classification:** I38, J21. J32, J25, J46. ## 1 Introduction Since Mexico launched _Progresa_ in 1997, nearly two hundred countries of different levels of development across the five continents have introduced conditional cash transfers (CCTs). The most salient feature of these programmes is that, to be eligible for the welfare payments, recipients must meet requirements related to child school attendance, medical check-ups or the like. Therefore, the most distinctive feature of CCTs is their objective of improving the economic outcomes of the next generation, thereby preventing the transmission of poverty to the next cohort. Paradoxically, whereas the stream of studies evaluating the short-term effects of CCTs is large and reaches rather optimistic conclusions, particularly on educational enrolment, evidence on CCTs' long-run impact remains scarce. Furthermore, some voices have warned against the common practice--and the potential bias associated with it--of rarely conducting long-term follow-ups on social interventions when their short-run effects are small (Leight, 2022), as has sometimes been the case with CCT programmes. This paper examines the impact of one of the pioneering programmes of its kind, the Human Development Grant (HDG) in Ecuador, where enforcement of the grant's eligibility criteria has historically been very weak, on the future work outcomes of young adults eligible for the programme when they were children under 15 years of age. Combining information from poverty censuses (the specific databases used to administer the programme) and social security records in a regression discontinuity design (RDD), we estimate the local intention-to-treat (ITT) effect of the HDG on the proportion of months worked and the labour income earned in the formal labour market by eligible individuals after their 21st birthday. Our findings indicate that the HDG is fulfilling one of its core missions. A child's having been eligible for the HDG in 2008/2009 results in a 2.5 percentage point increase in her effective participation in the formal labour market and approximately $9 more in monthly labour income earned in this sector a decade later. These effects are sizeable considering the scale of informal activities and national living standards. We discuss several possible mechanisms that may be at work here. 
In particular, we review previous studies documenting the grant's positive contribution to human capital formation and provide some evidence on its role in preventing school dropout in the medium term in explaining this positive outcome. We contribute to the literature in two different ways. First and most importantly, our work adds to the limited number of existing studies on the long-term impact of CCTs (Molina Millan et al., 2019). Second, given the weak enforcement of requirements to receive the subsidy, it provides additional evidence to weigh in the debate on the relevance of conditionality in these programmes (Baird et al., 2013, 2014). The rest of the paper unfolds as follows. Section 2 discusses the previous literature and frames our work within it. Section 3 discusses our research design, including the institutional setting, data and empirical strategy. Section 4 presents the results of our analyses and discusses their external validity. Section 5 summarises and discusses the main implications of the research. ## 2 Background and related literature Conditional cash transfers have represented the spearhead of antipoverty policies in Latin America and the Caribbean and other low- and middle-income regions over the last two decades. After the pioneering Mexican and Brazilian experiences, the HDG, in its current version, was rolled out as one of the first programmes of this kind operating in the hemisphere (Cecchini & Madariaga, 2011; Rawlings, 2005; Villatoro, 2005). Similarly to most benefits of this kind, in addition to targeting socially disadvantaged households, the payment of a monthly monetary sum is conditional on families meeting certain criteria related to children's school enrolment and attendance of medical check-ups. Nevertheless, as we explain in the next section, one idiosyncratic feature of the Ecuadorian programme is that the enforcement of those requirements has been (at best) quite weak. These programmes aim to improve the lives of households experiencing deprivation and to reduce future poverty. The conditions attached to the benefits are aimed at encouraging human capital accumulation--particularly among children--to prevent the intergenerational transmission of poverty (Cecchini & Madariaga, 2011; Fiszbein et al., 2009). According to Fiszbein et al. (2009), the rationale for imposing eligibility conditions follows from two different considerations.The first is the assumption that parental investment in children's human capital would be suboptimal in the absence of such requirements, mainly because of the existence of positive externalities related to education and health and to information problems (e.g., about the returns to human capital), principal-agent problems (incomplete altruism or even conflicts between the two parents) or behavioural elements (excessively high discount rates on the part of parents). Political economy considerations are relevant as well. Demanding that poor households meet certain criteria to access the targeted benefits aims to foster greater acceptance of such programmes among taxpayers than might be the case if the cash transfers were unconditional. The literature that assesses the performance of these types of programmes is notably abundant (see, among many others, Baird et al. (2014) and Parker et al. (2007)). The case of the Ecuadorian programme has received much attention from the research community. 
In addition to establishing its relevance for poverty alleviation (Fiszbein et al., 2009; Ordonez et al., 2015; World Bank, 2018), this body of work has highlighted the positive impact of the programme on school enrolment--see Ponce (2023) for a survey--and a reduction in child labour (Edmonds & Schady, 2012; Martinez Dobronsky & Rosero Moncayo, 2012; Schultz, 2004). Overall, the grant does not appear to have significantly affected child development (Fernald & Hidrobo, 2011; Paxson & Schady, 2007, 2010; Ponce & Bedi, 2010), health (Fernald & Hidrobo, 2011) or adult labour supply (Bosch & Schady, 2019). According to the discussion of Barrientos (2012) on the overall benefits of social protection, cash transfers of this kind can curb the intergenerational transmission of poverty through several channels. By alleviating credit constraints, they can contribute to human capital formation (Baird et al., 2013, 2014; Parker et al., 2007). Furthermore, they may foster investment in durable assets that result in future income streams (Blattman et al., 2020; Gelders & Bailey-Athias, 2019; Gertler et al., 2012; Maluccio, 2010; Martinez, 2005). Relatedly, in some cases, mainly depending on the recipient (e.g., if only mothers receive the grant), social protection benefits can alter household resource allocation such that families spend more on goods and services that specifically serve children's interests (Angelucci & Attanasio, 2013; Attanasio et al., 2010; Attanasio & Lechene, 2014; Bergolo & Galvan, 2018; Macours et al., 2012; Schady & Rosero, 2008). The number of studies on the long-term impact of these programmes is quite limited (Molina Millan et al., 2019) but offers some grounds for optimism. Recent works taking advantage of experimental or quasi-experimental designs find that child eligibility for a conditional cash transfer improves educational attainment and labour market outcomes at a later age in Mexico (Kugler & Rojas, 2018; Parker & Vogl, 2023), Honduras (Molina Millan et al., 2020), Nicaragua (Barham et al., 2017) and Colombia (Garcia et al., 2012). Furthermore, this literature suggests additional benefits in the latter two countries (e.g., positive effects on cognitive development and health behaviours) (Barham et al., 2013; Garcia et al., 2012). Three of these works (Barham et al., 2017; Garcia et al., 2012; Kugler & Rojas, 2018) specifically refer to the impact on labour market formalization, the main issue on which we provide evidence. Using data from a randomised controlled trial and a regression discontinuity design, Araujo et al. (2018) study the impact of the Ecuadorian HDG on test scores, educational attainment and work outcomes at a horizon of approximately ten years among children who lived in households eligible for the transfer. They report much more nuanced results than those for other Latin American countries. Their findings indicate a positive effect on secondary school completion and a positive impact, albeit nonrobust, on female employment. Our work aims to contribute to this still scant literature by providing additional evidence on the Ecuadorian case. Specifically, we focus on the likelihood of working in the formal sector and labour income earned in this sector among children eligible for the HDG in 2008/2009 after they reached 21 years of age, approximately a decade later. Our analysis exploits a different time frame from that in Araujo et al. 
(2018)--centred on 2002/2003-2013/2014--and looks at different labour market outcomes (success in the formal sector rather than overall employability). A related work (Mideros and Gassmann, 2021) evaluates social mobility in terms of a composite welfare index (based on a set of family assets and characteristics) among Ecuadorian households, leveraging the administrative register used for targeting the grant (which, by its own design, suffers from high attrition rates, well above 50%). Employing a difference-in-differences strategy, this paper concludes that HDG eligibility enhances absolute and relative social mobility. Note that this study explores how a household's own socioeconomic status evolves over time rather than whether the grant actually pushes the next generation forward by improving its socioeconomic outcomes in adulthood. Our study also adds to the literature on the relevance of the conditionality associated with these types of benefits (Baird et al., 2013, 2014). As we explain in Section 3.1, in contrast to that for other programmes of these kinds in the region, the enforcement of conditionality for the Ecuadorian HDG is weak. National authorities have always publicly announced the requirements in terms of school enrolment and medical check-ups, but they have never monitored whether households actually meet them. Consequently, it difficult to describe the HDG as a truly _conditional_ transfer. Some authors argue that the programme actually offers something between a conditional and unconditional benefit (Schady and Araujo, 2008). In assessing children's future performance (when they become young adults) in the formal labour market, we think it useful to first describe this sector's characteristics in Ecuador. The definition of formality used here follows the "legalistic" or "social protection" approach (Gasparini and Tornarolli, 2009). Formal workers are those affiliated with social security, who consequently enjoy rights to certain social benefits (contributory old-age, survivors' and disability pensions; maternity and sickness benefits; unemployment insurance and health care [Social Security Administration, 2020]). Informality is a multidimensional phenomenon associated with different explanatory factors (economic structure, lack of law enforcement, poor public services and burdensome regulatory frameworks). The extent to which participation in the informal labour market is a voluntary decision is also a subject of debate (Biles, 2009; Cimoli et al., 2006; La Porta and Shleifer, 2014; Loayza et al., 2009; Maloney, 2004; Portes and Schauffler, 1993; Vuletin, 2008). Informality in the Ecuadorian labour market is pervasive and almost endemic. This segment accounted for 72% of total employment in 2008 (International Labour Organization [ILO], 2023). Regardless of the factors behind the existence of this sector, informal workers in Ecuador have on average much worse social outcomes than those in the formal labour market, such as lower earnings, higher poverty rates and worse future career prospects (Canelas, 2019; Matano et al., 2020; Maurizio & Monsalvo, 2021; Maurizio & Vasquez, 2019; Maurizio et al., 2023). Furthermore, available evidence suggests that informality can result in negative externalities as well. 
A larger informal sector not only results in higher inequality but also negatively impacts tax collection (Boitano & Abanto, 2019), the health status of the population (Utzet et al., 2021), pension coverage (Daude et al., 2015) and political participation (Baker & Dorr, 2022). Therefore, working in the formal sector in Ecuador is definitely a positive socioeconomic outcome that is of unquestionable interest to policymakers. ## 3 Research design ### Institutional setting The HDG started in Ecuador in 2003 as a reformulation of the Solidarity Grant, a poorly targeted earlier social benefit introduced in 1998 as a safety net that aimed to compensate socially disadvantaged families for the removal of subsidies on gas, petrol and electricity, which was part of the liberalisation and adjustment policies adopted by the country in the late 1990s. The HDG programme aspires to alleviate poverty in the short term and foster human capital formation to prevent the intergenerational transmission of poverty. Specifically, it targets vulnerable families with children under the age of 16. The redesign of the programme in 2003 was aimed at improving its targeting. In particular, the Ecuadorian government--with technical assistance from universities--created an ad hoc poverty census to identify the most vulnerable population more accurately (in Spanish, _Sistema de Identificacion y Seleccion de Beneficiarios de Programas Sociales_ [SELBEN]). The data collection on households' socioeconomic and demographic characteristics relied on visits, public calls and the option for households to sign up for the register on a voluntary basis and request that the government evaluate their eligibility for inclusion. The national authorities initially developed an eligibility index (from 0 to 100, from the lowest to the highest well-being level) based on a principal component analysis of 27 household variables. Households in the first two quintiles--scoring less than 50.65 in the index, which predicted household consumption per capita reasonably well--and children below 16 years old were eligible for the grant. In principle, the programme imposes two conditions on grant beneficiaries. The first requirement relates to education: children aged six to 17 must be enrolled in school and have monthly attendance of at least 80%. The second condition is that infants and children up to five years old attend a series of medical check-ups (one preventive check-up every two month for children below one year old and and every six months for children between one and five years old). Unlike other CCTs that operate in Latin America and the Caribbean, the programme did not set up any enforcement mechanism to verify that beneficiaries meet these conditions. Consequently, the Ecuadorian authorities do not suspend the benefits if families fail to comply with the requirements, making the transfer unconditional in practice. Although the Ecuadorian government does not verify compliance, families do commit in writing to satisfying the conditions, and authorities have always publicly emphasised the need to meet the mentioned requirements. Theoretically, leaving aside the nonenforcement of conditionality, loss of eligibility (from the presence of children or under the SELBEN index) implies benefit withdrawal. In practice, government employees regularly visit households to update the poverty census. This process might result in the suspension of the HDG. Regrettably, information on how often this occurs is scarce. 
For instance, the government maintains the grant as long as households have children below 18 years old. Loss of eligibility under this criterion should be easy to monitor, but in practice updates to eligibility status can take months or years to be implemented. In addition, Ecuadorian authorities tend to withdraw the grants from a large number of households at the same time rather than on a continuous case-by-case basis. The government committed at the programme's outset to renewing the registry approximately every five years and, to this end, set up the so-called Social Registry 2008/2009. Administered by the Social Registry Unit, a public agency under the auspices of the Ministry of Economic and Social Inclusion of Ecuador, the Social Registry operated similarly to the SELBEN and was aimed at improving the targeting of the HDG. Since the 2008/2009 wave, the National Institute of Statistics and Censuses of Ecuador has been responsible for data collection. Using nonlinear component analysis and 59 household variables, the Social Registry Unit developed a Social Registry Index, rescaled from 0 to 100, to determine eligibility for different social benefits. The government set the cut-off point at 36.5987 points. Together with the rules related to the presence of children below 16 years old, this criterion determined eligibility for the grant from August 2009 to August 2014, when the new Social Register 2013/2014 came into force. The updating of the database implied the cessation of HDG payments to more than 200,000 households (Buser et al., 2017). Apart from being the flagship social programme of successive Ecuadorian governments for more than two decades, the HDG has become one of the most important benefits of this kind across Latin America and the Caribbean. In 2021, it represented approximately 1% of the GDP and reached almost 8% of the population (more than 12% in 2011). The basic amount of the HDG was $11.5 from 2003 to 2007, $30 in 2008, and $35 in the period 2009-2011 and has been $50 since 2012 (Economic Commission for Latin America and the Caribbean [ECLAC], 2023). Whenever possible, the recipients are mothers, who can withdraw the benefit from private banks (in a recent change, eligible mothers can now receive the payment directly in their bank account). ### Data Our analysis leverages data from two different sources. The first is the Social Registry database (Social Registry Unit, 2023). As explained above, this is a cadastre administered by the Ministry of Economic and Social Inclusion that contains static socioeconomic and demographic information of Ecuadorian households. it allows public institutions to determine the eligibility of beneficiaries of social benefits and achieve appropriate targeting according to the procedure described above. In particular, we are able to access the Social Registry 2008/2009 and 2013/2014 waves. The Social Registry database includes the Social Registry Index (and the component variables needed for its calculation) that governs receipt of the HDG. From August 2009, eligible households were those with fewer than 36.5987 points on the index. The Social Registry 2008/2009 index--hereafter, the poverty index--is the forcing variable in our analysis. Furthermore, with the aim of exploring potential mechanisms through which the effects of the grant operate, we retrieve information on school enrolment in 2013/2014 for those individuals present in the database in both waves. 
Regrettably, we can recover these details only for a subsample of approximately 40% of the young people included in our exercise. Furthermore, we have access to information on the actual receipt of the HDG in 2010 according to administrative sources. We merge these data from the Social Registry with information from the Ecuadorian Social Security Institute (2023), our second data source. It provides data on monthly labour income reported to social security until May 2019. This administrative register allows us to track the formal labour market performance of individuals in all households included in the Social Registry in 2008/2009. We express all earnings values in US$ at constant 2019 prices using the monthly consumer price index (National Institute of Statistics and Censuses of Ecuador, 2023). With these data, we are able to recover the HDG eligibility status of children between 11 and 15 years old in 2008 jointly with their subsequent participation in and labour income from the formal sector upon their reaching 21 years old. Our analysis centres on those young people whom we can observe at least six months after their 21st birthday (therefore, we use social security information from between 2013 and 2019) and who lived with families listed in the Social Registry 2008/2009 whose poverty index then was between 30.5 and 42.5 points. The choice of 21 years old as our lower bound reflects the standard minimum age for graduating from university in Ecuador. For younger individuals, high labour market participation is not necessarily a positive socioeconomic outcome. Being in work before 21 years old may simply reflect that the young person already left the educational system. Given that our sample comprises children of different ages, we cannot observe them for the same time length (the oldest individuals reached age 21 earlier than the others, so we can follow them for a longer period). For this reason, the labour market outcomes in our analysis are the proportion of months that the individual is in formal employment and average formal labour income (calculated over the total number of months after the 21st birthday in which we observe activity for each person).1 Footnote 1: Other works using longitudinal data select a similar outcome (Autor et al., 2014; Dauth et al., 2021; Utar, 2018), based on the cumulative time in employment and cumulative earnings. As our individuals reach the age threshold at different times, we divide the cumulative months in employment and monthly labour income by the number of months. Table 1 displays descriptive statistics of the labour market outcomes of interest and the covariates considered in the analysis. Table 1. Descriptive statistics \begin{tabular}{l r r} \hline & Mean & \multicolumn{1}{c}{Standard deviation} \\ \hline Proportion of months formally employed (since 21 years of age) & 0.167 & 0.310 \\ Average monthly formal labour income (US$, since 21 years of age) & 72.862 & 151.603 \\ Education enrolment in 2008/2009 & 0.905 & 0.294 \\ Education enrolment in 2013/2014 & 0.501 & 0.500 \\ Poverty index in 2008/2009 & 36.183 & 3.412 \\ HDG recipiency in 2010 & 0.580 & 0.493 \\ Female & 0.493 & 0.500 \\ Mestizo & 0.801 & 0.399 \\ Age & 13.416 & 1.456 \\ Household head's years of education & 6.946 & 3.388 \\ Household size & 4.836 & 1.636 \\ Rural area & 0.247 & 0.431 \\ No. of observations & 260,284 & \\ \hline \end{tabular} _Note_: The number of observations for HDG receipt in 2010 and education enrolment in 2014 is 259,534 and 111,673, respectively. 
All the control covariates for which we do not specify the year refer to their values in the base period, 2008/2009. _Source_: Authors' analysis. ### Empirical strategy We use a sharp regression discontinuity design (RDD) to evaluate the long-term impact of the HDG. We focus on the intention-to-treat (ITT) effect, using a research design that exploits the discontinuity in eligibility at the cut-off point in the Social Registry 2008/2009 index (36.5987). The bulk of our analyses rely on nonparametric local polynomial estimation methods using a data-driven mean squared error (MSE) optimal bandwidth (Calonico et al., 2019, 2019; Cattaneo and Titiunik, 2022; Cattaneo et al., 2020). Our rationale in choosing this method is that global high-order polynomials (i.e., a parametric approach) lead to noisy estimates, sensitivity to the polynomial degree and poor coverage of confidence intervals (Gelman and Imbens, 2019). Our application of this method unfolds as follows (Cattaneo et al., 2020). First, we select a polynomial of order \(p\) and a kernel \(K(\cdot)\). The standard practice is to choose \(p=1\) (local linear regression). We also employ \(p=2\) (local quadratic regression) as a robustness check. Second, we choose a bandwidth. We select a bandwidth \(h\) that minimizes the asymptotic MSE of the point estimator for our baseline analysis (Calonico et al., 2019). We also consider in our robustness checks an alternative optimal bandwidth that minimises the coverage error rate (CER) of the robust bias-corrected confidence interval. Third, we choose a kernel function to weigh the observations in the interval of interest around the cut-off point (\(c\)). We choose a triangular kernel, \(K(u)=(1-|u|)\mathbb{1}(|u|\leq 1)\), which assigns nonnegative weights to each transformed observation (centred around the cut-off and then divided by the selected bandwidth) based on the distance between the observation's score and the cut-off. This kernel function is MSE-optimal. Fourth, on both sides of the cut-off, we separately estimate a weighted least squares regression, with the weights given by the kernel function, \(K\left(\frac{X_{i}-c}{h}\right)\), of the outcome variable (\(Y_{i}\)) on an intercept, the polynomial on the recently forcing variable (\(X_{i}-c\)) and the control covariates of interest (\(Z_{i}\)), all of them at their 2008/2009 values. Formally, we estimate the intercepts for observations below and above the cut-off (\(\hat{\mu}_{-}\) and \(\hat{\mu}_{+}\)), respectively, as follows: \[\begin{split}\hat{\mu}_{-}&:\quad\hat{Y}_{i}=\hat{ \mu}_{-}+\hat{\beta}_{-}(X_{i}-c)+\hat{\Theta}_{-}Z_{i}\\ \hat{\mu}_{+}&:\quad\hat{Y}_{i}=\hat{\mu}_{+}+\hat{ \beta}_{+}(X_{i}-c)+\hat{\Theta}_{+}Z_{i}\end{split} \tag{1}\] The difference between the estimated intercepts, \(\hat{\tau}=\hat{\mu}_{-}-\hat{\mu}_{+}\), yields the estimated ITT of the HDG in the long term. We assess the precision of this point estimate using robust confidence intervals (more conservative than conventional ones) (Calonico et al., 2014). Note that a household's being below the relevant threshold does not only imply that it received the grant when the new targeting rule entered in force, but also it is very likely that it was paid for several months (or even years) until the eligibility status of the household eventually changed (and the Ecuadorian authorities noticed this fact) or the Social Registry Unit updated the cadastre. 
Therefore, a household's being below the cut-off implies that it might have received the HDG for several years. According to the administrative register of payments for 2010 (the only year for which we have access to this type of information), the targeting of the grant (relative to the information in the database for households between 30.5 and 42.5 points in the index that we use in our analysis) was excellent, with very low undercoverage (2.56% of households below the threshold did not receive the grant in 2010) and leakage rates (only 9.58% of families above the cut-off received payments).2 Therefore, our estimate not only captures the (intrinsically interesting) ITT effect but also provides a reasonable lower bound on the local average treatment effect. Footnote 2: Obviously, our figures refer only to the households in the Social Registry. Considering the whole population that theoretically should receive the grant, although the take-up rates of the programme are high (65% in the two poorest quintiles in 2013), travel costs, personal identity stigma and dissatisfaction with the government pose notable obstacles to claiming the HDG (Rinehart & McGuire, 2017). Furthermore, we estimate the impact of receiving the grant in 2010 using a fuzzy discontinuity design strategy, where we instrument the payment with the discontinuity in take-up rates around the cut-off in the Social Registry Index 2008/2009. This approach is not exempt from problems: the discontinuity in the cut-off is very likely to be correlated with receipt of the HDG not only in 2010 but also in subsequent years (for which we do not have access to the relevant information). Consequently, we do not consider this strategy as capable of offering a better understanding of the long-run effects of the grants. Conversely, this approach is not superior to our exercise of estimating the ITT effect, so we present it simply as a reassuring robustness check, obtaining numerically identical effects. The source of identification of the ITT effect is the reduction in households' likelihood of being paid the HDG upon crossing the eligibility threshold in the Social Registry Index 2008/2009. In other words, the policy application is as good as randomised in the neighbourhood of the cut-off if the research design satisfies certain conditions. The first condition is that there must be no manipulation of the forcing variable by families. Altering their own position relative to the threshold is virtually impossible for families since the Ecuadorian authorities decided the value of the threshold only some time after having built the whole database. In any case, we perform a manipulation test of the density discontinuity based on local polynomial density methods proposed by Cattaneo et al. (2018, 2020). Its results do not allow us to reject the absence of manipulation (Figure 1). The second condition is that there must be no correlation between an observation's being below the cut-off point and the factors affecting the labour market outcome. We assess whether there is any discontinuity in the average values of the observable covariates (which refer to the period approximately a decade before the realization of the outcome) through the lens of the specification outlined above. We do not find any evidence of a shift in these predetermined characteristics at the relevant index threshold (Table 2). Therefore, we have no reason to expect any discontinuity in the relevant unobservable factors at the cut-off. 
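For intuition, the following is a minimal NumPy sketch of the estimator in equation (1): separate triangular-kernel-weighted linear fits on each side of the cut-off, with the ITT given by the difference of the fitted intercepts. It uses simulated data and a fixed bandwidth, and it omits the control covariates, fixed effects, data-driven bandwidth selection and robust bias-corrected clustered inference employed in the paper (which packages such as rdrobust automate).

```python
import numpy as np

def rdd_itt(y, score, cutoff=36.5987, h=2.0):
    """Sharp-RDD intention-to-treat estimate: local linear regressions on each
    side of the cutoff with triangular kernel weights; the bandwidth h here is
    a placeholder rather than the MSE-optimal choice used in the paper."""
    x = score - cutoff
    w = np.clip(1 - np.abs(x) / h, 0, None)  # triangular kernel weights

    def side_intercept(mask):
        X = np.column_stack([np.ones(mask.sum()), x[mask]])
        W = np.diag(w[mask])
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[mask])
        return beta[0]  # fitted value at the cutoff

    below = (x < 0) & (w > 0)    # eligible households (index below the cutoff)
    above = (x >= 0) & (w > 0)   # ineligible households
    return side_intercept(below) - side_intercept(above)

# toy illustration with simulated data (not the Social Registry microdata)
rng = np.random.default_rng(0)
score = rng.uniform(30.5, 42.5, 5000)
y = 0.15 + 0.025 * (score < 36.5987) + rng.normal(0, 0.3, 5000)
print(round(rdd_itt(y, score), 3))
```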
Figure 1: Test for manipulation of the assignment variable based on density discontinuity \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & (I) & (II) & (III) & (IV) & (V) & (VI) \\ & Female & Mestizo & Age & \begin{tabular}{c} Household \\ head's years \\ of education \\ \end{tabular} & \begin{tabular}{c} Household \\ size \\ \end{tabular} & \begin{tabular}{c} Rural \\ area \\ \end{tabular} \\ \hline Intention to treat & –0.006 & 0.000 & 0.015 & –0.010 & 0.052 & 0.011 \\ & (0.010) & (0.008) & (0.025) & (0.076) & (0.045) & (0.008) \\ No. of observations & 260,284 & 260,284 & 260,284 & 260,284 & 260,284 & 260,284 \\ No. of observations effectively used & 74,188 & 87,150 & 76,043 & 72,869 & 54,742 & 80,432 \\ Mean of dependent variable & 0.493 & 0.801 & 13.416 & 6.946 & 4.836 & 0.247 \\ Standard deviation of dependent variable & 0.500 & 0.399 & 1.456 & 3.388 & 1.636 & 0.431 \\ \hline \hline \end{tabular} _Notes_: *** significant at the 1% level; ** significant at the 5% level; * significant at the 10% level. The table shows the robust bias-corrected estimates from local linear polynomial regressions adjusted at both sides of the cut-off. All specifications include canton fixed effects. Standard errors clustered at the household level are in parentheses. _Source:_ Authors' analysis. \end{table} Table 2: Covariate balance: Evaluation of the discontinuity in the covariates at the cut-off In the next section, along with the main results, we present an additional analysis aiming to shed some light on the possible channels through which the effect of the grant is realised. Specifically, combining the Social Registry 2008/2009 and 2013/2014 waves in the fashion discussed above and using an identical econometric specification, we explore whether a child's being eligible for the HDG in 2008/2009 has an effect on her staying in school in 2013/2014. Last, we proceed to perform several robustness checks. First, we employ local quadratic regressions. Second, we assess the impact of selecting a CER-optimal bandwidth. Third, we carry out two placebo tests consisting of checking whether there is any discontinuity at two different points (37.5 and 39.5, respectively) well above the relevant cut-off. Fourth, we assess how our results change when we look at labour market outcomes after 18 years old (instead of 21). Finally, as mentioned above, we estimate a fuzzy RDD instrumenting HDG receipt with the discontinuity around the threshold. ## 4 Results We present our estimation results in four steps. We first discuss the impact of a household's being below the index cut-off (i.e., the ITT effect) on future labour market outcomes, presenting both a graphical illustration of the impact of the discontinuity and the econometric results. Second, we look at how the effects differ across different groups of children. Third, we investigate whether reducing school dropout is a plausible channel of the beneficial effects of the programme on labour market outcomes. Finally, we summarise the results of a number of robustness checks. ### Main results Figure 2 illustrates the impact of HDG eligibility on labour market outcomes. Panel (a) shows the effect on the proportion of months employed in the formal sector, whereas Panel (b) depicts the influence on average formal labour income. Following Cattaneo et al. 
(2020a), each subgraph presents the local sample means (the mean of the outcome within disjoint intervals of the index [bins], determined by means of the integrated mean squared error [IMSE] at each side of the cut-off), a global polynomial fit of order four and a local linear regression fit.3 All three approaches reveal a sharp increase in the outcome due to eligibility. Being below the relevant threshold implies an increase of approximately 2.5 points in the percentage of months employed in the formal labour market and raises average labour income by approximately $9 per month. The main take-away from the figure is that eligibility has a relevant positive impact on formal labour market outcomes according to both parametric and nonparametric methods. As discussed above, we favour the latter in the rest of our analyses because of the advantages over the former highlighted by the literature. Figure 2. Effects on labour market outcomes after 21 years old We show the main results of our econometric exercise in Table 3. We present three specifications for each outcome: one without any controls (I), one including a set of observable socioeconomic household characteristics (II) and one including canton fixed effects in addition to the former controls (III). The econometric analysis visibly confirms the message conveyed by the graphical exploration above. Eligibility for the grant raised formal employment by approximately 2.5 percentage points and labour income by approximately $9 per month. The precision of the estimates is high in both cases, and the effects are sizeable: some 15% and 12% of the sample average in the case of employment and labour income, respectively; for the whole national economy, in 2019, the proportion of informal employment was 63.5% and the average monthly wage $512.1 (ILO, 2023). These results are substantially more optimistic than the findings of Araujo et al. (2018). Using the Social Registry 2002/2003 and 2013/2014 waves, these \begin{table} \begin{tabular}{l r r r} \hline \hline & (I) & (II) & (III) \\ \hline _Panel A._ Formal employment & & & \\ \hline Intention to treat & 0.025\({}^{***}\) & 0.024\({}^{***}\) & 0.025\({}^{***}\) \\ & (0.006) & (0.006) & (0.006) \\ No. of observations & 260,284 & 260,284 & 260,284 \\ No. of observations effectively used & 76,488 & 78,274 & 80,517 \\ Mean of dependent variable & 0.167 & 0.167 & 0.167 \\ Standard deviation of dependent variable & 0.310 & 0.310 & 0.310 \\ _Panel B._ Formal labour income & & & \\ \hline Intention to treat & 9.370\({}^{***}\) & 8.655\({}^{***}\) & 9.201\({}^{***}\) \\ & (2.920) & (2.811) & (2.742) \\ No. of observations & 260,284 & 260,284 & 260,284 \\ No. of observations effectively used & 80,494 & 83,988 & 85,676 \\ Mean of dependent variable & 72.862 & 72.862 & 72.862 \\ Standard deviation of dependent variable & 151.603 & 151.603 & 151.603 \\ Control variables & & ✓ & ✓ \\ Canton fixed effects & & ✓ & ✓ \\ \hline \hline \end{tabular} _Notes_: \({}^{***}\) significant at 1% level; \({}^{**}\) significant at 5% level; \({}^{*}\) significant at 10% level. The table shows the estimates from local linear regressions estimated separately at both sides of the cut-off with a triangular kernel and data-driven MSE-optimal bandwidths. The control variables included in the second column are child gender, child age, squared child age, child ethnicity, household head's educational attainment, and household head's marital status. Standard errors clustered at the household level using robust inference are in parentheses. 
_Source_: Authors’ analysis from national health surveys. \end{table} Table 3: Effects on labour market outcomes after 21 years old authors analyse the impact on employment rates in 2013/2014 of children living in an eligible household in 2002/2003. They report almost negligible effects of the HDG on the probability of employment, with only a modest positive impact for women in some specifications. The differences in our work may have to do with several factors. First, Araujo et al. (2018) focus on a different outcome (probability of employment at a given point in time) and a different time period (2013/2014) from those considered here. In this respect, given Ecuador's low unemployment rate of approximately 3% between 2010 and 2019 (ILO, 2023), it is reasonable to surmise that eventual improvements in labour market outcomes may occur in job quality rather than quantity. Like us, they use an RDD, exploiting the discontinuity in the index that determined eligibility in 2002/2003. One should bear in mind that their approach--and ours--can identify the impact of the policy only at a local level. Given that we use different cadastres, we actually look at distinct local ITT effects at different points of the distribution of well-being. Second, the identification strategy in Araujo et al. (2018) relies on eligibility for the HDG according to the poverty census from 2002/2003. We make use of the cadastre from 2008/2009, which aimed not only to update the register of potential beneficiaries but also to significantly improve the targeting of the grant (Fabara, 2009; Ministerio de Inclusion Economica y Social, 2019). Such changes might also enhance the performance of the HDG. Third, the HDG became much more generous over time. Whereas its basic amount was $11.5 from 2003 to 2007, it reached $35 from 2009 to 2011. Last but not least, our merging several waves of the Social Registry results in nonrandom sample attrition. For families that believed they were suitable candidates for receiving welfare payments, making sure that they appeared in the Social Registry (for which each household could voluntarily sign up even if it did not receive a visit from government interviewers) was clearly urgent. Nevertheless, it is reasonable to suspect that those whose economic situation had improved over time had fewer incentives to ensure that they remained present in the cadastre. Therefore, it makes sense that an exploration that relies on tracking individuals across different waves of the poverty census could underestimate the effect of eligibility on socioeconomic outcomes (i.e., we would be less likely to observe those individuals exhibiting better economic performance over time). Regarding the external validity of our findings, we should bear in mind that we estimate only the local ITT effects of the subsidy. In other words, our results are relevant for the neighbourhood of the specific cut-off point in 2008. In principle, we would expect these (positive) results to hold, at least to some extent, for households with lower socioeconomic status. In fact, previous work has revealed that children living in the country's poorest households experience the largest short-term impacts (in terms of school enrolment and health) of the HDG (Oosterbeek et al., 2008; Paxson & Schady, 2010; Schady & Araujo, 2008). 
### Heterogeneity in the effects of the programme To study the impact of the HDG across different types of individuals, we stratify the sample along several relevant dimensions measured at the time of data collection by the Social Registry 2008/2009. Table 4 shows the effects of the HDG by gender, ethnicity, household head's education and area of residence (urban or rural). Regarding employment, we find a positive effect across all population segments with the exception of rural households (which is partly attributable to the lower sample size for these areas). Nevertheless, the differences are statistically different from zero only in the case of gender, where the positive effects for men exceed the benefit found for women. The pattern is quite similar in the case of formal labour income, with the exception of the null effects on individuals from households with high educational attainment. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & (I) & (II) & (III) & (IV) & (V) & (VI) & (VII) & (VIII) \\ & \multicolumn{2}{c}{Child gender} & \multicolumn{2}{c}{Child ethnicity} & \multicolumn{2}{c}{Household head’s} & \multicolumn{2}{c}{Area of} \\ & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{years of education} & \multicolumn{2}{c}{residence} \\ & Males & Females & Non-mestizos & Mestizos & \(\leq\) 6 years & \(\geq\) 7 years & Urban & Rural \\ \hline \multicolumn{10}{l}{_Panel A._ Formal employment} \\ \multirow{2}{*}{Intention to treat} & 0.033*** & 0.015** & 0.027*** & 0.024*** & 0.026*** & 0.021* & 0.025*** & 0.019 \\ & (0.008) & (0.007) & (0.011) & (0.007) & (0.007) & (0.010) & (0.006) & (0.013) \\ Difference & \multicolumn{2}{c}{} & 0.018* & 0.003 & & 0.005 & & 0.006 \\ & (0.011) & & (0.013) & & (0.012) & & (0.015) \\ No. of observations & 132,079 & 128,205 & 51,718 & 208,566 & 163,198 & 97,086 & 196,034 & 64,250 \\ No. of observations effectively used & 46,113 & 41,772 & 19,998 & 62,638 & 50,593 & 26,306 & 79,409 & 15,280 \\ Mean of dependent variable & 0.216 & 0.116 & 0.156 & 0.170 & 0.169 & 0.164 & 0.160 & 0.189 \\ Standard deviation of dependent variable & 0.341 & 0.264 & 0.299 & 0.312 & 0.311 & 0.307 & 0.303 & 0.327 \\ \multicolumn{10}{l}{_Panel B._ Formal labour income} \\ \multirow{2}{*}{Intention to treat} & 13.550*** & 4.356* & 11.829** & 8.663*** & 10.362*** & 3.809 & 9.329*** & 7.071 \\ & (4.301) & (2.813) & (5.285) & (3.123) & (3.213) & (5.000) & (2.832) & (6.165) \\ Difference & \multicolumn{2}{c}{} & 9.195* & \multicolumn{2}{c}{} & 3.166 & \multicolumn{2}{c}{6.553} & \multicolumn{2}{c}{2.259} \\ & \multicolumn{2}{c}{(5.139)} & \multicolumn{2}{c}{(6.139)} & \multicolumn{2}{c}{(5.944)} & \multicolumn{2}{c}{(6.785)} & \multicolumn{2}{c}{} \\ No. of observations & 132,079 & 128,205 & 51,718 & 208,566 & 163,198 & 97,086 & 196,034 & 64,250 \\ No. of observations effectively used & 45,359 & 51,802 & 20,252 & 67,175 & 56,051 & 23,666 & 79,314 & 15,834 \\ Mean of dependent variable & 96,874 & 48.124 & 67.030 & 74.308 & 72.400 & 73.638 & 70.269 & 80.774 \\ Standard deviation of dependent variable & 171.859 & 122.586 & 144.523 & 153.274 & 149.665 & 154.803 & 149.570 & 157.379 \\ Control variables & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ Canton fixed effects & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \multicolumn{10}{l}{_Notes:_***: significant at 1% level; ***: significant at 5% level; *: significant at 10% level. 
The table shows the estimates from local linear regressions estimated separately at both sides of the cut-off with a triangular kernel and data-driven MSE-optimal bandwidths. The control variables include child gender, child age, squared child age, child ethnicity, household head’s educational attainment, household head’s marital status and province fixed effects. The specification for each group excludes the covariate that defines that population segment (e.g., the analysis for men and women exclude the gender covariate). Standard errors clustered at the household level using robust inference are in parentheses. \\ \hline \hline \end{tabular} \end{table} Table 4: Effect heterogeneity ### Mechanisms As discussed in Section 2, human capital formation is the most obvious channel through which the HDG may affect long-term socioeconomic outcomes. It is possible to track part of the children present in the Social Registry 2008/2009 in the cadastre corresponding to 2013/2014, which offers information on school enrolment at that time. Using the same research design and econometric specification used above, we explore whether HDG eligibility in 2008/2009 contributed to avoiding school dropout in 2013/2014. The results of this analysis--shown in Table 5 suggest that the HDG had a statistically significant impact on the probability that children enrolled in education in 2008/2009 continued studying in 2013/2014. The size of the point estimate, at nearly three percentage points (more than 5% of the average proportion of children who remained in school in 2013/2014), is not negligible. As mentioned in the previous subsection (and for the same reasons), the sample used for this auxiliary analysis suffers from severe attrition, which is very likely to be nonrandom. In our case, we can track only approximately 40% of the young people. As a result, our exercise may underestimate the effect of eligibility on preventing school dropout (we are less likely to track those households with children with better educational outcomes). Regrettably, our data do not allow us to look at other outcomes, but we can also make sense of our results by drawing on evidence from prior literature. First, \begin{table} \begin{tabular}{l c c c} \hline \hline & (I) & (II) & (III) \\ \hline Intention to treat & 0.022 & 0.028\({}^{**}\) & 0.027\({}^{**}\) \\ & (0.014) & (0.013) & (0.012) \\ No. of observations & 102,144 & 102,144 & 102,144 \\ No. of observations effectively used & 34,636 & 37,346 & 41,165 \\ Mean of dependent variable & 0.528 & 0.528 & 0.528 \\ Standard deviation of dependent variable & 0.499 & 0.499 & 0.499 \\ Control variables & & ✓ & ✓ \\ Canton fixed effects & & ✓ & ✓ \\ \hline \end{tabular} _Notes_: \({}^{***}\) significant at 1% level; \({}^{**}\) significant at 5% level; \({}^{*}\) significant at 10% level. The table shows the estimates from local linear regressions estimated separately at both sides of the cut-off with a triangular kernel and data-driven MSE-optimal bandwidths. The control variables included in the second column are child gender, child age, squared child age, child ethnicity, household head’s educational attainment, and household head’s marital status. Standard errors clustered at the household level using robust inference are in parentheses. _Source_: Authors’ analysis from national health surveys. 
\end{table} Table 5: Effects on education enrolment in 2013/2014 regarding education, several studies on Ecuador find a positive effect of the HDG on school enrolment (Oosterbeek et al., 2008; Schady & Araujo, 2008) and even on a child's probability of completing secondary school a decade after being eligible for the programme (Araujo et al., 2018). The qualitative work of Mayer (2011) finds that the HDG raises families' educational aspirations. Related to the schooling dimension, prior literature also suggests that the grant reduced child labour (Martinez Dobronsky & Rosero Moncayo, 2012; Schady & Araujo, 2006). In contrast, most previous research does not detect any sizeable impact on cognitive development (Paxson & Schady, 2007, 2010; Ponce & Bedi, 2010), with one notable exception (Fernald & Hidrobo, 2011). Unfortunately, evidence on the acquisition of nonformal training is also unavailable. With respect to health, another dimension of human capital formation, the literature on the HDG is not very optimistic. It identifies modest positive effects for some segments of children on hemoglobin levels or receipt of deworming treatments and nutritional supplements (Fernald & Hidrobo, 2011; Paxson & Schady, 2010). Nevertheless, a recent nonexperimental study using county-level panel data suggests that the expansion of the HDG resulted in a substantial reduction in under-five mortality (particularly from poverty-related diseases, notably from malnutrition, diarrheal diseases and lower respiratory tract infections) (Moncayo et al., 2019). The impact on mental health, unexplored in the case of the HDG and Ecuador, represents a pathway worth considering, as well (Haushofer et al., 2020; Zimmerman et al., 2021). As mentioned in Section 2, other channels through which the grant could favour future socioeconomic outcomes might exist. First, the benefit may alleviate credit constraints, which would allow families to invest in productive assets. Although we lack evidence on this mechanism for the Ecuadorian context, the experience of other countries points to the plausibility of this mechanism (Blattman et al., 2020; Gelders & Bailey-Athias, 2019; Gertler et al., 2012; Maluccio, 2010; Martinez, 2005). The absence of negative effects of the HDG on adult work in the short and medium run (Araujo et al., 2017; Bosch & Schady, 2019) suggests the feasibility of this channel in relation to household saving and investment behaviour (e.g., favouring starting a business or accessing assets that enhance employability, such as a vehicle or a driving licence). A last alternative pathway is an improvement in household resource allocation, which is particularly relevant when women are the recipients of the transfers. This mechanism is compatible with the findings of Schady and Rosero (2008) and Nabernegg (2012), which indicate that the HDG supports consumption of child-related rather than undesirable goods and services (such as tobacco or alcohol). ### Robustness checks We perform a number of additional analyses to strengthen the credibility of our main findings. We present some of them in Table 6, based on the most complete econometric specification. The first column of the table shows the results when we use the CER criterion to select the optimal bandwidth. The second employs a local quadratic regression (instead of a linear one). In both exercises, we obtain identical estimates to those in our baseline. 
The last two columns of the table show the results of two placebo tests consisting of evaluating the impact of crossing two fake thresholds (37.5 and 39.5, respectively). The point estimate is not statistically significant from zero in either case. We complement the assessment of the robustness of our analyses with two \begin{table} \begin{tabular}{l c c c c} \hline \hline & (I) & (II) & (III) & (IV) \\ & CER- & Local & Placebo 1: & Placebo 2: \\ & optimal & quadratic & Cut-off of & Cut-off of \\ & bandwidth & regression & 37.5 points & 39.5 points \\ \hline \hline \end{tabular} _Panel A._ Formal employment \begin{tabular}{l c c c} \hline \hline & 0.024\({}^{***}\) & 0.024\({}^{***}\) & \(-\)0.006 & 0.000 \\ & (0.007) & (0.007) & (0.007) & (0.008) \\ No. of observations & 260,284 & 260,284 & 260,284 & 260,284 \\ No. of observations effectively used & 43,409 & 116,636 & 60,771 & 39,898 \\ Mean of dependent variable & 0.167 & 0.167 & 0.167 & 0.167 \\ Standard deviation of dependent variable & 0.310 & 0.310 & 0.310 & 0.310 \\ \hline \hline \end{tabular} _Panel B._ Formal labour income \begin{tabular}{l c c c c} \hline \hline & 8.644\({}^{***}\) & 9.135\({}^{***}\) & \(-\)4.143 & \(-\)1.424 \\ & (3.228) & (3.253) & (3.031) & (4.104) \\ No. of observations & 260,284 & 260,284 & 260,284 & 260,284 \\ No. of observations effectively used & 46,181 & 116,959 & 69,005 & 35,800 \\ Mean of dependent variable & 72.862 & 72.862 & 72.862 & 72.862 \\ Standard deviation of dependent variable & 151.603 & 151.603 & 151.603 & 151.603 \\ Control variables & ✓ & ✓ & ✓ & ✓ \\ Canton fixed effects & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} _Notes: \({}^{***}\) significant at 1% level; \({}^{**}\) significant at 5% level; \({}^{*}\) significant at 10% level. Column (I) shows the estimates from local linear regressions estimated separately for each subsample at both sides of the cut-off with a triangular kernel and data-driven CER-optimal bandwidths. Column (II) contains the estimates from local quadratic regressions estimated separately for each subsample at both sides of the cut-off with a triangular kernel and data-driven MSE-optimal bandwidths. Column (III) includes the results of a falsification test that considers a cut-off of 37.5 points (it uses local linear regressions estimated separately for each subsample at both sides of the cut-off with a triangular kernel and data-driven MSE-optimal bandwidths). Column (IV) is analogous to column (III), but it considers a cut-off of 39.5 points. The covariates included in the second column are child gender, child age, squared child age, child ethnicity, household head educational attainment, and household head marital status. Standard errors clustered at the household level using robust inference are in parentheses. Source: Authors’ analysis. Source: Authors’ analysis._ \end{table} Table 6: Robustness checks: Effects on labour market outcomes after 21 years old from local quadratic regressions additional exercises. First, we implement a fuzzy RDD using receipt of the HDG in 2010 according to administrative registers. In this approach, we instrument the binary variable capturing whether the household receives the grant with the discontinuity in the poverty index (Table A1). The first stage is notably strong (being just below the threshold increases the probability of receiving the HDG by almost 90 percentage points). These estimates are qualitatively and quantitatively similar to our baseline results. 
Indeed, we cannot reject the hypothesis that they are equal to the estimated ITT effects. As argued above, we strongly believe that this approach is not superior to trying to identify the ITT effects--as we do in the core of our paper. The results of applying the former methodology could well capture the impact of payments in a subsequent year. Therefore, the advantages of the fuzzy RDD are null in this context, even if its results offer a comforting robustness check. Our final evaluation of the stability of our results explores labour market outcomes after 18 instead of 21 years old (Table A2). The results are coherent with our main findings. The point estimates are significant but substantially smaller. This outcome is unsurprising and is in line with our analysis in Section 4.3, because individuals between 18 and 21 years old may still be in formal education, which does not represent a negative socioeconomic outcome. ## 5 Conclusion Despite their having become the flagship social programmes of a number of countries, empirical evidence on the impact of CCTs on long-term outcomes, which are more reflective of their main objective than short-term outcomes, is still scarce. By targeting poor households with children and fostering human capital formation, policymakers expect that these social benefits raise young adults' ability to earn income and consequently enhance the well-being of the next generations. Our paper has contributed to filling this gap, offering a rigorous assessment of the effect of the Ecuadorian HDG on the future outcomes of eligible children in the formal labour market. Our findings suggest that the grant indeed helps to curb--to some extent--the intergenerational transmission of poverty by raising children's future employment rates and wages in the formal economy. These results are consistent with those of the previous literature on the short-run impact of the HDG, which has overwhelmingly highlighted its positive effect on school enrolment. Our work has also added to the literature discussing the role of conditionality. Unfortunately, we do not have any reliable way of determining the causal effect of the weak enforcement of the conditions attached to HDG receipt. Nevertheless, our findings could be useful for the debate, as the previous literature focuses on Mexico, Nicaragua and Colombia, where national authorities require recipient households to meet conditions. Similarly to these papers, we have found a positive impact on long-term outcomes. The scope of the long-term benefits attached to the HDG could be even broader than those we have identified in this work. Better performance in the formal labour market could allow young adults to increase their access to health care and even retirement benefits in their later lives. Formalisation of the economy is a process with nonnegligible positive externalities (e.g., higher tax collection and lower income inequality). Naturally, as we have studied the impact of the grant only on the first steps of young people from eligible households into the world of work, further analyses of work outcomes at older ages and other dimensions of well-being would be highly valuable.
2301.13652
Round-Robin Beyond Additive Agents: Existence and Fairness of Approximate Equilibria
Fair allocation of indivisible goods has attracted extensive attention over the last two decades, yielding numerous elegant algorithmic results and producing challenging open questions. The problem becomes much harder in the presence of strategic agents. Ideally, one would want to design truthful mechanisms that produce allocations with fairness guarantees. However, in the standard setting without monetary transfers, it is generally impossible to have truthful mechanisms that provide non-trivial fairness guarantees. Recently, Amanatidis et al. [2021] suggested the study of mechanisms that produce fair allocations in their equilibria. Specifically, when the agents have additive valuation functions, the simple Round-Robin algorithm always has pure Nash equilibria and the corresponding allocations are envy-free up to one good (EF1) with respect to the agents' true valuation functions. Following this agenda, we show that this outstanding property of the Round-Robin mechanism extends much beyond the above default assumption of additivity. In particular, we prove that for agents with cancelable valuation functions (a natural class that contains, e.g., additive and budget-additive functions), this simple mechanism always has equilibria and even its approximate equilibria correspond to approximately EF1 allocations with respect to the agents' true valuation functions. Further, we show that the approximate EF1 fairness of approximate equilibria surprisingly holds for the important class of submodular valuation functions as well, even though exact equilibria fail to exist!
Georgios Amanatidis, Georgios Birmpas, Philip Lazos, Stefano Leonardi, Rebecca Reiffenhäuser
2023-01-31T14:09:22Z
http://arxiv.org/abs/2301.13652v1
# Round-Robin Beyond Additive Agents: Existence and Fairness of Approximate Equilibria ###### Abstract Fair allocation of indivisible goods has attracted extensive attention over the last two decades, yielding numerous elegant algorithmic results and producing challenging open questions. The problem becomes much harder in the presence of _strategic_ agents. Ideally, one would want to design _truthful_ mechanisms that produce allocations with fairness guarantees. However, in the standard setting without monetary transfers, it is generally impossible to have truthful mechanisms that provide non-trivial fairness guarantees. Recently, Amanatidis et al. [5] suggested the study of mechanisms that produce fair allocations in their equilibria. Specifically, when the agents have additive valuation functions, the simple Round-Robin algorithm always has pure Nash equilibria and the corresponding allocations are _envy-free up to one good_ (EF1) with respect to the agents' _true valuation functions_. Following this agenda, we show that this outstanding property of the Round-Robin mechanism extends much beyond the above default assumption of additivity. In particular, we prove that for agents with _cancelable_ valuation functions (a natural class that contains, e.g., additive and budget-additive functions), this simple mechanism always has equilibria and even its approximate equilibria correspond to approximately EF1 allocations with respect to the agents' true valuation functions. Further, we show that the approximate EF1 fairness of approximate equilibria surprisingly holds for the important class of _submodular_ valuation functions as well, even though exact equilibria fail to exist! ## 1 Introduction Fair division refers to the problem of dividing a set of resources among a group of agents in a way that every agent feels they have received a "fair" share. The mathematical study of (a continuous version of) the problem dates back to the work of Banach, Knaster, and Steinhaus [36], who, in a first attempt to formalize fairness, introduced the notion of _proportionality_, i.e., each of the \(n\) agents receives at least \(1/n\)-th of the total value from her perspective. Since then, different variants of the problem have been studied in mathematics, economics, political science, and computer science, and various fairness notions have been defined. The most prominent fairness notion is _envy-freeness_[22, 21, 37], where each agent values her set of resources at least as much as the set of any other agent. When the available resources are _indivisible_ items, i.e., items that cannot be split among agents, notions introduced for infinitely divisible resources, like proportionality and envy-freeness, are impossible to satisfy, even approximately. In the last two decades, fair allocation of indivisible items has attracted extensive attention, especially within the theoretical computer science community, yielding numerous elegant algorithmic results for various new fairness notions tailored to this discrete version of the problem, such as _envy-freeness up to one good_ (EF1) [28, 16], _envy-freeness up to any good_ (EFX) [18], and _maximin share fairness_ (MMS) [16]. We refer the interested reader to the surveys of Procaccia [34], Bouveret et al. [15], and Amanatidis et al. [6]. In this work, we study the problem of fairly allocating indivisible _goods_, i.e., items of non-negative value, to _strategic_ agents, i.e., agents who might misreport their private information if they have an incentive to do so. 
Incentivising strategic agents to truthfully report their valuations is a central goal--and often a notorious challenge--in mechanism design, in general. Specifically in fair division, this seems particularly necessary, since any fairness guarantee on the outcome of a mechanism typically holds with respect to its input, namely the _reported_ preferences of the agents rather than their true, private preferences which they may have chosen not to reveal. Without truthfulness, fairness guarantees seem to become meaningless. Unfortunately, when monetary transfers are not allowed, as is the standard assumption in fair division, such _truthful_ mechanisms fail to exist for any meaningful notion of fairness, even for simple settings with two agents who have additive valuation functions [2]. As an alternative, Amanatidis et al. [5] initiated the study of _equilibrium fairness_: when a mechanism always exhibits stable (i.e., pure Nash equilibrium) states, each of which corresponds to a fair allocation with respect to the _true_ valuation functions, the need for extracting agents' true preferences is mitigated. Surprisingly, they show that for the standard case of additive valuation functions, the simple _Round-Robin_ routine is such a mechanism with respect to EF1 fairness. Round-Robin takes as input an ordering of the goods for each agent, and then cycles through the agents and allocates the goods one by one, giving to each agent their most preferred available good. For agents with additive valuation functions, Round-Robin is known to produce EF1 allocations (see, e.g., [30]). Note that, without monetary transfers, what distinguishes a mechanism from an algorithm is that its input is the, possibly misreported, agents' preferences. To further explore the interplay between incentives and fairness, we take a step back and focus solely on this very simple, yet fundamental, allocation protocol. It should be noted that the Round-Robin algorithm is one of the very few fundamental procedures one can encounter throughout the discrete fair division literature. Its central role is illustrated by various prominent results, besides producing EF1 allocations: it can be modified to produce approximate MMS allocations [3], as well as EF1 allocations for _mixed goods and chores_ (i.e., items with negative value) [9]. It produces _envy-free_ allocations with high probability when the values are drawn from distributions [29], it is used to produce a "nice" initial allocation as a subroutine in the state-of-the-art approximation algorithms for _pairwise maximin share fair_ (PMMS) allocations [25] and EFX allocations [4], it has the lowest communication complexity of any known fair division algorithm, and, most relevant to this work, it is the _only_ algorithm for producing fair allocations for more than two agents that, when viewed as a mechanism, is known to even have equilibria [8]. We investigate the existence and the EF1 guarantees of approximate pure Nash equilibria of the Round-Robin mechanism beyond additive valuation functions, i.e., when the goods already assigned to an agent potentially change how they value the remaining goods. In particular, we are interested in whether anything can be said about classes that largely generalize additive functions, like _cancelable_ functions, i.e., functions where the marginal values with respect to any subset maintain the relative ordering of the goods, and _submodular_ functions, i.e., functions capturing the notion of diminishing returns. 
Although the stability and equilibrium fairness properties of Round-Robin have been visited before [8, 5], to the best of our knowledge, we are the first to study the problem for non-additive valuation functions and go beyond exact pure Nash equilibria. Cancelable functions also generalize budget-additive, unit-demand, and multiplicative valuation functions [12], and recently have been of interest in the fair division literature as several results can be extended to this class [12, 1, 19]. For similar reasons, cancelable functions seem to be a good pairing with Round-Robin as well, at least in the algorithmic setting (see, e.g., Proposition 2.5). Nevertheless, non-additive functions seem to be massively harder to analyze in our setting and come with various obstacles. First, it is immediately clear that, even without strategic agents, the input of an ordinal mechanism implemented as a simultaneous-move one-shot game, like the Round-Robin mechanism we study here, can no longer capture the complexity of a submodular function (see also the relevant discussion in Our Contributions). As a result, translating this sequential assignment to an estimate on the value of each agent's _bundle_ of goods is not obvious. Lastly, and this applies to cancelable functions as well, assuming equilibria do exist and enough can be shown about the value of the assigned bundles to establish fairness, there is no reason to expect that any fairness guarantee will hold with respect to the true valuation functions, as the agents may misreport their preferences in an arbitrary fashion. ### Contribution and Technical Considerations We study the well-known Round-Robin mechanism (Mechanism 1) for the problem of fairly allocating a set of indivisible goods to a set of strategic agents. We explore the existence of approximate equilibria, along with the fairness guarantees that the corresponding allocations provide with respect to the agents' true valuation functions. Qualitatively, we generalize the surprising connection between the stable states of this simple mechanism and its fairness properties to all approximate equilibria and for valuation functions as general as subadditive cancelable and submodular. In more detail, our main contributions can be summarized as follows: * We show that the natural generalization of the _bluff profile_ of Aziz et al. [8] is an exact PNE that always corresponds to an EF1 allocation, when agents have _cancelable_ valuation functions (Theorem 3.2 along with Proposition 2.5). Our proof is simple and intuitive and generalizes the results of Aziz et al. [8] and Amanatidis et al. [5]. * For agents with submodular valuation functions, we show that there are instances where no \((3/4+\varepsilon)\)-approximate PNE exists (Proposition 3.4), thus creating a separation between the cancelable and the submodular cases. Nevertheless, we prove that an appropriate generalization of the bluff profile is a \(1/2\)-approximate PNE (Theorem 3.7) that also produces a \(1/2\)-EF1 allocation with respect to the true valuation functions (Theorem 3.8). * We provide a unified proof that connects the factor of an approximate PNE with the fairness approximation factor of the respective allocation. In particular, any \(\alpha\)-approximate PNE results in an \(\alpha/2\)-EF1 allocation for subadditive cancelable agents (Theorem 4.5), and in an \(\alpha/3\)-EF1 allocation for submodular agents (Theorem 4.4). 
We complete the picture by providing lower bounds in both cases (Theorem 4.3 and Proposition 4.8), which demonstrate that our results are almost tight. While this is not the first time Round-Robin is considered for non-additive agents, see, e.g., [13], to the best of our knowledge, we are the first to study its fairness guarantees for cancelable and submodular valuation functions, independently of incentives. As a minor byproduct of our work, Theorem 3.8 and the definition of the bluff profile imply that, given _value oracles_ for the submodular functions, we can use Round-Robin as a subroutine to produce \(1/2\)-EF1 allocations. This also raises the question of whether one should allow a more expressive bid, e.g., a value oracle. While, of course, this is a viable direction, we avoid it here as it comes with a number of issues. Allowing the input to be exponential in the number of goods is already problematic, especially when simplicity and low communication complexity are two appealing traits of the original mechanism. Moreover, extracting orderings from value oracles would essentially result in a mechanism equivalent to ours (if the ordering of an agent depended only on _her_ function) or to a sequential game (if the orderings depended on all the functions), which is not what we want to explore here. Note that less information is not necessarily an advantage towards our goal. While this results in a richer space of equilibria, fairness guarantees are increasingly harder to achieve. As a final remark, all the algorithmic procedures we consider run in polynomial time, occasionally assuming access to value oracles, e.g., Algorithms 2, 3, 4. Although we do not consider computational complexity questions here, such as how agents compute best responses or how they reach approximate equilibria, we do consider such questions interesting directions for future work. ### Further Related Work The problem of fairly allocating indivisible goods to additive agents in the non-strategic setting has been extensively studied; for a recent survey, see Amanatidis et al. [6]. Although the additivity of the valuation functions is considered a standard assumption, there are many works that explore richer classes of valuation functions. Some prominent examples include the computation of EF1 allocations for agents with general non-decreasing valuation functions [28], EFX allocations (or relaxations of EFX) under agents with cancelable valuation functions [12, 1, 19] and subadditive valuation functions [33, 20], respectively, and approximate MMS allocations for submodular, XOS, and subadditive agents [11, 23]. Moving to the strategic setting, Caragiannis et al. [17] and Markakis and Psomas [31] were the first to consider the question of whether it is possible to have mechanisms that are truthful and fair at the same time, again assuming additive agents. Amanatidis et al. [2] resolved this question for two agents, showing there is no truthful mechanism with fairness guarantees under any meaningful fairness notion. As a result, subsequent papers considered truthful mechanism design under restricted valuation function classes [24, 10]. The stability of Round-Robin was first studied by Aziz et al. [8], who proved that it always has PNE by using a special case of a retracted result of Bouveret and Lang [13] (this did not affect the former though; see [7]). Finally, besides the work of Amanatidis et al. 
[5] mentioned earlier, the fairness properties of Round-Robin under strategic agents have recently been studied by Psomas and Verma [35]. Therein it is shown that Round-Robin, despite being non-truthful, satisfies a relaxation of truthfulness, as it is _not obviously manipulable_. ## 2 Preliminaries For \(a\in\mathbb{N}\), let \([a]\) denote the set \(\{1,2,\ldots,a\}\). We will use \(N=[n]\) to denote the set of agents and \(M=\{g_{1},\ldots,g_{m}\}\) to denote the set of goods. Each agent \(i\in N\) has a valuation function \(v_{i}:2^{M}\rightarrow\mathbb{R}_{\geq 0}\) over the subsets of goods. We assume that all \(v_{i}\) are _normalized_, i.e., \(v_{i}(\emptyset)=0\). We also adopt the shortcut \(v_{i}(T\,|\,S)\) for the _marginal value_ of a set \(T\) with respect to a set \(S\), i.e., \(v_{i}(T\,|\,S)=v_{i}(T\cup S)-v_{i}(S)\). If \(T=\{g\}\), we write \(v_{i}(g\,|\,S)\) instead of \(v_{i}(\{g\}\,|\,S)\). For each agent \(i\in N\), we say that \(v_{i}\) is * _non-decreasing_ (often referred to as _monotone_), if \(v_{i}(S)\leq v_{i}(T)\) for any \(S\subseteq T\subseteq M\). * _submodular_, if \(v_{i}(g\,|\,S)\geq v_{i}(g\,|\,T)\) for any \(S\subseteq T\subseteq M\) and \(g\not\in T\). * _cancelable_, if \(v_{i}(S\cup\{g\})>v_{i}(T\cup\{g\})\Rightarrow v_{i}(S)>v_{i}(T)\) for any \(S,T\subseteq M\) and \(g\in M\setminus(S\cup T)\). * _additive_, if \(v_{i}(S\cup T)=v_{i}(S)+v_{i}(T)\) for every \(S,T\subseteq M\) with \(S\cap T=\emptyset\). * _subadditive_, if \(v_{i}(S\cup T)\leq v_{i}(S)+v_{i}(T)\) for every \(S,T\subseteq M\). Throughout this work, we only consider non-decreasing valuation functions, e.g., when we refer to submodular functions, we mean non-decreasing submodular functions. Note that although both submodular and (subadditive) cancelable functions are strict superclasses of additive functions, neither one is a superclass of the other. We will occasionally need an alternative characterization of submodular functions due to Nemhauser et al. [32]. **Theorem 2.1** (Nemhauser et al. [32]).: _A function \(v:2^{M}\to\mathbb{R}_{\geq 0}\) is (non-decreasing) submodular if and only if we have \(v(T)\leq v(S)+\sum_{g\in T\setminus S}v(g\,|\,S)\), for all \(S,T\subseteq M\)._ Also, the following lemma summarizes some easy observations about cancelable functions. **Lemma 2.2**.: _If \(v:2^{M}\to\mathbb{R}_{\geq 0}\) is cancelable, then \(v(S\cup R)>v(T\cup R)\Rightarrow v(S)>v(T)\), implying that \(v(S)\geq v(T)\Rightarrow v(S\cup R)\geq v(T\cup R)\), for any \(S,T,R\subseteq M\), such that \(R\subseteq M\setminus(S\cup T)\). In particular, \(v(S)=v(T)\Rightarrow v(S\cup R)=v(T\cup R)\)._ Note that, for \(S,T\subseteq M\), Lemma 2.2 directly implies that \(\operatorname*{arg\,max}_{g\in T}v(g)\subseteq\operatorname*{arg\,max}_{g\in T}v(g\,|\,S)\). Despite the fact that the agents have valuation functions, the mechanism we study (Mechanism 1) is _ordinal_, i.e., it only takes as input a _preference ranking_ from each agent. Formally, the preference ranking \(>_{i}\), which agent \(i\) reports, defines a total order on \(M\), i.e., \(g>_{i}g^{\prime}\) implies that good \(g\) precedes good \(g^{\prime}\) in agent \(i\)'s declared preference ranking.1 We call the vector of the agents' declared preference rankings, \(\boldsymbol{\succ}=(>_{1},\ldots,>_{n})\), the _reported profile_ for the instance. 
So, while an instance to our problem is an ordered triple \((N,M,\nu)\), where \(\nu=(v_{1},\ldots,v_{n})\) is a vector of the agents' valuation functions, the input to Mechanism 1 is \((N,M,\boldsymbol{\succ})\) instead. Footnote 1: See the discussion after the statement of Mechanism 1 about why assuming that the reported preference rankings are total (rather than partial) orders is without loss of generality. Note that \(>_{i}\) may not reflect the actual underlying values, i.e., \(g>_{i}g^{\prime}\) does not necessarily mean that \(v_{i}(g)>v_{i}(g^{\prime})\) or, more generally, \(v_{i}(g\,|\,S)>v_{i}(g^{\prime}\,|\,S)\) for a given \(S\subseteq M\). This might be due to agent \(i\) misreporting her preference ranking, or due to the fact that any single preference ranking is not expressive enough to fully capture all the partial orders induced by a submodular function. Nevertheless, a valuation function \(v_{i}\) does induce a _true preference ranking_ \(\succeq_{i|S}^{*}\) for each set \(S\subseteq M\), which is a partial order, i.e., \(g\succeq_{i|S}^{*}g^{\prime}\Leftrightarrow v_{i}(g\,|\,S)\geq v_{i}(g^{\prime}\,|\,S)\) for all \(g,g^{\prime}\in M\). We use \(>_{i|S}^{*}\) if the corresponding preference ranking is _strict_, i.e., when \(g\succeq_{i|S}^{*}g^{\prime}\,\wedge\,g^{\prime}\succeq_{i|S}^{*}g\,\Rightarrow g=g^{\prime}\), for all \(g,g^{\prime}\in M\setminus S\). For additive (and, more generally, for cancelable) valuations, we drop \(S\) from the notation and simply write \(\succeq_{i}^{*}\) or \(>_{i}^{*}\). Finally, for a total order \(>\) on \(M\) and a set \(T\subseteq M\), we use \(\operatorname{top}(>,T)\) to denote the "largest" element of \(T\) with respect to \(>\). ### Fairness Notions A fair division mechanism produces an _allocation_ \((A_{1},\ldots,A_{n})\), i.e., a partition of \(M\), where \(A_{i}\) is the _bundle_ of agent \(i\). Requiring a partition corresponds to assuming no free disposal, namely all the goods must be allocated. There are several different notions which attempt to capture which allocations are "fair". The most prominent such notion in the fair division literature has been _envy-freeness_ (EF) [22, 21, 37], which has been the starting point for other relaxed notions, more appropriate for the indivisible goods setting we study here, such as _envy-freeness up to one good_ (EF1) [28, 16] and _envy-freeness up to any good_ (EFX) [18]. Here we focus on EF1. **Definition 2.3**.: An allocation \((A_{1},\ldots,A_{n})\) is * \(\alpha\)_-envy-free_ (\(\alpha\)-EF), if for every \(i,j\in N\), \(v_{i}(A_{i})\geq\alpha\cdot v_{i}(A_{j})\). * \(\alpha\)_-envy-free up to one good_ (\(\alpha\)-EF1), if for every pair of agents \(i,j\in N\), with \(A_{j}\neq\emptyset\), there exists a good \(g\in A_{j}\), such that \(v_{i}(A_{i})\geq\alpha\cdot v_{i}(A_{j}\setminus\{g\})\). When for every agent \(j\in N\) with \(A_{j}\neq\emptyset\), we have \(v_{i}(A_{i})\geq\alpha\cdot v_{i}(A_{j}\setminus\{g\})\) for some good \(g\in A_{j}\), we say that \((A_{1},\ldots,A_{n})\) is \(\alpha\)-EF1 _from agent \(i\)'s perspective_, even when the allocation is not \(\alpha\)-EF1! ### Mechanisms and Equilibria We are interested in _mechanisms_ that produce allocations with EF1 guarantees. When _no payments_ are allowed, like in our setting, an allocation mechanism \(\mathcal{M}\) is just an allocation algorithm that takes as input the agents' reported preferences. 
In particular, Round-Robin, the mechanism of interest here, takes as input the reported profile \(\boldsymbol{\succ}\) and produces an allocation of all the goods. This distinction in terminology is necessary as the reported input may not be consistent with the actual valuation functions due to the agents' incentives. When the allocation returned by \(\mathcal{M}(\boldsymbol{\succ})\) has some fairness guarantee, e.g., it is 0.5-EF1, we will attribute the same guarantee to the reported profile itself, i.e., we will say that \(\boldsymbol{\succ}\) is 0.5-EF1. We study the fairness guarantees of the (approximate) pure Nash equilibria of Round-Robin. Given a preference profile \(\boldsymbol{\succ}=(>_{1},\ldots,>_{n})\), we write \(\boldsymbol{\succ}_{-i}\) to denote \((>_{1},\ldots,>_{i-1},>_{i+1},\ldots,>_{n})\) and, given a preference ranking \(>^{\prime}_{i}\), we use \((>^{\prime}_{i},\boldsymbol{\succ}_{-i})\) to denote the profile \((>_{1},\ldots,>_{i-1},>^{\prime}_{i},>_{i+1},\ldots,>_{n})\). For the next definition we abuse the notation slightly: given an allocation \((A_{1},\ldots,A_{n})\) produced by \(\mathcal{M}(\boldsymbol{\succ})\), we write \(v_{i}(\mathcal{M}(\boldsymbol{\succ}))\) to denote \(v_{i}(A_{i})\); similarly for \(\mathcal{M}(>^{\prime}_{i},\boldsymbol{\succ}_{-i})\). **Definition 2.4**.: Let \(\mathcal{M}\) be an allocation mechanism and consider a preference profile \(\boldsymbol{\succ}=(>_{1},\ldots,>_{n})\). We say that the total order \(>_{i}\) is an \(\alpha\)_-approximate best response_ to \(\boldsymbol{\succ}_{-i}\) if for every total order, i.e., permutation \(>^{\prime}_{i}\) of \(M\), we have \(\alpha\cdot v_{i}(\mathcal{M}(>^{\prime}_{i},\boldsymbol{\succ}_{-i}))\leq v_{i}(\mathcal{M}(\boldsymbol{\succ}))\). The profile \(\boldsymbol{\succ}\) is an \(\alpha\)_-approximate pure Nash equilibrium_ (PNE) if, for each \(i\in N\), \(>_{i}\) is an \(\alpha\)-approximate best response to \(\boldsymbol{\succ}_{-i}\). When \(\alpha=1\), we simply refer to best responses and exact PNE. ### The Round-Robin Mechanism We state Round-Robin as a mechanism (Mechanism 1) that takes as input a reported profile \((>_{1},\ldots,>_{n})\). For the sake of presentation, we assume that the agents in each _round_ (lines 3-6) are always considered according to their "name", i.e., agent 1 is considered first, agent 2 second, and so on, instead of having a permutation determining the priority of the agents as an extra argument of the input. This is without loss of generality, as it only requires renaming the agents accordingly. We often refer to the process of allocating a good to an agent (lines 4-6) as a _step_ of the mechanism. Note that there is no need for a tie-breaking rule here, as the reported preference rankings are assumed to be total orders. Equivalently, one could allow for partial orders (either directly or via cardinal bids, as it is done in [5]) paired with a deterministic tie-breaking rule, e.g., lexicographic tie-breaking, known a priori to the agents. In the rest of the paper, we will assume that \(m=kn\) for some \(k\in\mathbb{N}\), for simplicity. Note that this is without loss of generality, as we may introduce at most \(n-1\) dummy goods that have marginal value of \(0\) with respect to any set for everyone and append them at the end of the reported preference rankings, to be allocated during the last steps of the mechanism. 
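Since the pseudocode of Mechanism 1 is only referenced above, the following minimal Python sketch may help fix ideas. It assumes that goods are labelled \(0,\ldots,m-1\), that \(m=kn\), and that each reported ranking is simply a list of all goods from most to least preferred; the function and variable names are ours, not part of the original mechanism statement.

```python
def round_robin(rankings, m):
    """Sketch of Mechanism 1: agents are cycled through in the fixed order 0, 1, ..., n-1,
    and in each step the active agent receives her highest-ranked available good."""
    n = len(rankings)
    available = set(range(m))
    bundles = [[] for _ in range(n)]
    for step in range(m):                 # m = k*n allocation steps in total
        i = step % n                      # active agent in this step
        g = next(good for good in rankings[i] if good in available)
        bundles[i].append(g)
        available.remove(g)
    return bundles
```

For instance, with two agents reporting the rankings `[2, 0, 1, 3]` and `[2, 1, 0, 3]`, the sketch allocates goods 2 and 0 to the first agent and goods 1 and 3 to the second.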
We have already mentioned that Round-Robin as an algorithm produces EF1 allocations for additive agents, where the input is assumed to be any strict variant \(\succ^{*}=(>_{1|\emptyset}^{*},>_{2|\emptyset}^{*},\ldots,>_{n|\emptyset}^{*})\) of the truthful profile \((\succeq_{1|\emptyset}^{*},\succeq_{2|\emptyset}^{*},\ldots,\succeq_{n|\emptyset}^{*})\), i.e., the profile where each agent ranks the goods according to their singleton value. This property fully extends to cancelable valuation functions as well. The proof of Proposition 2.5 is rather simple, but not as straightforward as the additive case; note that it requires Lemma 3.3 from the next section. **Proposition 2.5**.: _Let \(\succ^{*}\) be as described above. When all agents have cancelable valuation functions, the allocation returned by Round-Robin(\(\succ^{*}\)) is EF1._ Proof.: Let \((A_{1},\ldots,A_{n})\) be the allocation returned by Round-Robin(\(\succ^{*}\)). Fix two agents, \(i\) and \(j\), and let \(A_{i}=\{x_{1},x_{2},\ldots,x_{k}\}\) and \(A_{j}=\{y_{1},y_{2},\ldots,y_{k}\}\), where the goods in both sets are indexed according to the round in which they were allocated to \(i\) and \(j\), respectively. By the way Mechanism 1 is defined, we have \(x_{r}>_{i|\emptyset}^{*}y_{r+1}\), for all \(r\in[k-1]\). Therefore, \(x_{r}\succeq_{i|\emptyset}^{*}y_{r+1}\), or equivalently, \(v_{i}(x_{r})\geq v_{i}(y_{r+1})\), for all \(r\in[k-1]\). Thus, by Lemma 3.3, we get \(v_{i}(A_{i}\setminus\{x_{k}\})\geq v_{i}(A_{j}\setminus\{y_{1}\})\), and using the fact that \(v_{i}\) is non-decreasing, \(v_{i}(A_{i})\geq v_{i}(A_{j}\setminus\{y_{1}\})\). ## 3 Existence of approximate PNE At first glance, it is not clear why Mechanism 1 has any pure Nash equilibria, even approximate ones for a constant approximation factor. For additive valuation functions, however, it is known that for any instance we can construct a simple preference profile, called the _bluff profile_, which is an exact PNE. While the proof of this fact, in its full generality, is fragmented over three papers [8, 14, 5], we give here a simple proof that generalizes the existence of exact PNE to cancelable valuation functions. As we shall see later, extending this result to submodular functions is not possible, and even defining a generalization of the bluff profile which is a \(0.5\)-approximate PNE is not straightforward. ### Cancelable valuations Defining the bluff profile for cancelable agents, we will start from a strict variant of the truthful profile \((\succeq_{1|\emptyset}^{*},\succeq_{2|\emptyset}^{*},\ldots,\succeq_{n|\emptyset}^{*})\), i.e., the profile where each agent ranks the goods according to their value (as singletons) in descending order, as we did for Proposition 2.5. Assume that any ties are broken deterministically to get the strict version \(\succ^{*}=(>_{1|\emptyset}^{*},>_{2|\emptyset}^{*},\ldots,>_{n|\emptyset}^{*})\). Now, consider \(\mathrm{Round-Robin}(\succ^{*})\) and let \(h_{1},h_{2},\ldots,h_{m}\) be a renaming of the goods according to the order in which they were allocated, and let \(>^{\mathrm{b}}\) be the corresponding total order (i.e., \(h_{1}>^{\mathrm{b}}h_{2}>^{\mathrm{b}}\ldots>^{\mathrm{b}}h_{m}\)). The _bluff profile_ is the preference profile \(\succ^{\mathrm{b}}=(>^{\mathrm{b}},>^{\mathrm{b}},\ldots,>^{\mathrm{b}})\), where everyone ranks the goods in the order they were allocated in \(\mathrm{Round-Robin}(\succ^{*})\). The following fact follows directly from the definition of the bluff profile and the description of \(\mathrm{Round-Robin}\). 
**Fact 3.1**.: _If \((\succ^{*})\) is a strict version of the truthful preference profile and \((\succ^{\mathrm{b}})\) is the corresponding bluff profile, then \(\mathrm{Round-Robin}(\succ^{\mathrm{b}})\) and \(\mathrm{Round-Robin}(\succ^{*})\) both return the same allocation._ An interesting observation about this fact is that, combined with Proposition 2.5 and Theorem 3.2, it implies that there is at least one PNE of Mechanism 1 which is EF1! Of course, it is now known that all exact PNE of \(\mathrm{Round-Robin}\) are EF1 for agents with _additive_ valuation functions and, as we will see later on, even approximate PNE have (approximate) EF1 guarantees for much more general instances, including the case of _subadditive cancelable_ valuation functions. **Theorem 3.2**.: _When all agents have cancelable valuation functions, the bluff profile is an exact PNE of Mechanism 1._ We first need to prove the following lemma that generalizes a straightforward property of additive functions for cancelable functions. **Lemma 3.3**.: _Suppose that \(v(\cdot)\) is a cancelable valuation function. Consider sets \(X=\{x_{1},x_{2},\ldots,x_{k}\}\) and \(Y=\{y_{1},y_{2},\ldots,y_{k}\}\). If for every \(j\in[k]\), we have that \(v(x_{j})\geq v(y_{j})\), then \(v(X)\geq v(Y)\)._ Proof.: We begin by arguing that it is without loss of generality to first assume that the elements of \(X\) are ordered by non-increasing value with respect to \(v\) and then also assume that \(y_{j}\notin\{x_{1},x_{2},\ldots,x_{j-1}\}\), for any \(j\in[k]\). The former is indeed a matter of reindexing, if necessary, the elements of \(X\) and consistently reindexing the corresponding elements of \(Y\). For the latter, suppose that there exist \(j\) such that \(y_{j}=x_{t}\) for \(t\leq j-1\) and consider the smallest \(t\) for which this happens. We have \(v(x_{t})\geq v(x_{t+1})\geq\ldots\geq v(x_{j})\) by the assumption on the ordering of the elements of \(X\), \(v(x_{j})\geq v(y_{j})\) by hypothesis, and \(v(y_{j})=v(x_{t})\). Thus, \(v(x_{t})=v(x_{t+1})=\ldots=v(x_{j})\). Now we may rename the elements of \(Y\) to \(\{y^{\prime}_{1},\ldots,y^{\prime}_{k}\}\) by inserting \(y_{j}\) to the \(t\)-th position, i.e., \(y^{\prime}_{t}=y_{j}\), \(y^{\prime}_{s}=y_{s-1}\), for \(t+1\leq s\leq j\), and \(y^{\prime}_{s}=y_{s}\), for \(s<t\) or \(s>j\). Since only \(y_{t},y_{t+1},\ldots,y_{j}\) changed indices but \(v(x_{t})=v(x_{t+1})=\ldots=v(x_{j})\), we again have that \(v(x_{j})\geq v(y^{\prime}_{j})\) for every \(j\in[k]\). Moreover, now the smallest \(\ell\) for which there exist \(j>\ell\) such that \(y_{j}=x_{\ell}\) is strictly larger than \(t\). By repeating this renaming of the elements of \(Y\) we end up with a renaming \(\{y^{*}_{1},\ldots,y^{*}_{k}\}\) such that for every \(j\in[k]\), \(v(x_{j})\geq v(y^{*}_{j})\) and \(y^{*}_{j}\notin\{x_{1},x_{2},\ldots,x_{j-1}\}\). So, assuming that the elements of \(X\) are ordered in non-increasing value with respect to \(v\) and that \(y_{j}\notin\{x_{1},x_{2},\ldots,x_{j-1}\}\), for any \(j\in[k]\), suppose towards a contradiction that \(v(X)<v(Y)\). That is, \(v(\{x_{1},x_{2},\ldots,x_{k}\})<v(\{y_{1},y_{2},\ldots,y_{k}\})\). Observe that if \(v(\{x_{1},x_{2},\ldots,x_{k-1}\})\geq v(\{y_{1},y_{2},\ldots,y_{k-1}\})\), this would imply that \(v(\{x_{1},\ldots,x_{k-1},y_{k}\})\geq v(\{y_{1},\ldots,y_{k-1},y_{k}\})\), by the definition of cancelable valuations and the fact that \(y_{k}\notin\{x_{1},\ldots,x_{k-1}\}\cup\{y_{1},\ldots,y_{k-1}\}\). 
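As a quick, informal sanity check of Lemma 3.3, the snippet below tests the statement on a budget-additive valuation, which is cancelable as mentioned in the introduction; the item values, the budget, and the set sizes are arbitrary choices for illustration.

```python
import random

random.seed(1)
goods = list(range(8))
values = {g: random.randint(1, 10) for g in goods}
budget = 20

def v(S):
    # budget-additive (hence cancelable) valuation: v(S) = min(B, sum of item values)
    return min(budget, sum(values[g] for g in S))

# Lemma 3.3: if v({x_j}) >= v({y_j}) for all j, then v(X) >= v(Y).
for _ in range(10000):
    X = random.sample(goods, 4)
    Y = random.sample(goods, 4)
    if all(v({x}) >= v({y}) for x, y in zip(X, Y)):
        assert v(set(X)) >= v(set(Y))
```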
This leads to \[v(\{x_{1},\ldots,x_{k-1},x_{k}\})\geq v(\{x_{1},\ldots,x_{k-1},y_{k}\})\geq v( \{y_{1},\ldots,y_{k-1},y_{k}\})\,,\] where the first inequality follows from \(v(x_{k})\geq v(y_{k})\) and Fact 2.2, contradicting our initial assumption. Therefore, \(v(\{x_{1},\ldots,x_{k-1}\})<v(\{y_{1},\ldots,y_{k-1}\})\). By repeating the same argument \(k-2\) more times, we end up with \(v(x_{1})<v(y_{1})\), a contradiction. Proof of Theorem 3.2.: Now we show that the bluff profile for cancelable valuations is an exact PNE. Consider the goods named \(h_{1},\ldots,h_{m}\) as in the bluff profile, i.e., by the order in which they are picked when each agent reports their preference order to be the one induced by all singleton good values. Consider agent \(i\). Her assigned set of goods under the bluff profile is \(A_{i}^{\mathrm{b}}=\{h_{i},h_{n+i},\ldots,h_{(k-1)n+i}\}\), where \(k=m/n\). Assume now that she deviates from \(\succ^{\mathrm{b}}\) to \(\succ_{i}\), resulting in some allocated set \(A_{i}=\{y_{1},y_{2},\ldots,y_{k}\}\), where we assume \(y_{r}\) to be allocated in round \(r\). We need to show \(v_{i}(A_{i}^{\mathrm{b}})\geq v_{i}(A_{i})\). To this end, we compare the goods allocated to agent \(i\) in both reports, one by one. If \(v_{i}(y_{r})\leq v_{i}(h_{(r-1)n+i})\) for every \(r\in[k]\), then we are done by applying Lemma 3.3 with \(A_{i}^{\mathrm{b}}\) and \(A_{i}\). If some of these inequalities fail, let \(r\) denote the latest round such that \(v_{i}(y_{r})>v_{i}(h_{(r-1)n+i}\). Therefore, in the execution of Mechanism 1 with the bluff profile as input, \(y_{r}\) was no longer available in round \(r\). However, \(y_{r}\) becomes available in round \(r\) once agent \(i\) deviates. This can only stem from the fact that at some point before round \(r\), a good \(h_{t}\) with \(t>(r-1)n+i\) was picked (since the overall number of goods picked per round always stays the same). Clearly, the only agent who could have done so (since she is the only one deviating from the common bluff order) is agent \(i\). Therefore, it holds that \(h_{t}=y_{j}\) for some \(j<r\). Now, we replace the ordered set \(Y=(y_{1},y_{2},\ldots,y_{k})\) by \(Y^{\prime}=(y_{1},\ldots,y_{j-1},y_{r},y_{j+1},\ldots,y_{r-1},y_{j},y_{r+1}, \ldots,y_{k})\), i.e., we simply exchange \(y_{r}\) and \(y_{j}\). It will be convenient to rename \(y_{1},\ldots,y_{k}\) so that \(Y^{\prime}=(y_{1}^{\prime},y_{2}^{\prime},\ldots,y_{k}^{\prime})\) We claim that it if agent \(i\) reports a preference ranking \(\succ_{i}^{\prime}\) that starts with all goods in \(Y^{\prime}\), in that specific order, followed by everything else, in any order, she still gets \(A_{i}\) but the goods are allocated in the order suggested by \(Y^{\prime}\). Indeed, first notice that the first \(j-1\) rounds of Round-Robin will be the same as in the run with the original deviation \(\succ_{i}\). Further, \(y_{j}^{\prime}=y_{r}\) is allocated earlier under \(\succ_{i}^{\prime}\) than under \(\succ_{i}\), and thus it surely is available at the time. After that, rounds \(j-1\) to \(r-1\) will be the same as in the run with the deviation \(\succ_{i}\). Now \(y_{r}^{\prime}=y_{j}\) is allocated later than before, namely in round \(r\), but it is not among the first \((r-1)n+i\) goods in the bluff order, as noted above, which means it is not allocated to any other agent in any round before the \(r\)-th under \(\succ_{i}^{\prime}\). Finally, rounds \(r+1\) to \(k\) will be the same as in the run with \(\succ_{i}\). 
Although agent \(i\) still is assigned the same set \(A_{i}\) by deviating to \(\succ_{i}^{\prime}\), we now have \(v_{i}(y_{r}^{\prime})=v_{i}(y_{j})\leq v_{i}(h_{(r-1)n+i}\), where the inequality holds because both goods are available in round \(r\) of the bluff run, and agent one prefers \(h_{(r-1)n+i}\). Also, all later goods in \(Y^{\prime}\) remain unchanged, i.e., \(y_{i}^{\prime}=y_{s}\) for \(s>r\). Therefore, the latest occurrence of some \(y_{r}^{\prime}>h_{(\ell-1)n+i}\) now happens at an earlier point in the sequence, if at all. Repeating this process until no such occurrence is left yields an ordering \(Y^{*}=(y_{1}^{*},y_{2}^{*},\ldots,y_{k}^{*})\) of \(A_{i}\) such that for all \(r\in[k]\), \(v_{i}(y_{r}^{*})\leq v_{i}(h_{(r-1)n+i})\). Now using Lemma 3.3 completes the proof. ### Submodular valuations We move on to the much more general class of submodular valuations. In order to define the bluff profile in this case, we again would like to start from the truthful profile. However, recall that Round-Robin restricts each agent's report to specifying an ordering on the good set \(M\) and these preference rankings are not expressive enough to fully capture submodular valuation functions. In fact, it is not obvious what 'truthful' means here without further assumptions on what information is known by the agents. Still, we define a _truthfully greedy_ allocation and use this as our starting point. Imagine that, instead of having a full preference profile from the beginning, we only ask the active agent \(i\) (i.e., the agent to which we are about to allocate a new good) for the good with the largest marginal value with respect to her current set of goods \(A_{i}\) and give this to her. Let \(h_{1},h_{2},\ldots,h_{m}\) be a renaming of the goods according to the order in which they would be allocated in this hypothetical truthfully greedy scenario and \(\succ^{\mathrm{b}}\) be the corresponding total order. Like in the cancelable case, the bluff profile is the preference profile \(\succ^{\mathrm{b}}=(\succ^{\mathrm{b}},\succ^{\mathrm{b}},\ldots,\succ^{ \mathrm{b}})\). Formally, the renaming of the goods is performed as described in Algorithm 2 below. It should be noted that this definition of the bluff profile is consistent with the definition for cancelable functions, assuming that all ties are resolved lexicographically. Also notice that the allocation Round-Robin(\(\succ^{\mathrm{b}}\)) produced under the bluff profile is exactly \((X_{1},X_{2},\allowbreak\ldots,X_{n})\), as described in Algorithm 2, i.e., \(X_{i}=A_{i}^{\mathrm{b}}=\{h_{i},h_{n+i},\ldots,h_{(k-1)n+i}\}\), where recall that \(k=m/n\). ``` 1:\(X_{i}=\emptyset\) for \(i\in[n]\) 2:for\(j=1,\ldots,m\)do 3:\(i=(j-1)\pmod{n}+1\) 4:\(h_{j}=\operatorname*{arg\,max}\limits_{g\in M\setminus\bigcup_{t}X_{t}}\,v_{i}(g \,|\,X_{i})\) // Ties are broken lexicographically. 5:\(X_{i}=X_{i}\cup\{h_{j}\}\) 6:return\((h_{1},h_{2},\ldots,h_{m})\) ``` **Algorithm 2** Greedy renaming of goods for defining the bluff profile The main result of this section is Theorem 3.7 stating that the bluff profile is a \(\frac{1}{2}\)-approximate PNE when agents have submodular valuation functions. While this sounds weaker than Theorem 3.2, it should be noted that for submodular agents Mechanism 1 does not have PNE in general, even for relatively simple instances, as stated in Proposition 3.4. In fact, even the existence of approximate equilibria can be seen as rather surprising, given the generality of the underlying valuation functions. 
**Proposition 3.4**.: _There exists an instance where all agents have submodular valuation functions such that Mechanism 1 has no \((\frac{3}{4}+\varepsilon)\)-approximate PNE._ Proof.: Consider an instance with 2 agents and 4 goods \(M=\{g_{1},g_{2},g_{3},g_{4}\}\), with the following valuation for all possible 2-sets: \[v_{1}(\{g_{1},g_{2}\}) =3 v_{2}(\{g_{1},g_{2}\}) =4\] \[v_{1}(\{g_{1},g_{3}\}) =3 v_{2}(\{g_{1},g_{3}\}) =4\] \[v_{1}(\{g_{1},g_{4}\}) =4 v_{2}(\{g_{1},g_{4}\}) =3\] \[v_{1}(\{g_{2},g_{3}\}) =4 v_{2}(\{g_{2},g_{3}\}) =3\] \[v_{1}(\{g_{2},g_{4}\}) =3 v_{2}(\{g_{2},g_{4}\}) =4\] \[v_{1}(\{g_{3},g_{4}\}) =3 v_{2}(\{g_{3},g_{4}\}) =4\] In addition, all individual goods have the same value: \(v_{1}(x)=v_{2}(x)=2\) for \(x\in M\), while all 3-sets and 4-sets have value 4, for both agents. We begin by establishing that this valuation function is indeed submodular for both agents. Observe for any set \(S\subseteq M\) and \(i\in[2],j\in[4]\) we have: \[|S| =0 \Rightarrow v_{i}(g_{j}\mid S)\in\{2\}\] \[|S| =1 \Rightarrow v_{i}(g_{j}\mid S)\in\{1,2\}\] \[|S| =2 \Rightarrow v_{i}(g_{j}\mid S)\in\{0,1\}\] \[|S| =3 \Rightarrow v_{i}(g_{j}\mid S)=0\,,\] which immediately implies that both valuation functions are indeed submodular. Notice that for any reported preferences \(\succ_{1},\succ_{2}\), one of the two agents will receive goods leading to a value of 3. If this is the agent 1, she can easily deviate and get 4 instead. In particular, if agent 2 has good \(g_{2}\) or \(g_{3}\) first in their preferences then agent 1 can get \(\{g_{1},g_{4}\}\), and if agent 2 has good \(g_{1}\) or \(g_{4}\) as first then agent 1 can get \(\{g_{2},g_{3}\}\) instead. On the other hand, if agent 2 received a value of 3 they can also always deviate to 4. Notice that for any \(g_{a}\), agent 2 always has two sets different sets \(\{g_{a},g_{b}\}\), \(\{g_{a},g_{c}\}\) with value 4 and one \(\{g_{a},g_{d}\}\) with value 3. Thus, for any preference of agent 1 with \(g_{\hat{a}}\succ_{1}g_{\hat{b}}\succ_{1}g_{\hat{c}}\succ_{1}g_{\hat{d}}\), agent 2 can deviate and get either \(\{g_{\_}b{g_{\_}d{\_}j}\}\) or \(\{g_{\_}c,g_{\_}d{\_}j\}\), one of which must have value 4. Therefore, in every outcome there exists an agent that can deviate to improve their value from 3 to 4. Moving towards the proof of Theorem 3.7 for the submodular case, we note that although it is very different from that of Theorem 3.2, we will still need an analog of the main property therein, i.e., the existence of a good-wise comparison between the goods an agent gets under the bluff profile and the ones she gets by deviating. As expected, the corresponding property here (see Lemma 3.5) is more nuanced and does not immediately imply Theorem 3.7 as we are now missing the analog of Lemma 3.3. Throughout this section, we are going to argue about an arbitrary agent \(i\). To simplify the notation, let us rename \(X_{\_}i=A_{\_}i^{\_}i=\{h_{\_}h_{\_}n+i,\ldots,h_{\_}(k-1)n+i\}\) to simply \(X=\{x_{\_}1,x_{\_}2,\ldots,x_{\_}k\}\), where we have kept the order of indices the same, i.e., \(x_{\_}j=h_{\_}(j-1)n+i\). This way, the goods in \(X\) are ordered according to how they were allocated to agent \(i\) in the run of Mechanism 1 with the bluff profile as input. We also need to define the ordering of the goods agent \(i\) gets when she deviates from the bluff bid \(>^{\_}b\) to another preference ranking \(>_{\_}i\). Let \(A_{\_}i=Y=\{y_{\_}1,y_{\_}2,\ldots,y_{\_}k\}\) be this set of goods. 
Instead of renaming the elements of \(Y\) in a generic fashion like in the proof of Theorem 3.2, doing so becomes significantly more complicated, and we need to do it in a more systematic way, see Algorithm 3. Input: \(X=\{x_{\_}1,x_{\_}2,\ldots,x_{\_}k\}\), \(Y\), and a value oracle for \(v_{\_}i(\cdot)\) ``` 1:\(Z=Y\) 2:for\(j=|Y|,\ldots,1\)do 3:\(y^{\prime}_{\_}j=\operatorname*{arg\,min}_{g\in Z}v_{\_}i(g\,|\,\{x_{\_}1, \ldots,x_{\_}j-1\})\)// Ties are broken lexicographically. 4:\(Z=Z\setminus\{y^{\prime}_{\_}j\}\) 5:return\((y^{\prime}_{\_}1,y^{\prime}_{\_}2,\ldots,y^{\prime}_{\_}{|Y|})\) ``` **Algorithm 3** Greedy renaming of goods for the deviating agent \(i\) In what follows, we assume that the indexing \(y_{\_}1,y_{\_}2,\ldots,y_{\_}k\) is already the result of Algorithm 3. This renaming is crucial and it will be used repeatedly. In particular, we need this particular ordering in order to prove that \(v_{\_}i(x_{\_}j\,|\,\{x_{\_}1,\ldots,x_{\_}j-1\})\geq v_{\_}i(y_{\_}j\,|\,\{x_{ \_}1,\ldots,x_{\_}j-1\})\), for all \(j\in[k]\), in Lemma 3.5 below. Towards that, we need to fix some notation for the sake of readability. For \(j\in[k]\), we use \(X^{j}_{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\}\), respectively. The sets \(Y^{j}_{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\}\), for \(j\in[k]\), are defined analogously. We also use \(X^{0}_{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{ \_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\_}{\}\). The main high-level idea of the proof is that if \(v_{\_}i(y_{\_}t\,|\,X^{\ell-1}_{\_}-)>v_{\_}i(x_{\_}t\,|\,X^{\ell-1}_{\_}-)\) for some \(\ell\), then it must be the case that during the execution of Round-Robin\((\succ^{\_}b)\) every good in \(Y^{\ell}_{\_}-=\{y_{\_}1,\ldots,y_{\_}\ell\}\) is allocated before the turn of agent \(i\) in round \(\ell\). Then, using a simple counting argument, we show that agent \(i\) cannot receive all the goods in \(Y^{\ell}_{\_}-\) when deviating, leading to a contradiction. **Lemma 3.5**.: _Let \(X=\{x_{\_}1,x_{\_}2,\ldots,x_{\_}k\}\) be agent \(i\)'s bundle in Round-Robin\((\succ^{\_}b)\), where goods are indexed in the order they were allocated, and \(Y=\{y_{\_}1,y_{\_}2,\ldots,y_{\_}k\}\) be \(i\)'s bundle in Round-Robin\((\succ_{\_}i,\succ^{\_}b)\), where goods are indexed by Algorithm 3. Then, for every \(j\in[k]\), we have \(v_{\_}i(x_{\_}j\,|\,X^{j-1}_{\_}-)\geq v_{\_}i(y_{\_}j\,|\,X^{j-1}_{\_}-)\)._ Proof.: The way goods in \(X\) are indexed, we have that \(x_{\_}j\) is the good allocated to agent \(i\) in round \(j\) of Round-Robin\((\succ^{\_}b)\). 
Suppose, towards a contradiction, that there is some \(\ell\in[k]\), for which we have \(v_{\_}i(y_{\_}t\,|\,X^{\ell-1}_{\_}-)>v_{\_}i(x_{\_}t\,|\,X^{\ell-1}_{\_}-)\). First notice that \(\ell\neq 1\), as \(x_{\_}1\) is, by the definition of the bluff profile, a singleton of maximum value for agent \(i\) excluding the goods allocated to agents \(1\) through \(i-1\) in round \(1\), regardless of agent \(i\)'s bid. Thus, \(\ell\geq 2\). Let \(B\subseteq M\) and \(D\subseteq M\) be the sets of goods allocated (to any agent) up to right before a good is allocated to agent \(i\) in round \(\ell\) in Round-Robin\((\succ^{\_}b)\) and Round-Robin\((\succ_{\_}i,\succ^{\_}b)\), respectively. Clearly, \(|B|=|D|=(\ell-1)n+i-1\). In fact, we claim that in this case the two sets are equal. **Claim 3.6**.: _It holds that \(B=D\). Moreover, \(\{y_{1},\ldots,y_{\ell}\}\subseteq B\)._ Proof of the claim.: We first observe that \(v_{i}(y_{j}\,|\,X_{-}^{\ell-1})\geq v_{i}(y_{\ell}\,|\,X_{-}^{\ell-1})>v_{i}(x_{ \ell}\,|\,X_{-}^{\ell-1})\), for every \(j\in[\ell-1]\), where the first inequality follows from way Algorithm 3 ordered the elements of \(Y\). Now consider the execution of Round-Robin(\(\succ^{\mathrm{b}}\)). Since \(x_{\ell}\) was the good allocated to agent \(i\) in round \(\ell\), \(x_{\ell}\) had maximum marginal value for agent \(i\) with respect to \(X_{-}^{\ell-1}\) among the available goods. Thus, none of the goods \(y_{1},\ldots,y_{\ell}\) were available at the time. That is, \(y_{1},\ldots,y_{\ell}\) were all already allocated to some of the agents (possibly including agent \(i\) herself). We conclude that \(\{y_{1},\ldots,y_{l}\}\subseteq B\). Now suppose for a contradiction that \(D\neq B\) and consider the execution of Round-Robin(\(\succ_{i}\), \(\succ^{\mathrm{b}}_{-i}\)). Recall that the goods in \(B\) are still the \((\ell-1)n+i-1\) most preferable goods for every agent in \(N\setminus\{i\}\) according to the profile (\(\succ_{i}\), \(\succ^{\mathrm{b}}_{-i}\)). Therefore, all agents in \(N\setminus\{i\}\) will get goods from \(B\) allocated to them up to the point when a good is allocated to agent \(i\) in round \(\ell\), regardless of what \(\succ_{i}\) is. If agent \(i\) also got only goods from \(B\) allocated to her in the first \(\ell-1\) rounds of Round-Robin(\(\succ_{i}\), \(\succ^{\mathrm{b}}_{-i}\)), then \(D\) would be equal to \(B\). Thus, at least one good which is not in \(B\) (and thus, not in \(\{y_{1},\ldots,y_{\ell}\}\)) must have been allocated to agent \(i\) in the first \(\ell-1\) rounds. As a result, at the end of round \(\ell-1\), there are at least two goods in \(\{y_{1},\ldots,y_{\ell}\}\) that have not yet been allocated to \(i\). However, we claim that up to right before a good is allocated to agent \(i\) in round \(\ell+1\), all goods in \(B\) (and thus in \(\{y_{1},\ldots,y_{\ell}\}\) as well) will have been allocated, leaving \(i\) with at most \(\ell-1\) goods from \(\{y_{1},\ldots,y_{\ell}\}\) in her final bundle and leading to a contradiction. Indeed, this follows from a simple counting argument. Right before a good is allocated to agent \(i\) in round \(\ell+1\), the goods allocated to agents in \(N\setminus\{i\}\) are exactly \(\ell(n-1)+i-1\geq(\ell-1)n+i-1=|B|\). As noted above, agents in \(N\setminus\{i\}\) will get goods from \(B\) allocated to them as long as they are available. Thus, no goods from \(B\), or from \(\{y_{1},\ldots,y_{\ell}\}\) in particular, remain unallocated right before a good is allocated to agent \(i\) in round \(\ell+1\). 
Therefore, agent \(i\) may get at most \(\ell-1\) goods from \(\{y_{1},\ldots,y_{\ell}\}\) (at most \(\ell-2\) in the first \(\ell-1\) rounds and one in round \(\ell\)), contradicting the definition of the set \(Y\). We conclude that \(D=B\). Given the claim, it is now easy to complete the proof. Clearly, in the first \(\ell-1\) rounds of Round-Robin(\(\succ_{i}\), \(\succ^{\mathrm{b}}_{-i}\)) at most \(\ell-1\) goods from \(\{y_{1},\ldots,y_{\ell}\}\) have been allocated to agent \(i\). However, when it is \(i\)'s turn in round \(\ell\), only goods in \(M\setminus D\) are available, by the definition of \(D\). By Claim 3.6, we have \(\{y_{1},\ldots,y_{l}\}\subseteq D\), and thus there is at least one good \(\{y_{1},\ldots,y_{\ell}\}\) that is allocated to another agent, which contradicts the definition of \(Y\). We are now ready to state and prove the main result of this section. **Theorem 3.7**.: _When all agents have submodular valuation functions, the bluff profile is a \(\frac{1}{2}\)-approximate PNE of Mechanism 1. Moreover, this is tight, i.e., for any \(\varepsilon>0\), there are instances where the bluff profile is not a \(\left(\frac{1}{2}+\varepsilon\right)\)-approximate PNE._ Proof.: We are going to use the notation used so far in the section and consider the possible deviation of an arbitrary agent \(i\). Like in the statement of Lemma 3.5, \(X=\{x_{1},\ldots,x_{k}\}\) is agent \(i\)'s bundle in Round-Robin(\(\succ^{\mathrm{b}}\)), with goods indexed in the order they were allocated, and \(Y=\{y_{1},y_{2},\ldots,y_{k}\}\) is \(i\)'s bundle in Round-Robin(\(\succ_{i}\), \(\succ^{\mathrm{b}}_{-i}\)), with goods indexed by Algorithm 3. Also, recall that \(X^{j}_{-}=\{x_{1},\ldots,x_{j}\}\) and \(X^{j}_{+}=\{x_{j},\ldots,x_{k}\}\) (and similarly for \(Y^{j}_{-}\) and \(Y^{j}_{+}\)). We also use the convention that \(Y^{k+1}_{+}=\emptyset\). For any \(j\in[k]\), we have \[v_{i}(X^{j}_{-})-v_{i}(X^{j-1}_{-}) =v_{i}(x_{j}\,|\,X^{j-1}_{-})\] \[\geq v_{i}(y_{j}\,|\,X^{j-1}_{-})\] \[\geq v_{i}(y_{j}\,|\,X^{j-1}_{-}\cup Y^{j+1}_{+})\] \[=v_{i}(X_{-}^{j-1}\cup Y_{+}^{j+1}\cup\{y_{j}\})-v_{i}(X_{-}^{j-1}\cup Y _{+}^{j+1})\] \[=v_{i}(X_{-}^{j-1}\cup Y_{+}^{j})-v_{i}(X_{-}^{j-1}\cup Y_{+}^{j+1})\] \[\geq v_{i}(X_{-}^{j-1}\cup Y_{+}^{j})-v_{i}(X_{-}^{j}\cup Y_{+}^{j+ 1})\.\] The first inequality holds because Lemma 3.5 applies on \(X\) and \(Y\), whereas the second inequality holds because of submodularity. Finally, the last inequality holds since \(X_{-}^{j-1}\subseteq X_{-}^{j}\) and \(v_{i}(\cdot)\) is non-decreasing, for every \(i\in N\). Using these inequalities along with a standard expression of the value of a set as a sum of marginals, we have \[v_{i}(X) =v_{i}(X_{-}^{k})-v_{i}(X_{-}^{0})\] \[=\sum_{j=1}^{k}\left(v_{i}(X_{-}^{j})-v_{i}(X_{-}^{j-1})\right)\] \[\geq\sum_{j=1}^{k}\left(v_{i}(X_{-}^{j-1}\cup Y_{+}^{j})-v_{i}(X_ {-}^{j}\cup Y_{+}^{j+1})\right)\] \[=v_{i}(X_{-}^{0}\cup Y_{+}^{1})-v_{i}(X_{-}^{k}\cup Y_{+}^{k+1})\] \[=v_{i}(Y)-v_{i}(X)\.\] Thus, we have \(v_{i}(X)\geq\frac{1}{2}\cdot v_{i}(Y)\), and we conclude that \(\succ^{\text{b}}\) is a \(\frac{1}{2}\)-approximate PNE of Mechanism 1. To show that the result is tight, consider an example with two agents and five goods. The valuation function of agent 1 is additive and defined as follows on the singletons: \[v_{1}(g_{1})=2\quad v_{1}(g_{2})=1\quad v_{1}(g_{3})=1-\varepsilon_{1}\quad v _{1}(g_{2})=1-\varepsilon_{2}\quad v_{1}(g_{5})=1-\varepsilon_{3}\,,\] where \(1\gg\varepsilon_{3}>\varepsilon_{2}>\varepsilon_{1}>0\). 
The valuation function of agent 2 is OXS2 and defined by the maximum matchings in the bipartite graph below, e.g., \(v_{2}(\{g_{1},g_{2}\})=2+1=3\) and \(v_{2}(\{g_{1},g_{4},g_{5}\})=2+1-\varepsilon_{2}=3-\varepsilon_{2}\). Footnote 2: Roughly speaking, OXS functions generalize unit-demand functions. The set of OXS functions is a strict superset of additive functions and a strict subset of submodular functions. See, [26, 27]. It is not hard to see that the bluff profile for this instance consists of the following declared ordering by both agents: \(g_{1}>g_{2}>g_{3}>g_{4}>g_{5}\). The allocation produced by Mechanism 1 for the bluff profile is then \(A=(A_{1},A_{2})\), where \(A_{1}=\{g_{1},g_{3},g_{5}\}\), and \(A_{2}=\{g_{2},g_{4}\}\). Observe that \(v_{1}(A_{1})=4-\varepsilon_{1}-\varepsilon_{3}\) and \(v_{2}(A_{2})=1\). It is easy to see that there is no profitable deviation for agent 1, while the maximum value that agent \(2\) can attain by deviating is \(2-\varepsilon_{1}-\varepsilon_{2}\). Agent \(2\) achieves this by reporting the preference ranking: \(g_{3}>g_{4}>g_{1}>g_{2}>g_{5}\) and getting goods \(\{g_{3},g_{4}\}\). This implies that for any \(\varepsilon>0\) one can chose appropriately small \(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3}\) so that the bluff profile is not a \(\left(\frac{1}{2}+\varepsilon\right)\)-approximate PNE. In Section 4, we show that every approximate PNE of Mechanism 1 results in an approximately EF1 allocation. Here, as a warm-up, we start this endeavor with an easy result which holds specifically for the bluff profile (and can be extended to approximate PNE where all agents submit the same preference ranking) but shows a better fairness guarantee than our general Theorem 4.4. **Theorem 3.8**.: _When all agents have submodular valuation functions \(v_{1},\ldots,v_{n}\), the allocation returned by Round-Robin(\(\succ^{\mathrm{b}}\)) is \(\frac{1}{2}\)-EF1 with respect to \(v_{1},\ldots,v_{n}\). Moreover, this is tight, i.e., for any \(\varepsilon>0\), there are instances where this allocation is not \(\left(\frac{1}{2}+\varepsilon\right)\)-EF1._ Proof.: In order to obtain a contradiction, suppose that the allocation \((A_{1}^{\mathrm{b}},A_{2}^{\mathrm{b}},\ldots,A_{n}^{\mathrm{b}})\) returned by Round-Robin(\(\succ^{\mathrm{b}}\)) is not \(\frac{1}{2}\)-EF1. That is, there exist agents \(i\) and \(j\) such that \(v_{i}(A_{i}^{\mathrm{b}})<0.5\cdot v_{i}(A_{j}^{\mathrm{b}}\setminus\{g\})\), for all \(g\in A_{j}^{\mathrm{b}}\). We are going to show that this allows us to construct a deviation for agent \(i\) where she gets value more than \(2v_{i}(A_{i}^{\mathrm{b}})\), contradicting the fact that \(\succ^{\mathrm{b}}\) is a \(\frac{1}{2}\)-approximate PNE. Recall that using the renaming \(h_{1},h_{2},\ldots\) produced by Algorithm 2, we have \(A_{i}^{\mathrm{b}}=\{h_{i},h_{n+i},\ldots,h_{(k-1)n+i}\}\) and \(A_{j}^{\mathrm{b}}=\{h_{j},h_{n+j},\ldots,h_{(k-1)n+j}\}\). Let \(\delta\) be the indicator variable of the event \(j<i\), i.e., \(\delta\) is \(1\) if \(j<i\) and \(0\) otherwise. We will show that it is possible for agent \(i\) to get the set \(\{h_{\delta n+j},h_{(1+\delta)n+j},h_{(2+\delta)n+j},\ldots,h_{(k-1)n+j}\}\), which is either the entire \(A_{j}^{\mathrm{b}}\) (when \(i<j\)) or \(A_{j}^{\mathrm{b}}\setminus\{h_{j}\}\) (when \(j<i\)). 
In particular, let \(\succ_{i}\) be a preference ranking that starts with all goods in \(A_{j}^{\mathrm{b}}\) in the same order as they were allocated to agent \(j\) in Round-Robin(\(\succ^{\mathrm{b}}\)), followed by everything else, in any order. Consider the execution of Round-Robin(\(\succ_{i},\succ_{-i}^{\mathrm{b}}\)). The crucial, yet simple, observation (that makes an inductive argument work) is that the first \(i-1\) goods \(h_{1},\ldots,h_{i-1}\) are allocated as before, then good \(h_{\delta n+j}\) (rather than \(h_{i}\)) is allocated to agent \(i\), and after that the \(n-1\) top goods for all agents in \(N\setminus\{i\}\) according to \(\succ_{-i}^{\mathrm{b}}\) are \(h_{i},h_{i+1},\ldots,h_{\delta n+j-1},h_{\delta n+j+1},\ldots,h_{n+i-1}\), and these are allocated in the next \(n-1\) steps of the algorithm. As a result, right before a second good is allocated to agent \(i\), the available goods are \(h_{n+i},h_{n+i+1},\ldots,h_{m}\) exactly as in the execution of Round-Robin(\(\succ^{\mathrm{b}}\)). More generally, right before an \(r\)-th good is allocated to \(i\), her bundle is \(\{h_{\delta n+j},h_{(1+\delta)n+j},h_{(2+\delta)n+j},\linebreak\ldots,h_{(r-2 +\delta)n+j}\}\), and the available goods are \(h_{(r-1)n+i},h_{(r-1)n+i+1},\ldots,h_{m}\) (as they were in the execution of Round-Robin(\(\succ^{\mathrm{b}}\))). Then good \(h_{(r-1+\delta)n+j}\) (rather than \(h_{(r-1)n+i}\)) is allocated to agent \(i\), and after that the \(n-1\) top goods for all agents according to \(\succ_{-i}^{\mathrm{b}}\) are \[h_{(r-1)n+i},h_{(r-1)n+i+1},\ldots,h_{(r-1+\delta)n+j-1},h_{(r-1+\delta)n+j+1},\ldots,h_{rn+i-1}\,,\] and they are allocated in the next \(n-1\) steps of the algorithm. At the end, agent \(i\) gets the entire \(A_{j}^{\mathrm{b}}\) or \(A_{j}^{\mathrm{b}}\setminus\{h_{j}\}\) plus some arbitrary good, depending on whether \(i<j\) or \(j<i\). In either case, by monotonicity, agent \(i\)'s value for her bundle is at least \(v_{i}(A_{j}^{\mathrm{b}}\setminus\{h_{j}\})>2v_{i}(A_{i}^{\mathrm{b}})\), where the last inequality follows from our assumption that \((A_{1}^{\mathrm{b}},A_{2}^{\mathrm{b}},\ldots,A_{n}^{\mathrm{b}})\) is not \(\frac{1}{2}\)-EF1. Therefore, by deviating from \(\succ^{\mathrm{b}}\) to \(\succ_{i}\), agent \(i\) increases her value by a factor strictly greater than \(2\), contradicting Theorem 3.7. To show that this factor is tight, we again turn to the example given within the proof of Theorem 3.7. Recall the allocation produced by Mechanism 1 for the bluff profile is \(A=(A_{1},A_{2})\), with \(A_{1}=\{g_{1},g_{3},g_{5}\}\) and \(A_{2}=\{g_{2},g_{4}\}\). Observe that agent \(1\) is envy-free towards agent \(2\) as \(v_{1}(A_{1})=4-\varepsilon_{1}-\varepsilon_{3}>2-\varepsilon_{2}=v_{1}(A_{2})\). On the other hand, \(v_{2}(A_{2})=1\), whereas \(v_{2}(A_{1})=4-\varepsilon_{1}-\varepsilon_{3}\) and \(v_{2}(A_{1}\setminus\{g_{1}\})=2-\varepsilon_{1}-\varepsilon_{3}\). The latter implies that for any \(\varepsilon>0\) one can chose appropriately small \(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3}\) so that the bluff profile does not result in a \(\left(\frac{1}{2}+\varepsilon\right)\)-EF1 allocation with respect to the true valuation functions of the agents. Fairness properties of PNE In Section 2.3, Proposition 2.5, we state the fairness guarantees of Round-Robin\(-\)viewed as an algorithm\(-\) when all agents have cancelable valuation functions. So far, we have not discussed this matter for the submodular case. 
It is not hard to see, however, that Theorem 3.8 and the definition of the bluff profile via Algorithm 2 imply that when we have (value oracles for) the valuation functions, then we can use Round-Robin to algorithmically produce \(\frac{1}{2}\)-EF1 allocations. Using similar arguments, we show next that for any preference profile \(\succs=(\succ_{1},\ldots,\succ_{n})\) and any \(i\in N\), there is always a response \(\succ_{i}^{\prime}\) of agent \(i\) to \(\succs_{-i}\), such that the allocation returned by Round-Robin\((\succ_{i}^{\prime},\succs_{-i})\) is \(\frac{1}{2}\)-EF1 _from agent \(i\)'s perspective_. Towards this, we first need a variant of Algorithm 2 that considers everyone in \(N\setminus\{i\}\) fixed to their report in \(\succs_{-i}\) and greedily determines a "good" response for agent \(i\). An intuitive interpretation of what Algorithm 4 below is doing, can be given if one sees Mechanism 1 as a sequential game. Then, given that everyone else stays consistent with \(\succs_{-i}\), agent \(i\)_picks_ a good of maximum marginal value every time her turn is up. ``` 1:\(S=M;X=\emptyset\) 2:for\(j=1,\ldots,m\)do 3:\(\ell=(j-1)\pmod{n}+1\) 4:if\(\ell=i\)then 5:\(x_{\lceil j/n\rceil}=\operatorname*{arg\,max}_{g\in S}v_{i}(g\,|\,X)\)// Ties are broken lexicographically. 6:\(X=X\cup\{x_{\lceil j/n\rceil}\}\) 7:\(S=S\setminus\{x_{\lceil j/n\rceil}\}\) 8:else 9:\(g=\operatorname{top}(>_{\ell},S)\) 10:\(S=S\setminus\{g\}\) 11:return\(x_{1}\succ_{i}^{\prime}x_{2}\succ_{i}^{\prime}\ldots\succ_{i}^{\prime}x_{k} \succ_{i}^{\prime}\ldots\)// Arbitrarily complete \(\succ_{i}^{\prime}\) with goods in \(M\setminus X\). ``` **Algorithm 4** Greedy response of agent \(i\) to \(\succs_{-i}\) Proving the next lemma closely follows the proof of Theorem 3.7 but without the need of an analog of Lemma 3.5, as we get this for free from the way the greedy preference profile \(\succ_{i}^{\prime}\) is constructed. **Lemma 4.1**.: _Assume that agent \(i\) has a submodular valuation function \(v_{i}\). If \(\succ_{i}^{\prime}\) is the ranking returned by Algorithm 4 when given \(N,M,\succs_{-i},v_{i}\), then the allocation \((A_{1}^{\prime},A_{2}^{\prime},\ldots,A_{n}^{\prime})\) returned by Round-Robin\((\succ_{i}^{\prime},\succs_{-i})\) is such that for every \(j\in N\), with \(A_{j}^{\prime}\neq\emptyset\), there exists a good \(g\in A_{j}^{\prime}\), so that \(v_{i}(A_{i}^{\prime})\geq\frac{1}{2}\cdot v_{i}(A_{j}^{\prime}\setminus\{g\})\)._ Proof.: First, it is straightforward to see that \(A_{i}^{\prime}=X\), as computed in Algorithm 4. Indeed, Algorithm 4 simulates Mechanism 1 for all \(j\in N\setminus\{i\}\) and iteratively builds \(\succ_{i}^{\prime}\), so that in every turn of Round-Robin\((\succ_{i}^{\prime},\succs_{-i})\) the good allocated to agent \(i\) is one of maximum marginal value. As a result, the goods in \(A_{i}^{\prime}=X=\{x_{1},x_{2},\ldots,x_{k}\}\) are already indexed in the order they are allocated. Now consider an arbitrary \(j\in N\setminus\{i\}\) and let \(A_{j}^{\prime}=Y=\{y_{1},y_{2},\ldots,y_{k}\}\), where goods are again indexed in the order they are allocated in Round-Robin\((\succ_{i}^{\prime},\succs_{-i})\). Notice that when good \(x_{r}\) is allocated to agent \(i\) in round \(r\), goods \(y_{r+1},y_{r+2},\ldots\) are still available and, by construction of \(X\), their marginal value with respect to the set \(\{x_{1},x_{2},\ldots,x_{r-1}\}\) is no better than the marginal value of \(x_{r}\). 
In particular, \(v_{i}(x_{r}\,|\,\{x_{1},\ldots,x_{r-1}\})\geq v_{i}(y_{r+1}\,|\,\{x_{1},\ldots,x_{r-1}\})\). Also, recall the use of \(X_{-}^{r},X_{+}^{r},\,Y_{-}^{r},\,Y_{+}^{r}\) notation from the proof of Theorem 3.7. We will use a similar calculation here as well, but we will omit the first element of \(Y\). For any \(r\in[k]\), we have \[v_{i}(X^{r}_{-})-v_{i}(X^{r-1}_{-}) =v_{i}(X_{r}\,|\,X^{r-1}_{-})\] \[\geq v_{i}(y_{r+1}\,|\,X^{r-1}_{-})\] \[\geq v_{i}(y_{r+1}\,|\,X^{r-1}_{-}\cup Y^{r+2}_{+})\] \[=v_{i}(X^{r-1}_{-}\cup Y^{r+2}_{+}\cup\{y_{r+1}\})-v_{i}(X^{r-1}_{ -}\cup Y^{r+2}_{+})\] \[=v_{i}(X^{r-1}_{-}\cup Y^{r+1}_{+})-v_{i}(X^{r-1}_{-}\cup Y^{r+2}_ {+})\] \[\geq v_{i}(X^{r-1}_{-}\cup Y^{r+1}_{+})-v_{i}(X^{r}_{-}\cup Y^{r+2 }_{+})\,,\] where we used the convention that \(Y^{k+1}_{+}=Y^{k+2}_{+}=\emptyset\). The first inequality holds by the construction of \(X\) as discussed above, the second inequality follows from submodularity, and the last inequality holds because \(v_{i}(\cdot)\) is non-decreasing. Using these inequalities and a standard expression of the value of a set as a sum of marginals, we have \[v_{i}(X) =v_{i}(X^{k}_{-})-v_{i}(X^{0}_{-})\] \[=\sum_{r=1}^{k}\big{(}v_{i}(X^{r}_{-})-v_{i}(X^{r-1}_{-})\big{)}\] \[\geq\sum_{r=1}^{k}\big{(}v_{i}(X^{r-1}_{-}\cup Y^{r+1}_{+})-v_{i} (X^{r}_{-}\cup Y^{r+2}_{+})\big{)}\] \[=v_{i}(X^{0}_{-}\cup Y^{2}_{+})-v_{i}(X^{k}_{-}\cup Y^{k+2}_{+})\] \[=v_{i}(Y\setminus\{y_{1}\})-v_{i}(X)\;.\] Thus, we have \(v_{i}(A^{\prime}_{i})=v_{i}(X)\geq\frac{1}{2}\cdot v_{i}(Y\setminus\{y_{1}\})= \frac{1}{2}\cdot v_{i}(A^{\prime}_{j}\setminus\{y_{1}\})\). ### The Case of Two Agents As a warm-up, we begin with the easier case of \(n=2\). Not only the proofs of our main results for submodular and additive functions are much simpler here, but the fairness guarantees are stronger as well. **Theorem 4.2**.: _Let \(\alpha\in(0,1]\). Assume we have a fair division instance with two agents, whose valuation functions \(v_{1},v_{2}\) are submodular. Then any allocation that corresponds to a \(\alpha\)-approximate PNE of the Round-Robin mechanism is \(\frac{\alpha}{2}\)-EF1 with respect to \(v_{1},v_{2}\)._ Proof.: Let \(\succ=(>_{1},>_{2})\) be a \(\alpha\)-approximate PNE of Mechanism 1 for a given instance, and let \((A_{1},A_{2})\) be the allocation returned by \(\text{Round-Robin}(\succ)\). Consider one of the two agents; we call this agent \(i\in[2]\) and the other agent \(j\). We are going to show that \(v_{i}(A_{i})\geq\frac{\alpha}{2}\cdot v_{i}(A_{j}\setminus\{g\})\) for some good \(g\in A_{j}\). Suppose that agent \(i\) deviates to \(>^{\prime}_{i}\) produced by Algorithm 4 when given \(\succ_{-i}=(>_{j})\) and \(v_{i}\), and let \((A^{\prime}_{1},A^{\prime}_{2})\) be the allocation returned by \(\text{Round-Robin}(>^{\prime}_{i},\succ_{-i})\). Let \(A^{\prime}_{i}=\{x_{1},x_{2},\ldots,x_{k}\}\) and \(A_{j}\setminus A^{\prime}_{i}=\{y_{t_{1}},y_{t_{2}},\ldots,y_{t_{\ell}}\}\), where in both sets goods are indexed by the round in which they were allocated in the run of \(\text{Round-Robin}(>^{\prime}_{i},\succ_{-i})\). Note that all indices in \(A_{j}\setminus A^{\prime}_{i}\) are distinct exactly because \(n=2\) and, thus, all these goods are allocated to agent \(j\). 
This indexing guarantees that when \(x_{t_{\lambda}-1}\) gets allocated, \(y_{t_{\lambda}}\) is still available for \(2\leq\lambda\leq\ell\) and, thus, \[v(x_{t_{\lambda}-1}\,|\,\{x_{1},x_{2},\ldots,x_{t_{\lambda}-2}\})\geq v(y_{t_{ \lambda}}\,|\,\{x_{1},x_{2},\ldots,x_{t_{\lambda}-2}\})\,, \tag{1}\] by the way \(>_{i}^{\prime}\) is constructed (see also the proof of Lemma 4.1). Using Theorem 2.1, we have \[v_{i}(A_{j}\setminus\{y_{t_{1}}\}) \leq v_{i}(A_{i}^{\prime})+\sum_{g\in(A_{j}\setminus\{y_{t_{1}}\}) \setminus A_{i}^{\prime}}v(g\,|\,A_{i}^{\prime})\] \[=v_{i}(A_{i}^{\prime})+\sum_{\lambda=2}^{\ell}v(y_{t_{\lambda}}\,| \,A_{i}^{\prime})\] \[\leq v_{i}(A_{i}^{\prime})+\sum_{\lambda=2}^{\ell}v(y_{t_{\lambda }}\,|\,\{x_{1},x_{2},\ldots,x_{t_{\lambda}-2}\})\] \[\leq v_{i}(A_{i}^{\prime})+\sum_{\lambda=2}^{\ell}v(x_{t_{\lambda }-1}\,|\,\{x_{1},x_{2},\ldots,x_{t_{\lambda}-2}\})\] \[\leq v_{i}(A_{i}^{\prime})+\sum_{\lambda=1}^{k}v(x_{\lambda}\,|\, \{x_{1},x_{2},\ldots,x_{\lambda-1}\})\] \[=v_{i}(A_{i}^{\prime})+v_{i}(A_{i}^{\prime})\] \[\leq\frac{2}{\alpha}\cdot v_{i}(A_{i})\,,\] where the first inequality follows directly from Theorem 2.1, the second one follows from submodularity, the third inequality holds because of (1), the fourth one follows from the monotonicity of \(v_{i}\), and the last inequality follows from the fact that \(\succ\) is a \(\alpha\)-approximate PNE and thus \(v_{i}(A_{i})\geq\alpha\cdot v_{i}(A_{i}^{\prime})\). We conclude that \((A_{1},A_{2})\) is \(\frac{\alpha}{2}\)-EF1 with respect to the underlying valuation functions. For additive valuation functions we can get a slightly stronger fairness guarantee, which we show that is also tight for any \(\alpha\), with an even easier proof. Note that this reproduces the result of Amanatidis et al. [5] for exact PNE in the case of two agents. **Theorem 4.3**.: _Let \(\alpha\in(0,1]\). Assume we have a fair division instance with two agents, whose valuation functions \(v_{1},v_{2}\) are additive. Then any allocation that corresponds to a \(\alpha\)-approximate PNE of the Round-Robin mechanism is \(\frac{\alpha}{2-\alpha}\)-EF1 with respect to \(v_{1},v_{2}\). This is tight, i.e., for any \(\varepsilon>0\), there are instances where a \(\alpha\)-approximate PNE does not correspond to a \((\frac{\alpha}{2-\alpha}+\varepsilon)\)-EF1 allocation._ Proof.: Let \(\succ\) = \((>_{1},>_{2})\), \(A_{1},A_{2}\) be as in the proof of Theorem 4.2, but now consider the deviation of agent \(i\) to \(>_{i}^{\prime}\) which is a strict version of her true preference ranking \(\succcurlyeq_{i}^{*}\). Again, let \((A_{1}^{\prime},A_{2}^{\prime})\) be the allocation returned by Round-Robin\((>_{i}^{\prime},\succ_{-i})\). Let \(g\) be good of maximum value in \(A_{i}^{\prime}\) according to \(v_{i}\). Since \(>_{i}^{\prime}\) is a true preference ranking of agent \(i\), according to Proposition 2.5\((A_{1}^{\prime},A_{2}^{\prime})\) is EF1 from the point of view of agent \(i\). That is, we have \(v_{i}(A_{i}^{\prime})\geq v_{i}(A_{j}^{\prime}\setminus\{g\})\) and, thus, \(v_{i}(A_{i}^{\prime})\geq\frac{1}{2}\cdot v_{i}(M\setminus\{g\})\). Therefore, \[v_{i}(A_{j}\setminus\{g\}) =v_{i}(M\setminus\{g\})-v_{i}(A_{i})\] \[\leq 2\cdot v_{i}(A_{i}^{\prime})-v_{i}(A_{i})\] \[\leq\frac{2}{\alpha}\cdot v_{i}(A_{i})-v_{i}(A_{i})\] \[=\frac{2-\alpha}{\alpha}\cdot v_{i}(A_{i})\,,\] where the second inequality follows from the fact that \(\succ\) is a \(\alpha\)-approximate PNE and thus \(v_{i}(A_{i})\geq\alpha\cdot v_{i}(A_{i}^{\prime})\). 
We conclude that \((A_{1},A_{2})\) is \(\frac{\alpha}{2-\alpha}\)-EF1 with respect to \(v_{1},v_{2}\). To see that this guarantee is tight, consider an instance with two agents, and a set of five goods \(\{g_{1},g_{2},\ldots,g_{5}\}\). In addition, let the valuation functions of the agents to be additive and defined by: \[v_{1}(g_{j})=\begin{cases}6,&\text{if }j=1\\ 3+\delta,&\text{if }j=2\\ 3,&\text{if }j=3\\ 0.5+\delta,&\text{if }j=4\\ 0.5,&\text{if }j=5\end{cases}v_{2}(g_{j})=\begin{cases}6\beta,&\text{if }j=1\\ 3\beta+\delta,&\text{if }j=2\\ 3\beta,&\text{if }j=3\\ 0.5+\delta,&\text{if }j=4\\ 0.5,&\text{if }j=5\end{cases}\] where \(0.5\gg\delta\), and \(\beta>\frac{1}{6}+\delta\). Now suppose that the agents bid as follows: Agent 1 bids truthfully (i.e., an ordering \(\succ_{1}\) that is consistent with her true valuation function), while agent 2 bids \(g_{5}\succ_{2}g_{4}\succ_{2}g_{1}\succ_{2}g_{2}\succ_{2}g_{3}\). It is easy to confirm that the produced allocation is \(A=(A_{1},A_{2})=(\{g_{1},g_{2},g_{3}\},\{g_{4},g_{5}\})\). Regarding agent 1, she takes her three most desirable goods in this allocation so there is no profitable deviation for her. For the same reason, she is envy-free towards agent 2. Moving to agent 2, by observing her valuation function, we immediately derive that she is \(\frac{1+\delta}{6\beta+\delta}\)EF1 towards agent 1. The only thing that remains, is to check how much agent 2 can improve her utility through deviating. Initially notice that agent 2 cannot get good \(g_{1}\) regardless of her bid as this good is taken by agent 1 in round 1. At the same time, it is easy to verify that she cannot get both goods \(g_{2}\) and \(g_{3}\) due to the declared ordering of agent 1. Thus, the best bundle of goods that she can acquire is \(\{g_{2},g_{4}\}\) by deviating to the bid: \(g_{2}\succ_{2}^{\prime}g_{4}\succ_{2}^{\prime}g_{1}\succ_{2}^{\prime}g_{3} \succ_{2}^{\prime}g_{5}\) and attain a value of \(3\beta+0.5+2\delta\). By setting \(\alpha=\frac{1+\delta}{3\beta+0.5+2\delta}\) we trivially have that \((\succ_{1},\succ_{2})\) is a \(\alpha\)-approximate PNE. On the other hand, for a given \(\varepsilon>0\), we have \(\frac{\alpha}{2-\alpha}+\varepsilon=\frac{1+\delta}{6\beta+3\delta}+\varepsilon\) which is strictly larger than \(\frac{1+\delta}{6\beta+\delta}\) for sufficiently small \(\delta\). That is, there is a choice of \(\delta\) so that the \(\alpha\)-approximate PNE \((\succ_{1},\succ_{2})\) is not \(\frac{\alpha}{2-\alpha}+\varepsilon\)EF1. ### The Case of \(n\) Agents Looking back at the proofs of Theorems 4.2 and 4.3, the obvious fact that everything not in \(A_{i}\) or \(A_{i}^{\prime}\) was allocated to agent \(j\) played a key role in proving our sharp bounds. Moving to the general case of \(n\) agents, there is no reason to expect that we have some control on how the goods are redistributed between agents in \(N\setminus\{i\}\) when agent \(i\) deviates from an (approximate) equilibrium. Surprisingly, we show that this redistribution does not favor any agent too much from \(i\)'s perspective when the valuation functions are submodular or subadditive cancelable (Lemmata 4.6 and 4.7). Consequently, the main results of this section have similar flavor not only with respect to their statements, but with respect to their proofs as well. **Theorem 4.4**.: _Let \(\alpha\in(0,1]\). 
For instances with submodular valuation functions \(\{v_{i}\}_{i\in N}\), any \(\alpha\)-approximate PNE of the Round-Robin mechanism is \(\frac{\alpha}{3}\)-EF1 with respect to \(\{v_{i}\}_{i\in N}\)._ **Theorem 4.5**.: _Let \(\alpha\in(0,1]\). For instances with subadditive cancelable valuation functions \(\{v_{i}\}_{i\in N}\), any \(\alpha\)-approximate PNE of the Round-Robin mechanism is \(\frac{\alpha}{2}\)-EF1 with respect to \(\{v_{i}\}_{i\in N}\)._ As the proofs of both theorems have the same general structure and share Lemmata 4.6 and 4.7, we begin with some common wording and notation, consistent with our proofs for two agents. Given any instance, we use \(\succ\) = \((\succ_{1},\ldots,\succ_{n})\) for an arbitrary \(\alpha\)-approximate PNE of Mechanism 1. We then consider the deviation of some agent \(i\) to a preference ranking \(\succ_{i}^{\prime}\); in the submodular case \(\succ_{i}^{\prime}\) is the output of Algorithm 4 when given \(\succ_{-i}\) and \(v_{i}\), whereas in the cancelable case \(\succ_{i}^{\prime}\) is a strict version of \(i\)'s true preference ranking \(\succ_{i}^{*}\). We use \((A_{1},\ldots,A_{n})\) and \((A_{1}^{\prime},\ldots,A_{n}^{\prime})\) to denote the allocations returned by Round-Robin(\(\succ\)) and Round-Robin(\(\succ_{i}^{\prime},\succ_{-i}\)), respectively. In order to show that \((A_{1},\ldots,A_{n})\) as \(\frac{\alpha}{\kappa}\)-EF1 from agent \(i\)'s perspective (where \(\kappa\) is \(3\) for submodular and \(2\) for cancelable functions), we use the stronger EF1 guarantees that \((A^{\prime}_{1},\ldots,A^{\prime}_{n})\) has from her perspective. To this end, we use \(h^{\prime}_{r}\) to denote the good that was allocated to an agent \(\ell\in N\) in round \(r\) of Round-\(\text{Robin}(\succ^{\prime}_{i},\succ_{-i})\). In particular, \(A^{\prime}_{i}=\{h^{i}_{1},h^{i}_{2},\ldots,h^{i}_{k}\}\); recall that \(k=m/n\). Further, given that we have fixed agent \(i\), we use \(S_{r}\) and \(S^{\prime}_{r}\), for \(0\leq r\leq k-1\), to denote the set of goods that had been allocated up to right before a good was allocated to \(i\) in round \(r+1\) of Round-\(\text{Robin}(\succ^{\prime}_{i},\succ_{-i})\), respectively. That is, for \(0\leq r\leq k-1\), \(S_{r}\) and \(S^{\prime}_{r}\) contain the goods allocated in steps \(1\) through \(rn+i-1\) of Round-\(\text{Robin}(\succ)\) and Round-\(\text{Robin}(\succ^{\prime}_{i},\succ_{-i})\), respectively. For the next technical lemma we assume that the valuation functions are either submodular or cancelable and, in each case, we use the corresponding \(\succ^{\prime}_{i}\) as described above. **Lemma 4.6**.: _For any \(r\in[k]\), right before an \(r\)-th good is allocated to agent \(i\) in Round-\(\text{Robin}(\succ)\), there are at most \(r-1\) goods from \(S^{\prime}_{r-1}\) that are still unallocated, i.e., \(\left|S^{\prime}_{r-1}\setminus S_{r-1}\right|\leq r-1\)._ Proof.: We will prove the statement using induction on \(r\). For \(r=1\), it is straightforward that \(S_{0}=S^{\prime}_{0}\), as the preference rankings of agents \(1\) through \(i-1\) are the same in the two runs of the mechanism and, thus, the first goods allocated to them are exactly the same. Now suppose that the statement is true for every round up to round \(r\); we will show that it is true for round \(r+1\) as well. 
Initially, observe that if the number of unallocated goods from \(S^{\prime}_{r-1}\) is \(r-1\) right before a good is allocated to agent \(i\) in round \(r\), it will trivially be at most \(r-1\) right before a good is allocated to agent \(i\) in round \(r+1\) (as the number of unallocated goods from any set cannot increase as the allocation progresses). That is, \(\left|S^{\prime}_{r-1}\setminus S_{r}\right|\leq r-1\). Notice that the goods that might cause \(S^{\prime}_{r}\setminus S_{r}\) to increase are the elements of \[S^{\prime}_{r}\setminus S^{\prime}_{r-1}=\left\{h^{i}_{r},h^{i+1}_{r},\ldots, h^{n}_{r},h^{1}_{r+1},h^{2}_{r+1},\ldots,h^{i-1}_{r+1}\right\},\] and suppose that there are \(\lambda\) goods therein which are still unallocated right before a good is allocated to agent \(i\) in round \(r+1\) of Round-\(\text{Robin}(\succ)\). Clearly, if \(\lambda\leq 1\), we are done. So, assume that \(\lambda\geq 2\). This means that there are \(\lambda-1\geq 1\) unallocated goods in \((S^{\prime}_{r}\setminus S^{\prime}_{r-1})\setminus\{h^{i}_{r}\}\). Let \(g\) be one of these goods and let \(j\) be the agent to whom \(g\) was given, i.e., \(g=h^{j}_{r}\), where \(\bar{r}=r\), if \(j>i\), and \(\bar{r}=r+1\), if \(j<i\). In either case, notice that according to \(\succ_{j}\) the good \(g\) is better than any good in \(M\setminus S^{\prime}_{r}\) or else it would not have been allocated to \(j\) at round \(\bar{r}\) of Round-\(\text{Robin}(\succ^{\prime}_{i},\succ_{-i})\) when everything in \(M\setminus S^{\prime}_{r}\) is still available. We claim that \(g\) does not increase the number of elements in \(S^{\prime}_{r}\setminus S_{r}\). Indeed, given that \(g\) was available during step \((\bar{r}-1)n+j\) of Round-\(\text{Robin}(\succ)\) and that \(j\)'s declared preference ranking is still \(\succ_{j}\), the only possibility is that during that step one of the unallocated goods from \(S^{\prime}_{r-1}\cup\{h^{i}_{r},h^{i+1}_{r},\ldots,h^{j-1}_{r}\}\) was allocated to \(j\) instead. Therefore, the only good out of the \(\lambda\) candidate goods of \(S^{\prime}_{r}\setminus S^{\prime}_{r-1}\) which might count towards the number of elements in \(S^{\prime}_{r}\setminus S_{r}\) is \(h^{i}_{r}\). We conclude that \(S^{\prime}_{r}\setminus S_{r}\leq(r-1)+1=r\). Lemma 4.6 is global, illustrating that the sets \(S_{r}\) and \(S^{\prime}_{r}\) cannot differ in more than a \(1/n\)-th of their elements. The next lemma shows that no agent can accumulate too many goods from \(S^{\prime}_{r}\), for any \(0\leq r\leq k-1\). Again, we assume that the valuation functions are either submodular or cancelable and, in each case, the appropriate \(\succ^{\prime}_{i}\) is used as discussed after the statements of Theorems 4.2 and 4.3. Note that \(S^{\prime}_{0}\) in the lemma's statement contains exactly these goods which we will exclude when showing the EF1 guarantee for our two theorems. **Lemma 4.7**.: _For any \(r\in[k]\) and any \(j\in N\), agent \(j\) gets at most \(2(r-1)\) goods from \(S^{\prime}_{r-1}\setminus S^{\prime}_{0}\) in the allocation \((A_{1},\ldots,A_{n})\) returned by Round-\(\text{Robin}(\succ)\), i.e., \(\left|A_{j}\cap(S^{\prime}_{r-1}\setminus S^{\prime}_{0})\right|\leq 2(r-1)\)._ Proof.: Fix an \(r\in[k]\) and a \(j\in N\). Consider the end of step \((r-1)n+i-1\) of Round-Robin(\(\succ\)), i.e., right before an \(r\)-th good is allocated to agent \(i\). 
Ignoring all the goods allocated before \(i\) got her first good, agent \(j\) has received exactly \(r-1\) goods up to this point. As a result, the number of goods allocated to \(j\) from \(S^{\prime}_{r-1}\setminus S^{\prime}_{0}\) at this point is at most \(r-1\). At the same time, the number of goods from \(S^{\prime}_{r-1}\setminus S^{\prime}_{0}\) that might end up in \(A_{j}\) in any future steps of Round-Robin(\(\succ\)) are at most as many as the goods from \(S^{\prime}_{r-1}\) that are still unallocated at the end of step \((r-1)n+i-1\). The latter, by Lemma 4.6, are also at most \(r-1\). From these two observations, we have that the final bundle \(A_{j}\) of agent \(j\) may contain at most \(2(r-1)\) goods from \(S^{\prime}_{r-1}\setminus S^{\prime}_{0}\). With Lemma 4.7 at hand, we are now ready to prove Theorems 4.4 and 4.5; Proof of Theorem 4.4.: We, of course, adopt the notation that has been used throughout this section, focusing on an arbitrary agent \(i\in N\) and assuming that her deviation \(>^{\prime}_{i}\) has been the output of Algorithm 4 with input \(\succ_{-i}\) and \(v_{i}\). In particular, \((A_{1},\ldots,A_{n})\) and \((A^{\prime}_{1},\ldots,A^{\prime}_{n})\) are the allocations returned by Round-Robin(\(\succ\)) and Round-Robin(\(>^{\prime}_{i},\succ_{-i}\)), respectively. Consider another agent \(j\in N\setminus\{i\}\). Let \(A^{\prime}_{i}=\{x_{1},x_{2},\ldots,x_{k}\}\) and \(A_{j}=\{y_{1},y_{2},\ldots,y_{k}\}\), where in both sets goods are indexed in the order in which they were allocated in the run of Round-Robin(\(>^{\prime}_{i},\succ_{-i}\)). For \(A^{\prime}_{i}\), this means that \(x_{r}\) was allocated in round \(r\) for all \(r\in[k]\). For \(A_{j}\), this indexing guarantees that for every \(0\leq\ell<r\leq k-1\), the goods in \(A_{j}\cap(S^{\prime}_{\ell}\setminus S^{\prime}_{r-1})\) all have smaller indices than the goods in \(A_{j}\cap(S^{\prime}_{r}\setminus S^{\prime}_{r-1})\) (where we use the convention that \(S^{\prime}_{-1}=\emptyset\)). We further partition \(A_{j}\setminus\{y_{1}\}\) to \(Y_{1}=\{y^{1}_{1},\ldots,y^{1}_{\tau_{1}}\}\) and \(Y_{2}=\{y^{2}_{1},\ldots,y^{2}_{\tau_{2}}\}\) which contain the goods of \(A_{j}\setminus\{y_{1}\}\) with odd and even indices, respectively, and are both renamed according to Algorithm 3 with inputs \(A^{\prime}_{i}\), \(Y_{1}\), \(v_{i}\), and \(A^{\prime}_{i}\), \(Y_{2}\), \(v_{i}\), respectively. Clearly, \(\tau_{1}=\lfloor\frac{k-1}{2}\rfloor\) and \(\tau_{2}=\lceil\frac{k-1}{2}\rceil\). By Lemma 4.7, we have that \(A_{j}\) contains at most \(2(r-1)\) goods from \(S^{\prime}_{r-1}\setminus S^{\prime}_{0}\), for any \(r\in[k]\). The original ordering \(y_{1},y_{2},\ldots\) of the goods in \(A_{j}\) and the way \(A_{j}\setminus\{y_{1}\}\) was partitioned into \(Y_{1}\) and \(Y_{2}\) imply that \(\left\|Y_{1}\cap(S^{\prime}_{r-1}\setminus S^{\prime}_{0})\right|-|Y_{2}\cap (S^{\prime}_{r-1}\setminus S^{\prime}_{0})|\right\|\leq 1\) and, thus, each of \(Y_{1}\) and \(Y_{2}\) contains at most \(r-1\) goods from \(S^{\prime}_{r-1}\setminus S^{\prime}_{0}\). We also claim that, for \(\ell\in\{1,2\}\) and \(r\in[\tau_{\ell}]\), we have \[v_{i}(x_{r}\,|\,\{x_{1},\ldots,x_{r-1}\})\geq v_{i}(y^{\ell}_{r}\,|\,\{x_{1}, \ldots,x_{r-1}\})\,. \tag{2}\] Suppose not. That is, there are \(\ell\in\{1,2\}\) and \(r\in[\tau_{\ell}]\) so that (2) is violated. 
Note that, by the way Algorithm 3 ordered the elements of \(Y_{1}\) and \(Y_{2}\), this implies \[v_{i}(x_{r}\,|\,\{x_{1},\ldots,x_{r-1}\})<v_{i}(y^{\ell}_{r}\,|\,\{x_{1}, \ldots,x_{r-1}\})\leq v_{i}(y^{\ell}_{r}\,|\,\{x_{1},\ldots,x_{r-1}\})\,,\] for all \(t\in[r]\). Since \(x_{r}\) was the good allocated to agent \(i\) at step \((r-1)n+i\) of Round-Robin(\(>^{\prime}_{i},\succ_{-i}\)), \(x_{r}\) had maximum marginal value for \(i\) with respect to \(\{x_{1},\ldots,x_{r-1}\}\) among the available goods. Thus, none of the goods \(y^{\ell}_{1},\ldots,y^{\ell}_{r}\) were available at the time, i.e., \(y^{\ell}_{1},\ldots,y^{\ell}_{r}\in S^{\prime}_{r-1}\). Given that the only good of \(A_{j}\) that could possibly be in \(S^{\prime}_{0}=S_{0}\) was \(y_{1}\) which is not in \(Y_{1}\cup Y_{2}\). Therefore, \(y^{\ell}_{1},\ldots,y^{\ell}_{r}\in S^{\prime}_{r-1}\setminus S^{\prime}_{0}\), which contradicts the fact that \(|Y_{\ell}\cap(S^{\prime}_{r-1}\setminus S^{\prime}_{0})|\leq r-1\). We conclude that (2) holds for all \(\ell\in\{1,2\}\) and \(r\in[\tau_{\ell}]\). We are now ready to apply Theorem 2.1 to bound the value of \(A_{j}\setminus\{y_{1}\}\). We have \[v_{i}(A_{j}\setminus\{y_{1}\}) \leq v_{i}(A^{\prime}_{i})+\sum_{g\in(A_{j}\setminus\{y_{1}\}) \setminus A^{\prime}_{i}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \[=v_{i}(A_{i}^{\prime})+\sum_{\ell=1}^{n_{1}}v(y_{\ell}^{1}\,|\,A_{i}^ {\prime})+\sum_{\ell=1}^{n_{2}}v(y_{\ell}^{2}\,|\,A_{i}^{\prime})\] \[\leq v_{i}(A_{i}^{\prime})+\sum_{\ell=1}^{n_{1}}v(y_{\ell}^{1}\,| \,\{x_{1},\ldots,x_{\ell-1}\})+\sum_{\ell=1}^{n_{2}}v(y_{\ell}^{2}\,|\,\{x_{1}, \ldots,x_{\ell-1}\})\] \[\leq v_{i}(A_{i}^{\prime})+\sum_{\ell=1}^{n_{1}}v(x_{\ell}\,|\,\{ x_{1},\ldots,x_{\ell-1}\})+\sum_{\ell=1}^{n_{2}}v(x_{\ell}\,|\,\{x_{1},\ldots,x_{ \ell-1}\})\] \[\leq v_{i}(A_{i}^{\prime})+2\cdot\sum_{\ell=1}^{k}v(x_{\ell}\,|\, \{x_{1},x_{2},\ldots,x_{\ell-1}\})\] \[=v_{i}(A_{i}^{\prime})+2\cdot v_{i}(A_{i}^{\prime})\] \[\leq\frac{3}{\alpha}\cdot v_{i}(A_{i})\,,\] where the first inequality follows directly from Theorem 2.1, the second one follows from submodularity, the third inequality holds because of (2), the fourth one follows from the monotonicity of \(v_{i}\), and the last inequality follows from the fact that \(\succ\) is a \(\alpha\)-approximate PNE and thus \(v_{i}(A_{i})\geq\alpha\cdot v_{i}(A_{i}^{\prime})\). We conclude that \((A_{1},A_{2},\ldots,A_{n})\) is \(\frac{\alpha}{3}\)-EF1 with respect to the underlying valuation functions. Proof of Theorem 4.5.: Note that in the proof of Theorem 4.2, the submodularity of \(v_{i}\) is not used until the final bounding of \(A_{j}\setminus\{y_{1}\}\). Up to that point, the proof here is essentially identical (the only difference being that now \(\succ_{i}^{\prime}\) is a strict version of \(i\)'s true preference ranking \(\succ_{i}^{*}\) but this does not change any of the arguments). 
In particular, for \(A_{i}^{\prime}=\{x_{1},x_{2},\ldots,x_{k}\}\), \(A_{j}=\{y_{1},y_{2},\ldots,y_{k}\}\), \(Y_{1}=\{y_{1}^{1},\ldots,y_{\tau_{1}}^{1}\}\), and \(Y_{2}=\{y_{1}^{2},\ldots,y_{\tau_{2}}^{2}\}\), like in the proof of Theorem 4.2, we still have (2), for any \(\ell\in\{1,2\}\) and \(r\in[\tau_{\ell}]\), i.e., \(v_{i}(x_{r}\,|\,\{x_{1},\ldots,x_{r-1}\})\geq v_{i}(y_{r}^{\ell}\,|\,\{x_{1}, \ldots,x_{r-1}\})\). Notice that (2) can be rewritten as \(v_{i}(\{x_{1},\ldots,x_{r-1},x_{r}\})\geq v_{i}(\{x_{1},\ldots,x_{r-1},y_{r}^{ \ell}\})\). Since \(v_{1}\) is cancelable, the latter implies that \(v_{i}(x_{r})\geq v_{i}(y_{r}^{\ell})\), for \(\ell\in\{1,2\}\) and \(r\in[\tau_{\ell}]\). Now we apply Lemma 3.3 to get \(v_{i}(\{x_{1},x_{2},\ldots,x_{\tau_{\ell}}\})\geq v_{i}(Y_{\ell})\), for \(\ell\in\{1,2\}\). At this point, we can easily bound the value of \(A_{j}\setminus\{y_{1}\}\). We have \[v_{i}(A_{j}\setminus\{y_{1}\}) =v_{i}(Y_{1}\cup Y_{2})\] \[\leq v_{i}(Y_{1})+v_{i}(Y_{2})\] \[\leq v_{i}(\{x_{1},x_{2},\ldots,x_{\tau_{1}}\})+v_{i}(\{x_{1},x_{2 },\ldots,x_{\tau_{2}}\})\] \[\leq v_{i}(A_{i}^{\prime})+v_{i}(A_{i}^{\prime})\] \[\leq\frac{2}{\alpha}\cdot v_{i}(A_{i})\,,\] where the first inequality follows from subadditivity, the third one follows from the monotonicity of \(v_{i}\), and the last inequality follows from the fact that \(\succ\) is a \(\alpha\)-approximate PNE. We conclude that \((A_{1},\ldots,A_{n})\) is \(\frac{\alpha}{2}\)-EF1 with respect to the underlying valuation functions. The \(\alpha/(2-\alpha)\) upper bound of Theorem 4.3 for the additive case applies to both submodular and subadditive cancelable valuation functions, leaving a very small gap for the latter. For the submodular case, we improve this upper bound to \(\alpha/2\). **Proposition 4.8**.: _Let \(\alpha,\varepsilon\in(0,1]\). For instances with submodular valuation functions \(\{v_{i}\}_{i\in N}\), a \(\alpha\)-approximate PNE of the Round-Robin mechanism may not be \((\frac{\alpha}{2}+\varepsilon)\)-EF1 with respect to \(\{v_{i}\}_{i\in N}\)._ Proof.: We construct an instance with four agents and nine goods, i.e., \(N=[4]\) and \(M=\{g_{1},g_{2},\ldots,g_{9}\}\). Let \(1\gg\epsilon_{1}>\epsilon_{2}>\epsilon_{3}>\epsilon_{4}>\epsilon_{5}>\epsilon_{6}\) and \(\beta>(1+\epsilon_{4})/2\). The first three agents have additive valuation functions, defined as follows: \[v_{1}(g_{j})=\begin{cases}5,&\text{if }j=1\\ \epsilon_{5},&\text{if }j=2\\ \epsilon_{6},&\text{if }j=3\\ 1,&\text{if }j=4\\ 2,&\text{if }j=5\\ \epsilon_{1},&\text{if }j=6\\ \epsilon_{2},&\text{if }j=7\\ \epsilon_{3},&\text{if }j=8\\ \epsilon_{4},&\text{if }j=9\end{cases}\quad v_{2}(g_{j})=\begin{cases} \epsilon_{5},&\text{if }j=1\\ 5,&\text{if }j=2\\ 1,&\text{if }j=4\\ \epsilon_{1},&\text{if }j=5\\ \epsilon_{2},&\text{if }j=6\\ 2,&\text{if }j=7\\ \epsilon_{3},&\text{if }j=8\\ \epsilon_{4},&\text{if }j=9\end{cases}\quad v_{3}(g_{j})=\begin{cases} \epsilon_{5},&\text{if }j=1\\ \epsilon_{6},&\text{if }j=2\\ 5,&\text{if }j=3\\ \epsilon_{1},&\text{if }j=4\\ \epsilon_{2},&\text{if }j=5\\ 2,&\text{if }j=6\\ \epsilon_{3},&\text{if }j=7\\ \epsilon_{4},&\text{if }j=8\\ 1,&\text{if }j=9.\end{cases}\] Agent 4 has an OXS (and, thus, submodular) valuation function that is defined by the maximum weight matchings in the bipartite graph below. 
Now consider a bidding profile where the first three agents bid truthfully (i.e., they bid the strict preference rankings \(>_{1}^{*},>_{2}^{*},>_{3}^{*}\) which are consistent with \(v_{1},v_{2},v_{3}\)), while the fourth agent bids the preference ranking \(>_{4}\): \(g_{3}>_{4}g_{6}>_{4}g_{8}>_{4}g_{1}>_{4}g_{2}>_{4}g_{4}>_{4}g_{5}>_{4}g_{7}>_{4}g_{9}\). It is easy to confirm that the produced allocation is \((A_{1},A_{2},A_{3},A_{4})=(\{g_{1},g_{4},g_{5}\},\{g_{2},g_{7}\},\{g_{3},g_{9}\},\{g_{6},g_{8}\})\). We first examine the first three agents. Agents 1 and 2 get their most valuable goods in this allocation, which implies that there is no profitable deviation for them. For the same reason they are also envy-free towards the other agents. Regarding agent 3, the only bundle that improves her utility is \(\{g_{3},g_{6}\}\). However, there is no bid she can report that gets her both of these goods. The reason is that if she does not get good \(g_{3}\) in round 1 of Mechanism 1 (by not declaring it as her best good among the available ones), then \(g_{3}\) is lost to agent 4. If, on the other hand, she gets good \(g_{3}\) in round 1 (by declaring it as her best good among the available ones), then good \(g_{6}\) is lost to agent 4. Therefore, there is no profitable deviation for her. Finally, it is easy to see that she is also envy-free towards the other agents. Moving to agent 4, we have that \[v_{4}(A_{i})=\begin{cases}v_{4}(g_{1})+4\beta-\varepsilon_{4},&\text{if }i=1\\ v_{4}(g_{2})+1-\varepsilon_{3},&\text{if }i=2\\ v_{4}(g_{3})+\varepsilon_{2},&\text{if }i=3\\ 1+\varepsilon_{1},&\text{if }i=4,\end{cases}\] where \(g_{1},g_{2},g_{3}\) are the most valuable goods from sets \(A_{1},A_{2},A_{3}\), respectively, according to agent 4. Therefore, \(v_{4}(A_{1}\setminus\{g_{1}\})>v_{4}(A_{2}\setminus\{g_{2}\})>v_{4}(A_{3}\setminus\{g_{3}\})\), and by comparing \(v_{4}(A_{4})\) with \(v_{4}(A_{1}\setminus\{g_{1}\})\) we get that agent 4 is \(\frac{1+\varepsilon_{1}}{4\beta-\varepsilon_{4}}\)-EF1 towards agent 1. The only thing that remains is to explore the possible deviations of agent 4. Initially, notice that regardless of what agent 4 declares, she cannot get goods \(g_{1},g_{2},g_{3}\) as these are taken in round 1 by the agents that precede her. With that in mind, we will examine what is the best attainable value through deviating, based on what she gets in round 1. Take note that she can get any good from \(\{g_{4},g_{5},\ldots,g_{9}\}\) in round 1 as these are all available when her turn comes: * **Agent \(4\) gets good \(g_{4}\) in round 1**. Based on the reported preferences \(>_{1}^{*}\), \(>_{2}^{*}\), \(>_{3}^{*}\) of the other agents, in round 2 we have the following: Good \(g_{5}\) is lost to agent 1, good \(g_{7}\) is lost to agent 2, and good \(g_{6}\) to agent 3. Therefore, only goods \(g_{8}\) and \(g_{9}\) remain available for agent 4, and she can get only one of them. Thus, the maximum attainable value for her is \(2\beta+\varepsilon_{1}\). * **Agent \(4\) gets good \(g_{5}\) in round 1**. In that case, based on the declaration of the rest of the agents, in round 2 we have the following: Good \(g_{4}\) is lost to agent 1, good \(g_{7}\) is lost to agent 2, and good \(g_{6}\) to agent 3. Therefore, only goods \(g_{8}\) and \(g_{9}\) remain available for agent 4, and once more she can get only one of them. Thus, the maximum attainable value for her is \(2\beta-\varepsilon_{4}+\varepsilon_{1}\). * **Agent \(4\) gets good \(g_{6}\) in round 1**. 
Based on the reported preferences \(>_{1}^{*}\), \(>_{2}^{*}\), \(>_{3}^{*}\) of the other agents, in round 2 we have the following: Good \(g_{5}\) is lost to agent 1, good \(g_{7}\) is lost to agent 2, and good \(g_{9}\) to agent 3. Therefore, only goods \(g_{4}\) and \(g_{8}\) remain available for agent 4. Now observe that \(v_{4}(g_{4},g_{6})=2\beta\) (as this is the value of the maximum matching), while \(v_{4}(g_{8},g_{6})=1+\varepsilon_{1}\). Thus, the maximum attainable value for her is \(2\beta\). * **Agent \(4\) gets good \(g_{7}\) in round 1**. Based on the reported preferences \(>_{1}^{*}\), \(>_{2}^{*}\), \(>_{3}^{*}\) of the other agents, in round 2 we have the following: Good \(g_{5}\) is lost to agent 1, good \(g_{4}\) is lost to agent 2, and good \(g_{6}\) to agent 3. Therefore, only goods \(g_{8}\) and \(g_{9}\) remain available for agent 4, and once more she can get only one of them. Thus, the maximum attainable value for her is \(1-\varepsilon_{3}+\varepsilon_{1}\). * **Agent \(4\) gets good \(g_{8}\) in round 1**. Based on the reported preferences \(>_{1}^{*}\), \(>_{2}^{*}\), \(>_{3}^{*}\) of the other agents, in round 2 we have the following: Good \(g_{5}\) is lost to agent 1, good \(g_{7}\) is lost to agent 2, and good \(g_{6}\) to agent 3. Therefore, only goods \(g_{4}\) and \(g_{9}\) remain available for agent 4, and once more she can get only one of them. Thus, the maximum attainable value for her is \(2\beta+\varepsilon_{1}\). * **Agent \(4\) gets good \(g_{9}\) in round 1**. In that case, based on the declaration of the rest of the agents, in round 2 we have the following: Good \(g_{5}\) is lost to agent 1, good \(g_{7}\) is lost to agent 2, and good \(g_{6}\) to agent 3. Therefore, only goods \(g_{4}\) and \(g_{8}\) remain available for agent 4, and once more she can get only one of them. Thus, the maximum attainable value for her is \(2\beta+\varepsilon_{2}\). From the above discussion we get that the maximum value that agent 4 can attain through a deviation is \(2\cdot\beta+\varepsilon_{1}\). At the same time \(v_{4}(A_{4})=1+\varepsilon_{1}\). By setting \(\alpha=\frac{1+\varepsilon_{1}}{2\cdot\beta+\varepsilon_{1}}\) we trivially have that \((>_{1}^{*},>_{2}^{*},>_{3}^{*},>_{4})\) is an \(\alpha\)-approximate PNE. On the other hand, for a given \(\varepsilon>0\), we have that \(\frac{\alpha}{2}+\varepsilon=\frac{1+\varepsilon_{1}}{2(2\beta+\varepsilon_{1})}+\varepsilon\) is strictly larger than \(\frac{1+\varepsilon_{1}}{4\beta-\varepsilon_{4}}\) for sufficiently small \(\varepsilon_{1}\). That is, there is a choice of \(\varepsilon_{1},\ldots,\varepsilon_{6}\) so that the \(\alpha\)-approximate PNE \((>_{1}^{*},>_{2}^{*},>_{3}^{*},>_{4})\) is not \((\frac{\alpha}{2}+\varepsilon)\)-EF1. ## 5 Discussion and Future Directions In this work we studied the existence and fairness guarantees of the approximate pure Nash equilibria of the Round-Robin mechanism for agents with cancelable and submodular valuation functions. In both cases, we generalized the surprising connection between the stable states of the mechanism and its fairness properties, a connection that was only known for exact equilibria and additive valuation functions. For the function classes considered, we provide tight or almost tight bounds, thus giving a complete picture of the strengths and the limitations of the Round-Robin mechanism for these scenarios. There are several interesting related directions, some of which we discuss below. 
An obvious first direction is to explore function classes beyond the ones studied here, with XOS or subadditive functions being prominent candidates. Since our results heavily rely on the properties of cancelable and submodular functions, it is likely that different approaches are needed for this endeavour. As we mention in the introduction, a second interesting direction, related to this one, is the study of the stability and fairness properties of variants of the Round-Robin mechanism that allow the agents to be more expressive. Analyzing mechanisms that take value oracles as input seems to be highly non-trivial, and although some of our results might transfer to this setting, we suspect that, in general, strong impossibility results hold regarding the fairness guarantees of approximate PNE. Finally, although here we focused on Round-Robin and EF1, most fair division algorithms have not been considered in the strategic setting. One promising such algorithm, which is both fundamental in a number of variants of the problem and simple enough, is the Envy-Cycle-Elimination algorithm of Lipton et al. [28], which is known to compute EF1 allocations for general non-decreasing valuation functions. An appealing alternative here is studying the existence of equilibria of approximation algorithms for MMS allocations. An important advantage in this case is that once the existence of an approximate PNE is shown, the corresponding MMS guarantee comes for free (see also the related discussion in Remark 2.9 of Amanatidis et al. [5]).
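As a sanity check on the construction in the proof of Proposition 4.8 above, the following minimal Python sketch (not part of the original paper) simulates the Round-Robin mechanism over reported ordinal rankings and reproduces the stated allocation. The rankings of agents 1-3 are the strict orders induced by their additive values (agent 2's ranking of \(g_{3}\) is not specified above and is placed last here; its position does not affect the outcome), and agent 4's ranking is the reported \(>_{4}\).

```python
# Minimal sketch (not from the paper): Round-Robin over reported rankings.
# In each round, agents 1,2,3,4 in turn take their highest-ranked available good.

def round_robin(rankings):
    """rankings[i] is agent i's reported ranking, best good first."""
    available = {g for ranking in rankings for g in ranking}
    bundles = [[] for _ in rankings]
    while available:
        for i, ranking in enumerate(rankings):
            choice = next((g for g in ranking if g in available), None)
            if choice is not None:
                bundles[i].append(choice)
                available.remove(choice)
    return bundles

r1 = ["g1", "g5", "g4", "g6", "g7", "g8", "g9", "g2", "g3"]  # induced by v_1
r2 = ["g2", "g7", "g4", "g5", "g6", "g8", "g9", "g1", "g3"]  # induced by v_2
r3 = ["g3", "g6", "g9", "g4", "g5", "g7", "g8", "g1", "g2"]  # induced by v_3
r4 = ["g3", "g6", "g8", "g1", "g2", "g4", "g5", "g7", "g9"]  # reported >_4

print(round_robin([r1, r2, r3, r4]))
# [['g1', 'g5', 'g4'], ['g2', 'g7'], ['g3', 'g9'], ['g6', 'g8']]
```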
2309.10579
TELESIM: A Modular and Plug-and-Play Framework for Robotic Arm Teleoperation using a Digital Twin
We present TELESIM, a modular and plug-and-play framework for direct teleoperation of a robotic arm using a digital twin as the interface between the user and the robotic system. We tested TELESIM by performing a user survey with 37 participants on two different robots using two different control modalities: a virtual reality controller and a finger mapping hardware controller using different grasping systems. Users were asked to teleoperate the robot to pick and place 3 cubes in a tower and to repeat this task as many times as possible in 10 minutes, with only 5 minutes of training beforehand. Our experimental results show that most users were able to succeed by building at least a tower of 3 cubes regardless of the control modality or robot used, demonstrating the user-friendliness of TELESIM.
Florent P Audonnet, Jonathan Grizou, Andrew Hamilton, Gerardo Aragon-Camarasa
2023-09-19T12:38:28Z
http://arxiv.org/abs/2309.10579v2
# TELESIM: A Modular and Plug-and-Play Framework for Robotic Arm Teleoperation using a Digital Twin ###### Abstract We present TELESIM, a modular and plug-and-play framework for direct teleoperation of a robotic arm using a digital twin as the interface between the user and the robotic system. We tested TELESIM by performing a user survey with 37 participants on two different robots using two different control modalities: a virtual reality controller and a finger mapping hardware controller using different grasping systems. Users were asked to teleoperate the robot to pick and place 3 cubes in a tower and to repeat this task as many times as possible in 10 minutes, with only 5 minutes of training beforehand. Our experimental results show that most users were able to succeed by building at least a tower of 3 cubes regardless of the control modality or robot used, demonstrating the user-friendliness of TELESIM. ## I Introduction Robot teleoperation is difficult for non-experts [1, 2]. Recently, the ANA Avatar XPRIZE Challenge [3] set a series of challenging tasks to test the limits of teleoperation. The best systems that completed the challenges were rewarded with a prize pool of $10 million. At its core, the challenge involves the direct teleoperation of a robot with minimal latency and the capacity to experience the environment from the robot's perspective. However, direct teleoperation still places a heavy physical and mental strain on the user, as Pettinger _et al._[4] reported that a user performing a pick and place task was faster and had fewer errors while reporting the task was more accessible when shared autonomy systems were enabled. Hence, researchers from HCI, medicine, robotics, and others, have explored different means of control for teleoperation to address these limitations. While most research efforts focus on a physical control device such as a Virtual Reality Controller [5, 6], a Joystick [7, 8], or phone [9], others decided to use cameras to track the whole body [10, 11], or just the gaze [12]. There has yet to be an overall consensus on the most appropriate type of control for direct teleoperation with specific applications requiring specific implementations. In this paper, we develop a modular and plug-and-play direct teleoperation framework called TELESIM that non-experts can use without specialised training using off-the-shelf Virtual Reality (VR) technologies. Specifically, TELESIM objective is to allow for the direct teleoperation of any robotic arm using a digital twin as the interface between the user and the robotic system. We then demonstrate TELESIM's user-friendliness using a user study and the users' success rate at completing the task using two different types of control and grasping systems. Specifically, we use a virtual reality controller and a finger mapping hardware controller mounted on two robotic manipulators using different grasping systems. We compare their performance to study whether additional degrees of freedom in the control scheme enhance performance while performing a simple task. Our contributions are: * A modular and plug-and-play framework for teleoperation for any robotic arm using a digital twin. * An experimental validation for testing the framework's performance through a simple non-expert task. * A rigorous evaluation involving 37 participants demonstrating the user-friendliness of TELESIM. ## II Background Direct teleoperation is considered a stepping stone for shared autonomy [15]. This is because direct teleoperation Fig. 
1: Our modular and plug-and-play TELESIM framework is being used to control a UR3 Robot (top-left) and a Baxter Robot (top-right) and its digital twin (bottom-right). The robot’s digital twins can be seen underneath their respective real robots causes significant cognitive strain on the user [4], and the user may not be capable of millimetre-scale adjustment to the position of the robot end effector. While in medicine, the user's movement is scaled down to allow for more precision [16, 6], it may not be suitable for all types of manipulation tasks as some require significant arm movements to move an object from one place to another. Hence, researchers have explored different control methods to reduce the cognitive strain while giving the highest amount of precision. For instance, low degree-of-freedom control methods such as a keyboard [17], a joystick [18, 7], a touchscreen [19], or a gamepad[20] have brought an improved level of control [17] to address the user's mental strain. However, with the advent of VR technologies, researchers have investigated whether these technologies are appropriate for direct teleoperation. For example, they have proposed using a VR controller such as [21, 5] or a phone [22]. While others have investigated the use of motion mapping of the user's body [10, 23] or only gaze control [12]. However, for the latter, the added mobility generates a higher cognitive load [4], and mapping motions to robot movements is challenging due to differences in kinematics chains between robot arms and users [24]. Recently, Gottardi _et al._[25] have investigated combining multiple control systems, such as a VR controller and a tracking band on the upper arm, to track the user's movements. Rakita _et al.[6]_ also compared various control methods; a stylus, a touchscreen, and a VR controller. These were then integrated into a custom inverse kinematics solver that adjusted the tolerance level when matching the end-effector pose to that of the user. The authors showed that users preferred the VR controller as they were more successful at completing pick-and-place tasks, such as picking up bottles or plates. To mitigate the limitation of direct teleoperation, researchers have focused on how much shared autonomy improved the success of a given task. For this, research works have aimed at comparing direct teleoperation with respect to an assisted version to analyze the impact of shared autonomy on task success. For example, Chen _et al._[10] created a system in which the operator, using a joystick, manipulated the robot's end effector to an object, and then the robot could either grasp the object autonomously or assist the user in fine-tuning the robot position for a more optimal grasp. Later, [11, 4, 25] built on [10] where the user teleoperated the robots directly to a planned position but allowed the robot to perform the grasp automatically or, in the case of [4], turn a valve handle. Lin _et al._ and Gottardi _et al._ conducted a user survey and confirmed that users preferred the shared autonomy approach, as it reduced complexity and mental strain. Furthermore, [25] observed results similar to [8], who hypothesised that users preferred to give up control if it meant increasing the task completion rate. However, Javdani _et al._[8, 26] have falsified this hypothesis using a system similar to [11, 4, 25]. 
The authors concluded that users preferred to lose control if it meant an increase in a task's success rate only for a more complex task, while, for simple tasks, users still preferred to have more control. These works have focused on one robotic system and conducted their experimental survey on a small user base (between 8 and 12 participants). Furthermore, they focused on different autonomy levels and not on different control methods. Fig. 2: Overview of the experimental setup. The Steam Index VR Headset [13] is marked as (1) on the far left, which acts as the world’s origin. The Baxter robot on the left (2) is controlled by the Steam Index controller (5). In front of it, the UR3 is on the right (3), with the Yale OpenHand T42 gripper [14], controlled by the Senseglove and HTC Vive tracker (4) on the left side of the brown table. Additionally, in the upper right corner (7), a bird eye view of the task, which consists of 3 cubes in a triangle pattern (described in Section IV), while on the brown table, the cubes are arranged in the goal configuration (6) Although this paper focuses on TELESIM as a framework, our evaluation also addresses two main limitations of previous work: (1) researchers have only used one robot per study, and (2) most user studies consider a small user base, which does not represent a statistically significant sample. It also addresses a gap discussed by Rea & Seo [1], which states that there needs to be more non-expert evaluation of robotic teleoperation for general tasks such as picking and placing common objects. Therefore, to advance the state-of-the-art in robotic teleoperation, we investigate the performance of different control modalities for direct teleoperation using a VR controller and finger mapping. Similarly, we evaluate our framework on 2 different robots, a Rethink Robotics Baxter and a Universal Robotic 3, and ask 37 non-expert participants to carry out a simple pick-and-place task. Additionally, by using two different control modalities, our goal is to bridge the gap between VR control methods[21, 5] and complete body mapping [10, 11] by investigating the performance of hand and finger tracking through a SenseGlove. gripper, as without it, it leads to breakage of the controlling string or exceeding the amount of resistance allowed by the motor. Since we were interested in developing a simple task that non-experts can carry out, we decided not to implement haptic feedback as it would give the users an advantage of sensing whether an object is grasped. Thus, this will result in an unfair comparison between the VR controller and the SenseGlove. Therefore, we leave haptic feedback for future work. The VR headset (1 in Fig. 2) acts as the origin of both robots, giving the user an easy reference point for teleoperation. The SteamVR outputs the controller position in a 3D space with respect to the headset. The origin is thus transformed into the user's resting hand position when initiated. This method of tracking the position is preferred by multiple researchers[4, 30], as well as many of the participants in the ANA Avatar XPRIZE Challenge [5, 31]. The Senseglove is also used by the winning team [31], but to control a Schunk robotic hand that replicates a human hand. ### _Digital Twin_ The position in the 3D space from the SteamVR is transmitted through ROS2 to a full digital twin created in NVIDIA Isaac Sim[32]. This flow of information can be seen in Fig. 3 point 2. 
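To make this first hop of the pipeline concrete, here is a minimal rclpy sketch (not the TELESIM source; the topic name, update rate and the stubbed tracker read are assumptions) of a node that forwards the tracked controller pose over ROS2 for the digital twin to consume.

```python
# Minimal sketch of the SteamVR -> ROS2 pose-forwarding step. The topic name,
# rate and read_tracker() stub are assumptions, not TELESIM's actual interface.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped


class ControllerPosePublisher(Node):
    def __init__(self):
        super().__init__('controller_pose_publisher')
        self.pub = self.create_publisher(PoseStamped, '/teleop/controller_pose', 10)
        self.timer = self.create_timer(0.02, self.tick)  # ~50 Hz

    def read_tracker(self):
        # Placeholder: position (x, y, z) and quaternion (x, y, z, w) of the
        # controller relative to the headset, which acts as the world origin.
        return (0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0)

    def tick(self):
        pos, quat = self.read_tracker()
        msg = PoseStamped()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'vr_headset'
        msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = pos
        (msg.pose.orientation.x, msg.pose.orientation.y,
         msg.pose.orientation.z, msg.pose.orientation.w) = quat
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(ControllerPosePublisher())


if __name__ == '__main__':
    main()
```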
Isaac Sim, a recent ray-tracing simulation software, is used to calculate the robot motion plan using RMPFlow [33], which is a motion generation based on Riemannian Motion Policies [34] (Fig. 3 point 3). Isaac Sim was chosen as it is the most realistic simulation software compatible with ROS2 from our simulation benchmark [35]. Isaac Sim takes in a URDF of a robot for visualisation along with a robot description file, describing the joints that can be actuated by the motion planner and the robot's collision as spheres, as Isaac Sim uses them for collision checking. This file can be created with an included extension (Lula Robot Description Editor[36]). The ability to add new robots is a significant part of what makes TELESIM modular, along with ROS2. We decided to use ROS2 as it is the most used framework for controlling a variety of robots, making our framework plug-and-play. Views of the UR3 and the Baxter robot from Isaac Sim are shown in 1. The grey square in between the gripper (Fig. 3 point 4) is the point that is controlled by the teleoperation system and indicates where Isaac Sim must find a path. The fact that Isaac Sim acts as a complete digital twin allows the robot to avoid collision with the world around it and damage itself. Additionally, both robots have systems that allow them to work alongside humans. Our system is capable, with minimal configuration, of handling the restrictions placed by their need to be safe around humans. Finally, Isaac Sim transmits the position of each joint to the real robot through ROS2 and passes this information to the robotic system, as shown in Fig. 3 points 5 and 6. ### _Robot Control_ ROS control [37], provides a wrapper that facilitates the interface of different robots. However, each robot needs to be adapted to work with specific hardware. For this paper, we implemented the Universal Robot ROS2 Control package and used it to transfer the joint states from Isaac Sim to the robots (Fig. 3 point 5). Specifically, for the UR3, the size of the gripper and the safety regulations of the laboratory reduced the available workspace for the robot. We handled these limitations by adding safety planes and limitations on the range of motion of the real robot. For Baxter, we used a ROS1 to ROS2 bidirectional bridge, as Baxter only works with ROS1 internally. The pipeline is thus converting Isaac Sim's joint states into Baxter messages in ROS2 and then they are sent through the bridge as ROS1 messages to the robot. The robot outputs its current state, which is converted to ROS2 through the same bridge. Our framework introduces a slight amount of lag (500 ms) between the user movement and the robot movement, partly due to the path planning step, the time taken by the actuator to move the robot arm to the desired position and mostly due to safety restrictions that caps the maximum speed of our robots. The faster the user moves from one position to another, the higher the delay as the arm tries to catch up. This is easily accounted for by the user by making small and slow movements, as confirmed by Schwarz and Behnkle [38]. Although we acknowledge lag is present in our system, Schwarz and Behnkle [38] found that minor delays do not impact performance, and none of our users mentioned lag as an issue in our experiments (see Section V). ## IV Experiments ### _Methodology_ As described in Section III-A, the user controls a robot by teleoperating it using either a VR controller or the SenseGlove. 
For this experiment, we consider a simple task where the user has to pick up 3 cubes on a table. The user then needs to bring them individually to the centre of the table to complete a tower. Once the tower is completed, the cubes are returned to their original position, and the user is asked to repeat the task as many times as possible within 10 minutes. The task definition consists of 3 cubes positioned at each vertex of an isosceles triangle and at similar distances from the robots' base on each robot's table (Fig. 3 point 7). We have placed markers of the vertices of this isosceles triangle on each robot's operating table for repeatability of the task between attempts and among users. The front cube at the top of the triangle is positioned such that it is at the maximum reach limit of each robot's overhead rotation. This means that the end-effector's z-axis is perpendicular to the table, which makes it difficult for the user to pick up the cube from overhead. Thus, the user has to add some rotation in the x- or y-axis of the gripper to pick the cube successfully. The right cube is the furthest away from the user in both robots and adds a degree of difficulty due to the user's viewpoint, but still within reach. Finally, the left cube is placed such that the user has to move their body. That is, for Baxter, the location where the user needs to pick up the left cube is approximately at the waist of the user, while for the UR3, the location is on the other side of (farthest from) the headset. The cube positions were chosen to let users be spatially aware of their position and its relationship to the robot. Finally, the location where the users need to stack the three cubes is easily accessible by the robot, and this position is marked by a red square in red tape, 2 cm larger than the cube. This position can be seen in Fig. 2 point 6, with the cubes stacked as required. In order to teleoperate using the Baxter robot, the users need to stand with their back to the VR headset, while for teleoperating using the UR3, users need to stand with the headset on their right and the Baxter robot behind them. This difference in position is due to the space constraint of the room in which we ran our experiments. The operating room can be seen in Fig. 2, with the headset on the left of the picture shown as point 1. ### _User Survey_ In our experiments, we asked 37 participants (29 male and 8 female) from various backgrounds aged 19 to 51 (mean: 25.32, 1 standard deviation: 6.26) to teleoperate both robots and stack 3 cubes without a monetary reward. Participants reported having, on a 5-point Likert scale (going from "Experienced" with a score of 1 to "No Experience" with a score of 5), a 3.03 mean experience with Virtual Reality with a standard deviation of 1.2. They also reported having a mean experience of 3.24 with a robot with a standard deviation of 1.24 Each participant completed the short questionnaire described above at the beginning of the experiment. After being asked to position their back to the VR headset, an explanation was given on how to control the robot, emphasising that all of their hand movements and rotation will be mapped one-to-one to the robot. They were instructed to try to grasp a cube from both sides. They had 5 minutes to get used to the control without a specific task objective. Most of the participants picked up and placed a cube during this time. 
After 5 minutes, the participants performed the task of stacking the 3 cubes in the given location without any restriction on the cubes' pose and order. Users were asked to stack cubes as many times as possible in 10 minutes. Once a tower has been completed, we reset the cubes to their initial configuration. Users' actions were recorded, such as the time taken for each tower and for individual actions for each pick, place, and drop (i.e. failures). After 10 minutes, users were given the option to take a break while answering the Single-Ease Question (SEQ) [39]. Then, they were asked to repeat the same experiment but with the UR3 robot. SEQ was chosen instead of other metrics such as the System Usability Scale (SUS) [40] as Hodrien and Fernando [39] have argued that it is a good end-of-task metric. ## V Evaluation Fig. 4 shows that \(85\%\) of the participants can build at least one tower in 10 minutes using Baxter and the VR controller. However, there is a steady decline for each of the following towers, with only \(5\%\) of the users able to build 8 towers. This is in direct comparison to the UR3 as shown in Fig. 5, with slightly less than \(50\%\) of the population failing to build one tower and \(5\%\) managing to build 4, half as many towers for Baxter. The box plots in Fig. 4 and 5 show the average and variance of the time taken by users to complete the towers. In particular, the first tower for both robots took most of the task duration because some participants could not build one tower. This time completion trend shows TELESIM's user-friendliness as \(60\%\) of users for Baxter managed with minimal training to complete a full tower, which means 3 different pick-and-place operations in around 2 minutes. Similarly, this can also be observed for the UR3 as \(25\%\) of the users managed to build a tower in 4 minutes. Table I shows the additional statistics collected during the experiment, such as the percentage of times the user dropped a cube that caused the tower to collapse. Table Ia indicates that for the Baxter robot, \(75\%\) of the picking actions Fig. 4: Average Time Taken and Percentage of Population for each Tower Completed for Baxter Fig. 5: Average Time Taken and Percentage of Population for each Tower Completed for UR resulted in a correctly placed cube that did not collapse due to incorrect placement or the user inadvertently moved the robot in the tower's path. Similarly, in Table Ib, \(46\%\) of all the picking actions resulted in a correct place. The difference in the number of towers built, shown in Fig. 5, can be explained by a greater difficulty in picking the cube. Specifically, our results indicate that there is no significant difference in the difficulty of picking a cube (\(P>0.05\)), nor is there a difference in the amount of time that the user collapses a tower while placing a cube (\(P>0.1\)). This lack of difference shows the stability of the teleoperation, as the difference in placing rates and the number of towers can be explained by the difference in control modality and the difference in robots. The collapse rate in both Table Ia and Table Ib is similar and indicates that the type of robot does not influence the difficulty in safely placing a cube in a specific spot. 
However, the difference in drop rate for the UR3 can be related to the limitations of the gripper described in Section III-A, such as the limitation of the grip strength of the closed finger, to prevent the cable from breaking, and the limited range of motions, since we observed that some users let the cube fall while the gripper was closed. However, successful users moved slowly to prevent unnecessary movement, thus reducing the risk of dropping. This limitation is also visible in the placing rate; the placing and dropping rates are complementary, as these are the only two outcomes after picking up a cube. Results of the Single Ease Question asked at the end of each task, in which a higher score means that TELESIM is easy to use, can be seen in Fig. 6. They show that the user was able to detect how well they performed and that their estimate is consistent with the result shown in Fig. 4 and Fig. 5. Specifically, Baxter obtained a mean of 3.32 with a standard deviation of 1.27, while UR3 obtained a mean of 2.19 with a standard deviation of 1.14. Furthermore, Fig. 6 shows that no user gave the maximum score for UR3, while they did for Baxter. Additionally, the UR3 has a sharp decline in score after a SEQ score of 3, while Baxter's is more spread out. ## VI Conclusion and Future Works In this paper, we have investigated the performance of TELESIM by conducting a medium-scale user survey with 37 participants who were asked to build towers of 3 cubes by teleoperating robots. We tested TELESIM's modularity on two different robots with two different control modalities. Our experimental results show that TELESIM is modular, plug-and-play and user-friendly, as not only were we able to deploy it on 2 robots with different modalities, but most users were able to succeed by building at least once a tower of 3 cubes, with only 5 minutes of training, regardless of the control modality or robot used. We thus bridged the gap pointed out by Rea & Seo [1], where they state that there is a lack of non-expert evaluation of robotic teleoperation for general tasks such as picking and placing common objects. TELESIM is available on GitHub1, allowing developers to perform teleoperation on their robots with minimal setup time. Footnote 1: [https://github.com/cvas-ug/telesim_pnp](https://github.com/cvas-ug/telesim_pnp) Our underlying motivation for choosing direct teleoperation in this paper is to establish a baseline for further research on shared autonomy, which could combine human intuition and a high-level overview of a task while giving freedom to the robot to perform, for example, accurate picking and placing objects. Additionally, we plan to remove the constraint of having the VR headset behind the user and allow them to wear the headset to operate either in VR in the digital twin view of Isaac Sim or Augmented Reality by allowing the user to move around the robot and have different viewpoints while manipulating, thus enhancing the precision of the teleoperation. However, the choice of control input is fundamental to success. Future work consists of carrying out a survey using the VR controller and the UR3 to dissociate the robot and control method; as for our current evaluation, we hypothesise they are closely linked. 
TABLE I: Additional Statistics Collected

(a) Baxter

| | Min | Mean ± Std | Max |
|---|---|---|---|
| Placing Rate | 25.00% | 77.42% ± 15.54% | 100.00% |
| Dropping Rate | 3.70% | 23.83% ± 14.08% | 66.67% |
| Collapse Rate | 5.56% | 18.44% ± 11.66% | 57.14% |
| Still in Place Rate | 24.31% | 75.21% ± 15.20% | 95.92% |

(b) UR3

| | Min | Mean ± Std | Max |
|---|---|---|---|
| Placing Rate | 12.50% | 46.29% ± 17.97% | 86.67% |
| Dropping Rate | 13.33% | 53.37% ± 18.63% | 87.50% |
| Collapse Rate | 4.76% | 22.25% ± 12.46% | 50.00% |
| Still in Place Rate | 14.88% | 46.93% ± 16.75% | 84.44% |

Fig. 6: Single Ease Question violin plot for Baxter (orange) and UR3 (blue) (higher number means easier to use)
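For completeness, the per-user rates summarized in Table I can be derived from the recorded pick/place/drop events; the sketch below is illustrative only (the log format is hypothetical, and the denominator for the collapse rate, which the text leaves implicit, is taken to be the number of picks).

```python
# Illustrative sketch (hypothetical log format): per-user rates as in Table I.
def user_rates(events):
    """events: one user's actions, each in {"pick", "place", "drop", "collapse"}."""
    picks = max(events.count("pick"), 1)   # every place/drop follows a pick
    return {
        "placing_rate":  100.0 * events.count("place") / picks,
        "dropping_rate": 100.0 * events.count("drop") / picks,
        "collapse_rate": 100.0 * events.count("collapse") / picks,
    }

print(user_rates(["pick", "place", "pick", "drop", "pick", "place", "collapse"]))
```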
2309.08336
Investigating radial-flow-like effects via pseudorapidity and transverse spherocity dependence of particle production in pp collisions at the LHC
Recent observations of quark-gluon plasma (QGP) like signatures in high multiplicity proton-proton (pp) collisions, have compelled the heavy-ion physics community to re-examine the pp collisions for proper baseline studies. Event-shape-based studies in pp collisions have succeeded to a certain extent in identifying rare events mimicking such heavy-ion-like behaviour. In this manuscript, we incorporate PYTHIA8 to study radial flow-like signatures in pp collisions at $\sqrt{s} = 13$ TeV as a function of transverse spherocity and pseudo-rapidity. The pseudo-rapidity dependence would help understand the scientific community for future upgrades. At the same time, the transverse spherocity will serve its purpose of identifying soft-QCD-dominated events in small collision systems. We present the mean transverse momentum, particle ratios, and kinetic freezeout parameters as a function of transverse spherocity and pseudo-rapidity in pp collisions at $\sqrt{s}$ = 13 TeV using PYTHIA8. We observe that the isotropic events show enhanced radial-flow effects and jetty events show the absence of radial-flow-like effects. For the first time, we show the transverse spherocity and pseudorapidity dependence of partonic modification factor in pp collisions, which clearly shows that by choosing transverse spherocity one can directly probe the radial-flow-like effects in pp collisions at the LHC.
Aswathy Menon K R, Suraj Prasad, Sushanta Tripathy, Neelkamal Mallick, Raghunath Sahoo
2023-09-15T11:45:44Z
http://arxiv.org/abs/2309.08336v1
Investigating radial-flow-like effects via pseudorapidity and transverse spherocity dependence of particle production in pp collisions at the LHC ###### Abstract Recent observations of quark-gluon plasma (QGP) like signatures in high multiplicity proton-proton (pp) collisions, have completed the heavy-ion physics community to re-examine the pp collisions for proper baseline studies. Event-shape-based studies in pp collisions have succeeded to a certain extent in identifying rare events mimicking such heavy-ion-like behaviour. In this manuscript, we incorporate PYTHIA8 to study radial flow-like signatures in pp collisions at \(\sqrt{s}=13\) TeV as a function of transverse spherocity and pseudo-rapidity. The pseudo-rapidity dependence would help understand the scientific community for future upgrades. At the same time, the transverse spherocity will serve its purpose of identifying soft-QCD-dominated events in small collision systems. We present the mean transverse momentum, particle ratios, and kinetic freezeout parameters as a function of transverse spherocity and pseudo-rapidity in pp collisions at \(\sqrt{s}=13\) TeV using PYTHIA8. We observe that the isotropic events show enhanced radial-flow effects and jetty events show the absence of radial-flow-like effects. For the first time, we show the transverse spherocity and pseudorapidity dependence of partonic modification factor in pp collisions, which clearly shows that by choosing transverse spherocity one can directly probe the radial-flow-like effects in pp collisions at the LHC. ## I Introduction The primary goal of heavy-ion collisions at ultra-relativistic energies is to probe the quantum chromodynamic (QCD) phase diagram. Such collisions at the Large Hadron Collider (LHC) and Relativistic Heavy-ion collider (RHIC) form a state of deconfined partons in thermal equilibrium called the quark-gluon plasma (QGP), which is believed to have existed a few microseconds after the Bigbang. It is nearly impossible to directly observe such a deconfined medium in heavy-ion collisions at the colliders due to its short lifetime. However, there are a few indirect signatures that can signify the presence of such a medium. A few of these signatures require a baseline. Traditionally, proton-proton (pp) collisions have been used as the baseline to study these signatures for last few decades; however, recent observations of ridge-like structures [1; 2], observation of strangeness enhancement [3] and radial flow-like signatures [4; 5; 6], in high multiplicity pp collisions have impelled the scientific community to examine the origin of such observations in pp collisions. Perturbative quantum chromodynamics (pQCD) inspired models such as PYTHIA [7] with the implementation of color reconnection (CR) and multi-partonic interactions (MPI) are able to imitate the radial flow-like effects in pp collisions [8]. The radial flow is believed to give a boost to the particles based on their transverse momentum, where higher momentum particles get a greater boost compared to the lower momentum particles for a given particle species. This transverse momentum-dependent boost due to radial flow gives rise to the broadening of the transverse momentum spectra of the particle, thus enhancing mean transverse momenta, and it depends upon the particle mass as particles with lower mass are less affected due to this radial flow compared to the particles with higher mass [9]. 
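As a quick illustration of this mass ordering (not a result from the paper, and not a PYTHIA8 or blast-wave calculation), a toy Monte Carlo in which particles of different masses are drawn from a common thermal transverse-momentum spectrum and then receive the same collinear radial boost shows the heavier species gaining more mean \(p_{\rm T}\); the temperature and velocity below are illustrative choices.

```python
# Toy illustration only: a common radial boost raises <pT> more for heavier species.
import numpy as np

rng = np.random.default_rng(1)
T, beta = 0.16, 0.5                               # illustrative: GeV slope, radial velocity
gamma = 1.0 / np.sqrt(1.0 - beta**2)
masses = {"pion": 0.140, "kaon": 0.494, "proton": 0.938}   # GeV/c^2

for name, m in masses.items():
    pt = rng.exponential(T, size=200_000)         # crude thermal pT spectrum
    mt = np.sqrt(pt**2 + m**2)                    # transverse mass
    pt_boosted = gamma * (pt + beta * mt)         # boost along the particle's direction
    print(f"{name:7s} <pT>: {pt.mean():.3f} -> {pt_boosted.mean():.3f} GeV/c")
```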
This broadening of transverse momentum spectra for particles of different masses leads to a peak-like structure around 2-3 GeV/\(c\) in transverse momentum in the ratios of particle yields for different masses. In Pb-Pb collisions, the central collisions are observed to have more such radial flow effect compared to the peripheral collisions [10]. The mean transverse radial flow velocity (\(\langle\beta_{\rm T}\rangle\)) is commonly extracted from the simultaneous Boltzmann Gibbs blastwave function fit of the identified particles' transverse momentum spectra. A study of the mean radial expansion velocity in Cu + Cu collisions at \(\sqrt{s_{\rm NN}}=200\) GeV, when performed as a function of pseudo-rapidity, it has been observed that as one goes towards the higher rapidity, the expansion velocity reduces as the system gets less dense [11]. This mean value of radial expansion velocity in the transverse plane is studied in a transport model approach and is found to have transverse spherocity dependence indicating that the isotropic events can have more radial flow compared to the jetty events [12; 13]. These radial flow-like effects in pp collisions using PYTHIA8 are shown to have proportionality with the number of multi-partonic interactions (\(N_{\rm mpi}\)). The high value of \(N_{\rm mpi}\) is prone to have enhanced radial flow-like effects [14]; while the effects are reduced for low \(N_{\rm mpi}\). However, in experiments, it is impossible to measure \(N_{\rm mpi}\) directly. Event shape-based studies are one such method to probe \(N_{\rm mpi}\) as they show a significant correlation in simulations. Transverse spherocity being an event-shape observable, is capable of separating the
2305.19688
VIPriors 3: Visual Inductive Priors for Data-Efficient Deep Learning Challenges
The third edition of the "VIPriors: Visual Inductive Priors for Data-Efficient Deep Learning" workshop featured four data-impaired challenges, focusing on addressing the limitations of data availability in training deep learning models for computer vision tasks. The challenges comprised of four distinct data-impaired tasks, where participants were required to train models from scratch using a reduced number of training samples. The primary objective was to encourage novel approaches that incorporate relevant inductive biases to enhance the data efficiency of deep learning models. To foster creativity and exploration, participants were strictly prohibited from utilizing pre-trained checkpoints and other transfer learning techniques. Significant advancements were made compared to the provided baselines, where winning solutions surpassed the baselines by a considerable margin in all four tasks. These achievements were primarily attributed to the effective utilization of extensive data augmentation policies, model ensembling techniques, and the implementation of data-efficient training methods, including self-supervised representation learning. This report highlights the key aspects of the challenges and their outcomes.
Robert-Jan Bruintjes, Attila Lengyel, Marcos Baptista Rios, Osman Semih Kayhan, Davide Zambrano, Nergis Tomen, Jan van Gemert
2023-05-31T09:31:54Z
http://arxiv.org/abs/2305.19688v1
# VIPriors 3: Visual Inductive Priors for Data-Efficient Deep Learning Challenges ###### Abstract The third edition of the "VIPriors: Visual Inductive Priors for Data-Efficient Deep Learning" workshop featured four data-impaired challenges, focusing on addressing the limitations of data availability in training deep learning models for computer vision tasks. The challenges comprised of four distinct data-impaired tasks, where participants were required to train models from scratch using a reduced number of training samples. The primary objective was to encourage novel approaches that incorporate relevant inductive biases to enhance the data efficiency of deep learning models. To foster creativity and exploration, participants were strictly prohibited from utilizing pre-trained checkpoints and other transfer learning techniques. Significant advancements were made compared to the provided baselines, where winning solutions surpassed the baselines by a considerable margin in all four tasks. These achievements were primarily attributed to the effective utilization of extensive data augmentation policies, model ensembling techniques, and the implementation of data-efficient training methods, including self-supervised representation learning. This report highlights the key aspects of the challenges and their outcomes. Visual inductive priors, challenge, image classification, object detection, instance segmentation, action recognition. ## 1 Introduction Data is fueling deep learning, yet obtaining high quality annotations is often costly. In recent years, extensive research has been dedicated to exploring ways to utilize large quantities of data to train comprehensive foundation models for vision and language [5], and to combine multiple modalities for weak supervision [43]. While these approaches have demonstrated impressive results, self-supervision is not yet the holy grail. Training on massive datasets still requires a significant amount of energy, contributing to carbon emissions. Furthermore, only a handful of deep learning behemoths have access to billions of data points and expensive deep learning hardware. In addition, large quantities of data may simply not be available in certain domains. The Visual Inductive Priors for Data-Efficient Deep Learning workshop (VIPriors) therefore aims to encourage research on learning efficiently from few data samples by combining the power of deep learning with hard-won knowledge priors from various fields. We focus on data efficiency through visual inductive priors. The Visual Inductive Priors for Data-Efficient Deep Learning workshop has now been organized for the third year in a row, with the latest 2022 edition taking place at ECCV in Tel Aviv, Israel. In order to stimulate research in data-efficient computer vision, the workshop includes challenges in which participants train computer vision models on small subsets of (publicly available) datasets. We challenge participants to submit solutions that are able to learn an effective representation of the dataset without access to the large quantities of data that is used to train state-of-the-art deep computer vision models. In this report, we present the outcomes of the third edition of these challenges. We discuss specific details and top-ranking solutions of each challenge. It was observed that the top competitors in all challenges heavily relied on model ensembling and data augmentation to improve the data efficiency of their solutions. 
Additionally, many of the participants' solutions utilized a limited number of backbones and baseline methods, which seem to possess properties conducive to learning from small data. To recognize submissions that introduce innovative methods, a jury prize for each challenge was awarded to the most noteworthy submission. ## 2 Challenges The workshop accommodates four common computer vision challenges in which the number of training samples are reduced to a small fraction of the full set: **Image classification**: We use a subset of Imagenet [15]. The subset contains 50 images from 1,000 classes for training, validation and testing. **Object detection**: DelftBikes [30] dataset is used for the object detection challenge. The dataset includes 8,000 bike images for training and 2,000 images for testing (Fig. 1). Each image contains 22 different bike parts that are annotated as bounding box, class and object state labels. **Instance segmentation**: The main objective of the challenge is to segment basketball players and the ball on images recorded of a basketball court. The dataset is provided by SynergySports1 and contains a train, validation and test set of basketball games recorded at different courts with instance labels. Footnote 1: [https://synergysports.com](https://synergysports.com) \begin{table} \begin{tabular}{p{42.7pt} p{113.8pt} p{113.8pt} p{113. **Action recognition**: For this challenge we have provided Kinetics400ViPriors, which is an adaptation of the well-known Kinetics400 dataset [27]. The training set consists of approximately 40k clips, while the validation and test sets contain about 10k and 20k clips, respectively. We provide a toolkit2 which consists of guidelines, baseline models and datasets for each challenge. The competitions are hosted on the Codalab platform. Each participating team submits their predictions computed over a test set of samples for which labels are withheld from competitors. Footnote 2: [https://github.com/VIPriors/vipriors-challenges-toolkit](https://github.com/VIPriors/vipriors-challenges-toolkit) The challenges include certain rules to follow: * Models ought to train from scratch with only the given dataset. * The usage of other data rather than the provided training data, pretraining the models and transfer learning methods are prohibited. * The participating teams need to write a technical report about their methodology and experiments. **Shared rankings.** Due to confusion around the exact deadline of the competitions, we have merged rankings of two different moments. This has resulted in shared places in some of the rankings of the individual challenges. ### _Classification_ Image classification serves as an important benchmark for the progress of deep computer vision research. In particular, the ImageNet dataset [15] has been the go-to benchmark for image classification research. ImageNet gained popularity because of its significantly larger scale than those of existing benchmarks. Ever since, even larger datasets have been used to improve computer vision, such as the Google-owned JFT-300M [47]. However, we anticipate that relying on the increasing scale of datasets is problematic, as increased data collection is expensive and can clash with privacy interests of the subjects. In addition, for domains like medical imaging, the amount of labeled data is limited and the collection and annotation of such data relies on domain expertise. Therefore, we posit that the design of data efficient methods for deep computer vision is crucial. 
As in the two earlier editions of this workshop, in our image classification challenge we provide a subset [29] of the Imagenet dataset [15] consisting of 50 images per class for each of the train, validation and test splits. The classification challenge had 14 participating teams, of which six teams submitted a report. The final ranking and the results can be seen in Table II. #### 2.1.1 First place The team from Xidian University led by Tianzhi Ma uses an ensemble of SE+PyramidNet, ResNeSt200e, ReXNet, EfficientNet-B8 and ConvNeXt-XL* models. They apply diverse data augmentation strategies to increase the diversity in the data, and include several other optimization tricks like RICP and hard fusion. Ultimately, they were able to improve their top-1 accuracy from last year's challenge (68.6) to a winning top-1 accuracy of 79%. #### 2.1.2 Second place & jury prize The team from Xidian University led by Xiaoqiang Lu uses only two models in their ensemble, and instead gain performance by using cross-decoupled knowledge distillation [74]. Other than this, only Automix [42] and label smoothing are required to secure second place. For this minimal yet effective solution we award this team the jury prize. #### 2.1.3 Third place The team from Xidian University lead by Yi Zou uses five different encoder architectures. They exhaustively apply knowledge distillation from all encoders to all other encoders to train twenty models, which are all ensembled for the final model. All models are trained with severe data augmentation: CutMix, random erasing, MixUp, AutoAugment. #### 2.1.4 Conclusion As in previous editions [6, 33], the crucial components of a competitive submission to the image classification competition are ensembling of many different classification architectures, as well as combining multiple different augmentation policies. Aside from label smoothing and training with larger image sizes, knowledge distillation gained in popularity among the methods used to train the networks. ### _Object Detection_ Similar to the object detection challenge last year [33], we also use DelftBikes [30] dataset this year (Fig. 1). Each image in the DelftBikes contains 22 labeled bike parts as class and bounding box labels of each part. In addition, the dataset includes extra object state labels as intact, missing, broken or occluded. The dataset has 10k bike images in total and 2k of the images are used for only testing purposes. The dataset contains different object sizes, and contextual and location biases that can cause false positive detections [28, 30]. Note that, some of the object boxes are noisy which introduces more challenges to detect object parts. We provide a baseline detector as a Faster RCNN with a Resnet-50 FPN [44] backbone from scratch for 16 epochs. This baseline network is trained with the original image size without any data augmentation. It performs 25.8% AP score on the test set. Note that, the evaluation is done on available parts which are intact, damaged and occluded parts. Fig. 1: Some images from the DelftBikes dataset. Each image has a single bike with 22 labeled parts. The detection challenge had 41 participant teams. The team from Xidian University obtained first place by 33% AP scores. Terminus Technologies and Vision Intelligence Department from Meituan followed them by 32.9% AP and 32.1% AP respectively. The team from Huawei Technologies and Tongji University won the jury prize for their 'coarse-to-fine' idea. #### 2.2.1 First place Lu et al. 
employ an ensemble of various YOLO detectors [3, 54, 55] and CBNetv2 [34] (Fig. 2). They design two-stage training: (i) pre-training by using weak data augmentation and (ii) fine-tuning by using strong data augmentation such as mosaic [3], mix-up [72], and copy-paste [32] and a weighted training strategy based on image uncertainty. The authors further improved the results by weighted boxes fusion (WBF) [45] and TTA strategies and obtain 33 % AP on the test set. #### 2.2.2 Second place Xu et al. train Cascade RCNN [7] using Swin Transformer [38], ConvNext [41] and ResNext [63] as backbone architectures. These backbones are pretrained by using self-supervised methods such as MoCoV3 [10] and MoBY [64]. In addition, they use AutoAugment [13], random flip and multi-scale augmentation methods [37] to improve the detection performance. Finally, the non-maximum weighted (NMW) [75] method, Soft-NMS [4] and model ensemble methods are used on the test set. The method obtained 32.9% AP on the test set. #### 2.2.3 Third place Zhao et al. initially train Cascade RCNN [7] detector using ConvNext backbone [41]. Then, they create a synthetic dataset (Fig. 3) from the training set and obtain pseudo labels on this dataset with the initial trained model. Afterwards, they train the same model only on the pseudo labels with smaller-resolution images. In the end, they retrain the pseudo-label pretrained network with the original train set and select some hard classes to improve the detector performance on them. During training phases, they also use various data augmentation methods as colour jittering and RGB shifting, mix-up [72] and AutoAugment [13]. The method obtains 32.1% AP detection performance. \begin{table} \begin{tabular}{l l r} \hline \hline Ranking & Teams & Top-1 Accuracy \\ \hline 1 & **Tianzhi Ma, Zihan Gao, Wenxin He, Licheng Jiao** & **78.7** \\ & _School of Artificial Intelligence, Xidian University._ & 77.9 \\ 2 \& J & Xiaojiang Lu, Chao Li, Chenghui Li, Xiao Tan, Zhongjian Huang, Yuting Yang & 77.9 \\ & _School of Artificial Intelligence, Xidian University._ & 77.7 \\ 3 & Yiu Zuo, Zitao Wang, Xiaowen Zhang, Licheng Jiao & 77.7 \\ & _School of Artificial Intelligence, Xidian University._ & 76.8 \\ 4 & Jhahao Wang, Hao Wang, Hua Yang, Fang Liu, Licheng Jiao & 75.4 \\ 5 & Wenxuan Sheng, Mengjia Wang, Zixiao Zhang, Fang Liu, Licheng Jiao & 75.4 \\ & _School of Artificial Intelligence, Xidian University._ & 70.8 \\ 6 & Boaliang Chen, Yuxuan Zhao, Fang Liu, Licheng Jiao & 70.8 \\ & _School of Artificial Intelligence, Xidian University._ & \\ \hline \hline \end{tabular} \end{table} TABLE II: Final rankings of the Image Classification challenge. Fig. 2: Bag of Freebies for training detector [73]. They train different models during the pretraining and fine-tuning phases with different types of data augmentation methods. They also use image uncertainty to improve object detection performance. #### 2.2.4 Jury prize Method of Zhao et al. has two phases: pretraining and adaptation phases (Fig. 4). In the pretraining phase, they utilize mosaic [3] and mix-up [72] data augmentations on object and image level features and train SimMIM [65]. In the adaptation phase, a pretrained encoder of SimMIM is used to initialize the backbone of Cascade RCNN [7]. In 'coarse detection', the model detects bike objects. In the 'fine detection' phase, the fine detection module runs on the cropped bike object from the previous phase and tries to detect relevant bike parts. The final model obtains 30.94% AP. 
The team earned the jury prize because of their 'coarse-to-fine' idea, well-written article and discussion of strategies that did not work. ### _Instance Segmentation_ Instance segmentation is the task of detecting and segmenting specific objects and instances in an image. With applications ranging from autonomous driving, surveillance, remote sensing to sport analysis, it is a fundamental computer vision problem. Similarly to last year, our challenge is based on the basketball dataset provided by SynergySports [53], consisting of images recorded during various basketball games played on different courts. The goal is to detect and predict segmentation masks of all players and ball objects in the images. With a mere 184, 62, and 64 samples for the train, validation and test splits, respectively, the dataset is considered very small. The test labels are withheld from the challenge participants and final performance on the test set is evaluated on an online server. The main metric used is the Average Precision (AP) @ 0.50:0.95. Our baseline method is based on the Detectron2 [62] implementation of Mask-RCNN [20]. Twelve teams submitted solutions to the evaluation server, of which six teams submitted a report to qualify their submission to the challenge. The final rankings are shown in Table IV. #### 2.3.1 First place The method of Yan et al. [66] introduces a task-specific data augmentation (TS-DA) strategy to generate additional training data, and a task-specific inference processing (TS-IP) which is applied at test time. TS-DA employs CopyPaste [18] augmentations with constraints on the absolute locations of the synthetic players and ball objects to ensure all objects are placed inside the court, and their relative locations to mimic player-ball interactions. Subsequently, geometric and photometric augmentations are applied to the image to further increase the variety in their appearance. During inference, random scaling and cropping is applied to the images, and additional filtering employed to the predictions to ensure only one basketball of reasonable dimensions is present on the court. The complete data augmentation policy is illustrated in Figure 5. The segmentation model is based on the Hybrid Task Cascade (HTC) detector [8] and the CBSwin-T backbone with CBFPN [35]. Mask Scoring R-CNN [23] is used to further improve segmentation quality. After training the model, it is further finetuned using the SWA [71] strategy. Fig. 4: Method pipeline. First, the backbone is pretrained by using SimMIM [65] to obtain strong features. In the adaptation phase, coarse to fine detection strategy improves detection. Fig. 5: Data augmentation policy of the first-place instance segmentation submission by Yan et al. [66]. Fig. 3: Synthetic images generated for backbone pretraining. #### 2.3.2 Shared second place - A Leng et al. demonstrate that a straightforward combination of well-proven methods can yield near-SoTA performance. The approach uses a Swin Transformer-Large [38] as the backbone, and the pipeline is based on CBNetV2 [35], as shown in Figure 6. In terms of data augmentations the method relies on a combination of AutoAugment [13], ImgAug [26] and Copy-Paste [18]. #### 2.3.3 Shared second place - B Lu et al. make use of the popular HTC detector [8] with CBSwin-T [35] backbone with CBFPN [35] using group normalization, Mosaic, test-time augmentations and the Task-Specific Copy-Paste Data Augmentation Method [18] from a previous edition of the VIPriors Instance Segmentation challenge. 
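To make the Copy-Paste component of the winning TS-DA strategy concrete, here is a minimal, generic sketch (not the authors' code); the court-specific placement rules are reduced to a user-supplied map of allowed paste positions.

```python
# Generic Copy-Paste augmentation sketch (NumPy only); not any team's implementation.
# An object cut from one image is pasted into another, but only at positions
# marked as allowed -- loosely mimicking the "inside the court" constraint of TS-DA.
import numpy as np

def copy_paste(src_img, src_mask, dst_img, dst_mask, allowed, rng):
    """src_mask/dst_mask are bool HxW arrays; `allowed` marks valid top-left corners."""
    ys, xs = np.where(src_mask)
    obj = src_img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    m = src_mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = m.shape
    H, W = dst_mask.shape
    candidates = np.argwhere(allowed[:H - h + 1, :W - w + 1])
    if len(candidates) == 0:
        return dst_img, dst_mask                      # nowhere valid to paste
    y0, x0 = candidates[rng.integers(len(candidates))]
    region = (slice(y0, y0 + h), slice(x0, x0 + w))
    dst_img[region][m] = obj[m]                       # paste pixels under the mask
    dst_mask[region] |= m                             # extend the instance mask
    return dst_img, dst_mask
```

In the actual submission, the pasted players and ball are additionally jittered geometrically and photometrically, and ball positions are further constrained relative to players, as described above.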
Moreover, different backbones, namely ResNet [21], ConvNeXt [41], Swinv2 [38] and CBNetv2 [35] are trained and combined using Model Soups [59]. Multiple predictions are combined together using mask voting. #### 2.3.4 Third place The method of Zhang et al. employs Location-aware MixUp, RandAugment, GridMask, Random scaling, Copy-paste, Multi-scale augmentation, and test-time augmentation in terms of data augmentation techniques. The model used is the popular HTC detector [8] and soft non-maxima suppression is applied on the predicted target boxes. The overall framework is depicted in figure 7. #### 2.3.5 Jury prize This year's jury prize is awarded to Sparse Instance Activation for Real-Time Instance Segmentation [12] by Cheng et al. The paper presents a method for instance segmentation using a novel representation of instance activation maps. These maps highlight informative regions for each object, which are then used to obtain instance-level features for recognition and segmentation. The method avoids the need for non-maximum suppression in post-processing by predicting objects in a one-to-one style using bipartite matching. ### _Action Recognition_ Many of the popular Action Recognition models are deep networks that require a large amount of data for training, which can be challenging when there is limited data availability or insufficient compute resources. In line with the workshop's goals, we present the Kinetics400ViPriors dataset, which is a modified version of the well-known Kinetics400 dataset. We have created a smaller variant with 40k, 10k, and 20k clips for the train, validation, and test sets, respectively, while preserving the original number of action classes. Our aim is to motivate researchers in Action Recognition to develop efficient models that can leverage visual prior knowledge from the data. For evaluation, we use the average classification accuracy across all classes on the test set. The accuracy for a single class is calculated as \(\mathrm{Acc}=\frac{P}{N}\), where \(P\) represents the number of correct predictions for the evaluated class and \begin{table} \begin{tabular}{l l r} \hline \hline Ranking & Teams & \% AP @ 0.50:0.95 \\ \hline 1 & **Bo Yan, Xingran Zhao, Yadong Li, Hongbin Wang.** & **53.1** \\ **_Ant Group, China_. & & **50.6** \\ 2 (shared) & Fuxing Leng, Jinghua Yan, Peibin Chen, Chenglong Yi. & 50.6 \\ _ByteDance, Huazhong University of Science and Technology._ & & 50.6 \\ _Xiaoqiang Lu, Yuting Yang, Zhongjian Huang._ & & 50.6 \\ _School of Artificial Intelligence, Xidian University, Xi’an, China._ & & 49.8 \\ 3 & Junpei Zhang, Kexin Zhang, Rui Peng, Yanbiao Ma, Licheng Jiao Fang Liu. & 49.8 \\ 3 & _Team Yunjiao,Mta._ & & 47.6 \\ 4 & Yi Cheng, ShuHan Wang, Yifei Chen, Zhongjian Huang, & & 47.6 \\ 4 & _School of Artificial Intelligence, Xidian University, Xi’an, China._ & & 34.0 \\ 5 \& J & (J) School of EIC, Huazhong University of Science \& Technology, (2) Horizon Robotics; \\ & & (3) Institute of Automation, Chinese Academy of Sciences (CASIA) \\ \hline \hline \end{tabular} \end{table} TABLE IV: Final rankings of the Instance Segmentation challenge. J indicates jury prize. Fig. 6: Instance segmentation model architecture and training pipeline of the method of Leng et al. Fig. 7: Overview of instance segmentation method by the third place competitor, Zhang et al. \(N\) is the total number of samples in that class. The average accuracy is determined by taking the mean of accuracies for all classes. 
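For reference, this evaluation metric amounts to the following few lines (a straightforward re-implementation of the formula above, not the official evaluation script):

```python
import numpy as np

def mean_class_accuracy(y_true, y_pred):
    """Mean over classes of Acc_c = P_c / N_c, where N_c is the number of
    test clips of class c and P_c the number of them predicted correctly."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(per_class))

# Example with three classes and an unbalanced test set:
print(mean_class_accuracy([0, 0, 1, 2, 2, 2], [0, 1, 1, 2, 2, 0]))
```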
9 teams submitted solutions to the evaluation server, of which 3 teams submitted a report to qualify their submission to the challenge. The final rankings are shown in Table V. #### 2.4.1 First place The authors train a selection of models, including R(2+1)D [49], SlowFast [16], CSN [50], X3D [17], TANet [40] and Timesformer [2], and apply a model fusion approach by assigning different weights to the models and using the soft voting method to combine their results. In terms of data augmentation, frames were extracted from videos and a subset was selected by choosing every second frame. The videos were resized and noise was added through random flipping. During testing, TenCrop was used as a test-time enhancement. The evaluation involved ten-fold cross-validation, where the training dataset was combined with the validation dataset. An overview of the method is provided in Fig. 8. #### 2.4.2 Second place The method proposes a multi-network dynamic fusion model combining a variety of backbones, including SlowFast [16], Timesformer [2], TIN [46], TPN [67], Video Swin Transformers [39], R(2+1)D [49], X3D [17], DirecFormer [51]. Model predictions are combined as a weighted average by the prediction score of each model. Test-time augmentation with majority voting is used, as well as AutoAugment [13], CutMix [68], and a variety of other spatial, photometric and temporal augmentations during training. An overview of the method is provided in Fig. 9. #### 2.4.3 Third place & Jury prize The method combines self-supervised pre-training of various backbone models, optical flow estimation and model ensembling to train a data efficient video classification model. First, the 2D model encoders are pre-trained using the MoCo [19] self-supervised representation learning framework on image data using the individual frames of the provided dataset. Next, optical flow features are extracted using the TVL-1 [70] method. To correct for camera movement, consecutive image frames are aligned by calculating the transformation matrix based on extracted SIFT features. Finally, a range of models including TSN [57], TANet [40], TPN [67], SlowFast [16], CSN [50] and Video MAE [48] are trained on the training data and pre-extracted optical flow features. MixUp [72] and CutMix [68] data augmentation is employed. Model ensembling is performed by concatenating the features of all models and training a single linear classifier layer after normalization. An ablation study is performed to show that self-supervised pre-training improves model performance. ## 3 Conclusion We have summarized all solutions in Table I in terms of the encoder architecture, data augmentation techniques and main methods used. Organizing the same challenges for the third year in a row gives a unique perspective on trends: which methods and/or architectures prevail over time, and which are replaced? The use of combining large numbers of models in ensembles and heavy data augmentation have been unchanging throughout the VIPiors challenge series. The models used in the ensembles are a mix of CNNs and Vision Transformers for the tasks of object detection and instance segmentation, whereas for image classification and action recognition Vision Transformers are not seeing use in our challenges. As for data augmentation, AutoAugment, MixUp and CutMix are unchanging constants in the training regimes of our competitors, regardless of the task. 
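The weighted soft-voting fusion described in the action recognition entries is representative of how most of these ensembles are combined. A minimal sketch (the model names and weights in the usage comment are placeholders; in practice the weights would be tuned on validation data):

```python
import numpy as np

def weighted_soft_vote(prob_list, weights):
    """Fuse per-model class-probability matrices of shape (n_clips, n_classes)
    by a weighted average and return the predicted class per clip."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = sum(wi * np.asarray(p) for wi, p in zip(w, prob_list))
    return fused.argmax(axis=1)

# e.g. preds = weighted_soft_vote([p_slowfast, p_csn, p_timesformer], [0.4, 0.3, 0.3])
```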
Though we did not explicitly perform the required analysis, we cannot escape the impression that the simplicity of model ensembling is hard to beat with task-specific or domain-specific knowledge, especially when considering the effort required in design and implementation. Winning methods tend to use ensembling, and the bigger the ensemble, the better, as is shown in the ablation studies of some of the competitors' reports. If one is to follow this approach, we speculate that choosing models with a variety of inductive Fig. 8: Method proposed by first place Song et al. from Xidian University. Fig. 9: Schematic diagram of the method (inference mode) proposed by second place He et al. from Xidian University. biases (e.g. CNNs and Vision Transformers) could make the ensemble more effective. However, such heavy use of ensembles may just be possible in our challenges because of the limited size of the datasets, which makes training many models feasible.
2305.19850
On Newton's identities in positive characteristic
Newton's identities provide a way to express elementary symmetric polynomials in terms of power polynomials over fields of characteristic zero. In this article, we study the failure of this relation in positive characteristic and what can be recovered. In particular, we show how one can write the elementary symmetric polynomials as rational functions in the power polynomials over any commutative unital ring.
Sjoerd de Vries
2023-05-31T13:40:51Z
http://arxiv.org/abs/2305.19850v3
# On Newton's identities over rings ###### Abstract Newton's identities provide a way to express elementary symmetric polynomials in terms of power polynomials over fields of characteristic zero. In this article we study symmetric polynomials over arbitrary commutative rings with unity. We show that in this setting, one can recover the elementary symmetric polynomials as rational functions in the power polynomials. ###### Contents * 1 The subalgebra generated by power polynomials * 2 Elementary symmetric polynomials in terms of power polynomials * 2.1 The case of a general ring * 2.2 The case of \(\mathbb{F}_{r}\)-algebras ## Introduction The elementary symmetric polynomials, resp. power polynomials, in \(n\) variables are defined as \[e_{k}(x_{1},\ldots,x_{n}) =\sum_{1\leq i_{1}<\ldots<i_{k}\leq n}x_{i_{1}}x_{i_{2}}\ldots x_{ i_{k}}; \tag{1}\] \[p_{k}(x_{1},\ldots,x_{n}) =\sum_{i=1}^{n}x_{i}^{k}. \tag{2}\] In the following, we write \(e_{k}\) (resp. \(p_{k}\)) for the elementary symmetric (resp. power) polynomials in \(n\) variables, where \(n\) is some fixed number which should be clear from the context. Equation (1) holds for \(k\geq 1\), and \(e_{0}\) is defined to be \(1\). Equation (2) holds for \(k\geq 0\), so that \(p_{0}(x_{1},\ldots,x_{n})=n\). In particular, \(e_{k}=0\) for \(k>n\), but \(p_{k}\neq 0\) for all \(k\). For an integer partition \(\lambda=(k_{1},\ldots,k_{l})\), we write \(e_{\lambda}:=e_{k_{1}}\ldots e_{k_{l}}\) and \(p_{\lambda}:=p_{k_{1}}\ldots p_{k_{l}}\). The importance of the elementary symmetric polynomials is made evident by the following theorem (cf. Theorem 2.4): **Theorem 0.1**.: Let \(K\) be any field. Then the \(K\)-algebra of symmetric polynomials in \(n\) variables is equal to \(K[e_{1},\ldots,e_{n}]\). The power polynomials are related to the elementary symmetric polynomials via Newton's identities: \[ke_{k}=\sum_{i=1}^{k}(-1)^{i-1}e_{k-i}p_{i}. \tag{3}\] It follows that for every \(1\leq k\leq n\) such that \(k!\) is invertible, one has (see e.g. [10, p.28]) \[e_{k}=\frac{1}{k!}\det\left(\begin{array}{ccccc}p_{1}&1&0&\cdots&0\\ p_{2}&p_{1}&2&\cdots&0\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ p_{k-1}&p_{k-2}&\cdots&p_{1}&k-1\\ p_{k}&p_{k-1}&\cdots&p_{2}&p_{1}\end{array}\right). \tag{4}\] An immediate corollary of Theorem 0.1 and the above equation is the following: **Theorem 0.2**.: Let \(K\) be a field of characteristic zero. Then \(K[e_{1},\ldots,e_{n}]=K[p_{1},\ldots,p_{n}]\). On the other hand, suppose \(K\) has characteristic \(r>0\) (we avoid using \(p\) to avoid confusion with the power polynomials). Then we have algebraic relations \[p_{kr}=x_{1}^{kr}+\ldots+x_{n}^{kr}=(x_{1}^{k}+\ldots+x_{n}^{k})^{r}=p_{k}^{r} \tag{5}\] for any integer \(k\geq 1\). Thus, contrary to Theorem 0.2, it is not true that \(K[e_{1},\ldots,e_{n}]=K[p_{1},\ldots,p_{n}]\) when \(n\geq r\), because \(K[e_{1},\ldots,e_{n}]\) has transcendence degree \(n\) while \(K[p_{1},\ldots,p_{n}]\) has transcendence degree at most \(n-\lfloor n/r\rfloor\). This is related to the fact that the identity (4) is not valid when \(k!\) is not invertible. However, one may wonder if the set of _all_ power polynomials still generates the ring of symmetric polynomials over fields of positive characteristic. This question has a negative answer, as the following example shows. Suppose we are working with symmetric polynomials in two variables over \(\mathbb{F}_{2}\). Then \(p_{k}(1,1)=1^{k}+1^{k}=0\) for all values of \(k\). 
If \(e_{2}=x_{1}x_{2}\) could be written as an algebraic combination of \(\{p_{k}\}_{k\in\mathbb{N}}\), i.e. as a finite sum over integer partitions \[e_{2}=\sum_{\lambda}c_{\lambda}p_{\lambda} \tag{6}\] with \(c_{\lambda}\in\mathbb{F}_{2}\), then evaluating both sides at \((1,1)\) (resp. \((0,0)\)) gives \(c_{0}=1\) (resp. \(c_{0}=0\)). Hence no such relation (6) can exist. However, notice that in the above case, one can write \(e_{2}\) in two variables in terms of power polynomials as \[e_{2}=x_{1}x_{2}=\frac{x_{1}^{2}x_{2}+x_{1}x_{2}^{2}}{x_{1}+x_{2}}=\frac{(x_{ 1}^{3}+x_{1}^{2}x_{2}+x_{1}x_{2}^{2}+x_{2}^{3})+x_{1}^{3}+x_{2}^{3}}{x_{1}+x_{ 2}}=\frac{p_{1}^{3}+p_{3}}{p_{1}}. \tag{7}\] A natural question is then whether it is true that the elementary symmetric polynomials can always be obtained as rational functions in the power polynomials. This question has been answered in the affirmative by Schonhage: **Theorem 0.3** ([12, Theorem 2]).: Let \(K\) be any field. Then \(K(e_{1},\ldots,e_{n})=K(p_{1},p_{2},\ldots)\). In this paper, we give a new proof which holds over any base ring: **Theorem 0.4** (cf. Theorem 2.8).: Let \(R\) be a commutative ring with unity. Then \[Q(R[e_{1},\ldots,e_{n}])=Q(R[p_{1},p_{2},\ldots]),\] where \(Q(-)\) denotes the total ring of fractions. We prove Theorem 2.8 by providing a simple algorithm to explicitly express any elementary symmetric polynomial as a rational function of power polynomials. Moreover, we give an explicit description of the denominators which appear in these expressions (cf. Remark 2.10). The outline of this paper is as follows. In SS1, we study the subalgebra generated by the power polynomials in positive characteristic. In SS2.1, we prove Theorem 2.8 constructively and give examples and applications. In SS2.2, we express elementary symmetric polynomials in terms of power polynomials over \(\mathbb{F}_{r}\)-algebras, using an algorithm which differs from the method of [10]. As far as we know, the main results in SS1 and SS2.1 have not been obtained before. ### Acknowledgements I am grateful to Per Alexandersson for helpful discussions and providing a combinatorist's perspective on the contents of this paper. I much appreciated helpful comments from Bruce Sagan and Chris Bowman about the existing literature, and thank Alin Bostan for pointing out the paper [10]. I would also like to thank my PhD-advisor Jonas Bergstrom and my co-advisor Olof Bergvall. ## 1 The subalgebra generated by power polynomials In this section, we work over a field \(K\) of characteristic \(r>0\). We assume that \(n\geq r\), as otherwise the situation is identical to the characteristic \(0\) case. A natural question is whether one can describe the subalgebra \(K[p_{1},p_{2},\ldots]\subseteq K[e_{1},\ldots,e_{n}]\) in terms of elementary symmetric polynomials. For this purpose, we work in the \(K\)-basis \(\{e_{\lambda}\ |\ \lambda\mbox{ is an integer partition}\}\) of \(K[e_{1},\ldots,e_{n}]\), and when we write "monomial", we mean a monomial expressed in terms of elementary symmetric polynomials; that is, a constant multiple of some \(e_{\lambda}\). By applying the Newton identities inductively, one obtains the following expression for the power polynomials in terms of the elementary symmetric polynomials: \[p_{m}=(-1)^{m}\sum_{t_{1}+2t_{2}+\ldots+mt_{m}=m}c_{t_{1},\ldots,t_{m}}\prod_{ i=1}^{m}(-e_{i})^{t_{i}} \tag{8}\] where \[c_{t_{1},\ldots,t_{m}}=\frac{m\cdot(t_{1}+t_{2}+\ldots+t_{m}-1)!}{t_{1}!t_{2}! 
\cdots t_{m}!}.\] Since the coefficients \(c_{t_{1},\ldots,t_{m}}\) are integers, this is a formula that holds in \(K\). However, it is not clear from the formula (8) how to judge whether a given symmetric polynomial lies in \(K[p_{1},p_{2},\ldots]\) or not. This motivates the question of how one can describe the subalgebra generated by power polynomials and what its properties are. Proposition 1.1 shows that the containment \(K[p_{1},p_{2},\ldots]\subset K[e_{1},\ldots,e_{n}]\) is always proper when \(n\geq r\). Proposition 1.3 shows that no finite number of power polynomials generates \(K[p_{1},p_{2},\ldots]\). **Proposition 1.1**.: Denote by \(E\) the \(K\)-algebra generated by monomials \(e_{\lambda}\) such that at least one part of \(\lambda\) is coprime to \(r\). Then \[K[p_{1},p_{2},\ldots]\subseteq E\subsetneq K[e_{1},\ldots,e_{n}].\] Proof.: Since \(p_{kr}=p_{k}^{r}\) for all \(k\), we see that \(K[p_{1},p_{2},\ldots]\) is generated by power polynomials \(p_{k}\) with \((k,r)=1\). Any such power polynomial lies in \(E\). Indeed, any monomial \(e_{\lambda}\) corresponding to a partition \(\lambda\) with all parts divisible by \(r\) has degree a multiple of \(r\), while \(p_{k}\) is homogeneous of degree \(k\), which by assumption is not divisible by \(r\). The first inclusion now follows because \(E\) is a subalgebra of \(K[e_{1},\ldots,e_{n}]\). To see that \(E\) is a proper subalgebra, note that e.g. \(e_{r}\notin E\). **Example 1.2**.: One can check by hand that for \(K=\mathbb{F}_{2}\) and \(n=3\), one cannot write \(e_{3}=x_{1}x_{2}x_{3}\) as an algebraic combination of power polynomials. This shows that the containment \(K[p_{1},p_{2},\ldots]\subseteq E\) is not an equality in general. **Proposition 1.3**.: Let \(K\) be a field of characteristic \(r>0\). Fix an integer \(k\) not divisible by \(r\). Then \[K[p_{1},\ldots,p_{k-1}]\subsetneq K[p_{1},\ldots,p_{k}].\] Proof.: Write \(k=ar+b\) for integers \(a\geq 0\) and \(0<b<r\). Applying the identity (8) with \(m=k\), \(t_{b}=1\), and \(t_{r}=a\) tells us that \(p_{k}\) contains the monomial \((-1)^{a+1}e_{r}^{a}e_{b}\). We claim that there are no elements in \(K[p_{1},\ldots,p_{k-1}]\) which, when written in terms of the elementary symmetric polynomials, contain the monomial \(e_{r}^{a}e_{b}\); this will finish the proof. Suppose for contradiction that \(f\) is a counterexample. Consider an expression \[f=\sum_{v\in\mathbb{N}^{k-1}}c_{v}p_{1}^{v_{1}}\ldots p_{k-1}^{v_{k-1}},\] with \(v_{i}=0\) for all \(r\mid i\); we can achieve this because \(p_{jr}=p_{j}^{r}\). By the algebraic independence of the elementary symmetric polynomials, the monomial \(e_{r}^{a}e_{b}\) in \(f\) must be a product of monomials of the form \(e_{r}^{\alpha}e_{b}^{\beta}\), with \(\alpha\leq a\) and \(\beta\leq 1\), occurring in \(p_{1}^{v_{1}},\ldots,p_{k-1}^{v-1}\). Since there are no monomials of this form with \(\beta=0\) by Proposition 1.1, we see that the entire monomial \(e_{r}^{a}e_{b}\) must occur in some \(p_{m}\). This is a contradiction for degree reasons: we obtain \(m=ra+b=k>m\). This finishes the proof. **Corollary 1.4**.: Whenever \(n\geq r\), the subalgebra \[K[p_{1},p_{2},\ldots]\subset K[x_{1},\ldots,x_{n}]\] is not finitely generated. Proof.: Given any finite set \(S=\{f_{1},\ldots,f_{m}\}\) of algebraic combinations of power polynomials in \(n\) variables, we have \(K[S]\subseteq K[p_{1},\ldots,p_{N}]\) for \(N=\max\{\deg f_{i}\}\). Thus Proposition 1.3 implies that \(K[S]\) is properly contained in \(K[p_{1},p_{2},\ldots]\). 
**Example 1.5**.: Let \(K=\mathbb{F}_{2}\) and \(n=2\). Then we have for any \(N=2m+1\): \[\mathbb{F}_{2}[p_{1},p_{3},\ldots,p_{N}]=\mathbb{F}_{2}[e_{1},e_{1}e_{2},e_{1} e_{2}^{2},\ldots,e_{1}e_{2}^{m}].\] Note that this example shows that \(K[e_{1},\ldots,e_{n}]\) is not the integral closure of \(K[p_{1},p_{2},\ldots]\). **Remark 1.6**.: As there is no finite generating set for \(K[p_{1},p_{2},\ldots]\), it is not clear whether there is a more satisfactory description of this subalgebra in terms of elementary symmetric polynomials than the expressions given by the identities (8). Note that these relations may be slightly simplified by the fact that \(e_{1},\ldots,e_{r-1}\in K[p_{1},\ldots,p_{r-1}]\). **Remark 1.7**.: The discrepancy between the cases \(n<r\) and \(n\geq r\) is reminiscent of the modular representation theory of the symmetric group \(S_{n}\). It would be interesting to know whether this theory can be applied to offer a more conceptual interpretation of the results in this paper. Elementary symmetric polynomials in terms of power polynomials In this section, we give an effective proof that any symmetric polynomial over any ring \(R\) can be expressed as a rational function in the power polynomials. We then consider the case where \(R\) contains a field of positive characteristic, where some improvements can be made. ### The case of a general ring Let \(R\) be any commutative ring with unity. For \(k\geq 0\), we view \(e_{k}\), \(p_{k}\) as elements of \(R[x_{1},\dots,x_{n}]\). We first prove some preliminary results needed in the proof of Theorem 2.8. We fix the following notation. For a polynomial \(f\in R[x_{1},\dots,x_{n}]\) and \(k\in\mathbb{N}\), let \[f(\hat{x}_{k}):=f(x_{1},\dots,\hat{x}_{k},\dots,x_{n}):=f(x_{1},\dots,0,\dots, x_{n}).\] In particular, \(f(\hat{x}_{k})=f\) if \(k>n\). **Lemma 2.1**.: Let \(M=(m_{i,j})\) be a \(d\times d\) matrix with entries given by power polynomials: \(m_{i,j}=p_{r_{i,j}}(x_{1},\dots,x_{n_{i,j}})\) for some integers \(r_{i,j},n_{i,j}\geq 0\). Define \[\epsilon_{i,j,k}:=\begin{cases}x_{k}^{r_{i,j}}&k\leq n_{i,j};\\ 0&\text{otherwise}.\end{cases}\] Then for any \(1\leq i\leq d\) and any \(k\geq 1\), we have \[\det M=\det\begin{pmatrix}m_{1,1}&m_{1,2}&\cdots&m_{1,d}\\ \vdots&\vdots&&\vdots\\ m_{i-1,1}&m_{i-1,2}&\cdots&m_{i-1,d}\\ \epsilon_{i,1,k}&\epsilon_{i,2,k}&\cdots&\epsilon_{i,d,k}\\ m_{i+1,1}&m_{i+1,2}&\cdots&m_{i+1,d}\\ \vdots&\vdots&&\vdots\\ m_{d,1}&m_{d,2}&\cdots&m_{d,d}\end{pmatrix}+\det\begin{pmatrix}m_{1,1}&m_{1,2}& \cdots&m_{1,d}\\ \vdots&\vdots&&\vdots\\ m_{i-1,1}&m_{i-1,2}&\cdots&m_{i-1,d}\\ m_{i,1}(\hat{x}_{k})&m_{i,2}(\hat{x}_{k})&\cdots&m_{i,d}(\hat{x}_{k})\\ m_{i+1,1}&m_{i+1,2}&\cdots&m_{i+1,d}\\ \vdots&\vdots&&\vdots\\ m_{d,1}&m_{d,2}&\cdots&m_{d,d}\end{pmatrix}\] Proof.: This follows from the \(i\)-th row expansion of \(\det M\), since \(m_{i,j}=\epsilon_{i,j,k}+m_{i,j}(\hat{x}_{k})\). **Proposition 2.2**.: Let \(d,n\in\mathbb{Z}_{\geq 1}\) and consider the Hankel matrix \[P_{d,n}(x_{1},\dots,x_{n})=\begin{pmatrix}p_{1}&p_{2}&p_{3}&\cdots&p_{d}\\ p_{2}&p_{3}&p_{4}&\cdots&p_{d+1}\\ p_{3}&p_{4}&p_{5}&\cdots&p_{d+2}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ p_{d}&p_{d+1}&p_{d+2}&\cdots&p_{2d-1}\end{pmatrix} \tag{9}\] Then the following hold: 1. For any \(d\), we have \[\det P_{d,d}=e_{d}\prod_{1\leq i<j\leq d}(x_{i}-x_{j})^{2}.\] 2. 
For any \(d\) and \(n\), we have \[\det P_{d,n}=\sum_{1\leq i_{1}<i_{2}<\dots<i_{d}\leq n}\det P_{d,d}(x_{i_{1}},x _{i_{2}},\dots,x_{i_{d}}).\] In particular, \(\det P_{d,n}\) is non-zero if and only if \(d\leq n\). Proof.: **Proof of 1.** If we write \[P_{d,d}=\begin{pmatrix}1&1&\cdots&1\\ x_{1}&x_{2}&\cdots&x_{d}\\ x_{1}^{2}&x_{2}^{2}&\cdots&x_{d}^{2}\\ \vdots&\vdots&\ddots&\vdots\\ x_{1}^{d-1}&x_{2}^{d-1}&\cdots&x_{d}^{d-1}\end{pmatrix}\begin{pmatrix}x_{1}&x_{ 1}^{2}&\cdots&x_{1}^{d}\\ x_{2}&x_{2}^{2}&\cdots&x_{2}^{d}\\ x_{3}&x_{3}^{2}&\cdots&x_{3}^{d}\\ \vdots&\vdots&\ddots&\vdots\\ x_{d}&x_{d}^{2}&\cdots&x_{d}^{d}\end{pmatrix}\] then the first matrix is a standard Vandermonde matrix, which has determinant \[\prod_{1\leq i<j\leq d}(x_{j}-x_{i}).\] The second matrix is of the same form after dividing row \(i\) by \(x_{i}\) and taking the transpose; hence it has determinant \(e_{d}\prod_{1\leq i<j\leq d}(x_{j}-x_{i})\). The result follows. **Proof of 2.** Fix \(d\). The idea is to decompose \(\det P_{d,n}\) completely using Lemma 2.1. First consider the case \(n=d\). By applying Lemma 2.1 with \(i=k=1\), we obtain \[\det P_{d,d}=\det\begin{pmatrix}x_{1}&x_{1}^{2}&\cdots&x_{1}^{d}\\ p_{2}&p_{3}&\cdots&p_{d+1}\\ \vdots&\vdots&\ddots&\vdots\\ p_{d}&p_{d+1}&\cdots&p_{2d-1}\end{pmatrix}+\det\begin{pmatrix}p_{1}(\hat{x}_{ 1})&p_{2}(\hat{x}_{1})&\cdots&p_{d}(\hat{x}_{1})\\ p_{2}&p_{3}&\cdots&p_{d+1}\\ \vdots&\vdots&\ddots&\vdots\\ p_{d}&p_{d+1}&\cdots&p_{2d-1}\end{pmatrix}\] In the first matrix, one can replace each power polynomial \(p_{j}\) with \(p_{j}(\hat{x}_{1})\), by replacing row \(i\) with row \(i-(x_{1}^{i-1}\) times row \(1)\) for all \(i=2,\ldots,d\). The second determinant we can reduce again by applying Lemma 2.1 with \(i=1\) and \(k=2\). Continuing this process gives the expression \[\det P_{d,d}=\sum_{k=1}^{d}\det\begin{pmatrix}x_{k}&x_{k}^{2}&\cdots&x_{k}^{d} \\ p_{2}(\hat{x}_{k})&p_{3}(\hat{x}_{k})&\cdots&p_{d+1}(\hat{x}_{k})\\ \vdots&\vdots&\ddots&\vdots\\ p_{d}(\hat{x}_{k})&p_{d+1}(\hat{x}_{k})&\cdots&p_{2d-1}(\hat{x}_{k})\end{pmatrix}\] By applying Lemma 2.1 inductively to the other rows, one obtains the following expression for \(\det P_{d,d}\): \[\det P_{d,d}=\sum_{\sigma\in S_{d}}\det\begin{pmatrix}x_{\sigma(1)}&x_{\sigma (1)}^{2}&\cdots&x_{\sigma(1)}^{d}\\ x_{\sigma(2)}^{2}&x_{\sigma(2)}^{3}&\cdots&x_{\sigma(2)}^{d+1}\\ \vdots&\vdots&\ddots&\vdots\\ x_{\sigma(d)}^{d}&x_{\sigma(d)}^{d+1}&\cdots&x_{\sigma(d)}^{2d-1}\end{pmatrix} \tag{10}\] where \(S_{d}\) denotes the symmetric group on \(d\) letters. For arbitrary \(n\), one can use the same method to express \(\det P_{d,n}\) as \[\det P_{d,n}=\sum_{1\leq i_{1},\ldots,i_{d}\leq n}\det\begin{pmatrix}x_{i_{1} }&x_{i_{1}}^{2}&\cdots&x_{i_{1}}^{d}\\ x_{i_{2}}^{2}&x_{i_{2}}^{3}&\cdots&x_{i_{2}}^{d+1}\\ \vdots&\vdots&\ddots&\vdots\\ x_{i_{d}}^{d}&x_{i_{d}}^{d+1}&\cdots&x_{i_{d}}^{2d-1}\end{pmatrix}\] where such a determinant is automatically zero if there is a repetition in the indices (since one row will be a multiple of another). Thanks to (10), we may write this determinant as \[\det P_{d,n}=\sum_{1\leq i_{1}<i_{2}<\ldots<i_{d}\leq n}\det P_{d,d}(x_{i_{1}},x_ {i_{2}},\ldots,x_{i_{d}}).\] When \(d\leq n\), the above expression is non-zero because it contains the monomial \(x_{1}x_{2}^{3}\ldots x_{d}^{2d-1}\). This finishes the proof. 
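The closed form in part 1 of Proposition 2.2 is easy to check symbolically for small \(d\); the following SymPy snippet (our own verification script, not part of the argument above) compares both sides for \(d=2,3\).

```python
import sympy as sp

def P_matrix(xs, d):
    """The Hankel matrix (9) built from the power polynomials of xs."""
    p = lambda k: sum(x**k for x in xs)
    return sp.Matrix(d, d, lambda i, j: p(i + j + 1))

for d in (2, 3):
    xs = sp.symbols(f"x1:{d + 1}")
    lhs = sp.expand(P_matrix(xs, d).det())
    e_d = sp.Mul(*xs)                      # e_d(x_1, ..., x_d) = x_1 ... x_d
    disc = sp.Mul(*[(xs[i] - xs[j])**2
                    for i in range(d) for j in range(i + 1, d)])
    assert lhs == sp.expand(e_d * disc)
    print(f"d = {d}: det P_dd = e_d * prod (x_i - x_j)^2 verified")
```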
**Remark 2.3**.: The method used in the proof of part 2 of Proposition 2.2 works for any matrix \(M=(m_{i,j})\) consisting of power polynomials in \(n\) variables such that \(\deg m_{i,j}-\deg m_{i+1,j}\) is independent of \(j\) for all \(i\). In other words, given such a matrix \(M\), one can apply Lemma 2.1 repeatedly to obtain an expression for \(\det M\) in terms of a simpler determinant. We now come to the qualitative study of symmetric polynomials over \(R\). It is important to note that Theorem 0.1 continues to hold: **Theorem 2.4** (Fundamental Theorem on Symmetric Polynomials).: Let \(R\) be a commutative ring with unity. Then \(R[e_{1},\ldots,e_{n}]\subset R[x_{1},\ldots,x_{n}]\) is equal to the ring of symmetric polynomials over \(R\), and the set \(\{e_{1},\ldots,e_{n}\}\) is algebraically independent. Proof.: Let \(S_{n}\) denote the symmetric group on \(n\) letters, which acts on the variables \(x_{1},\ldots,x_{n}\) by permutation. By [1, I.2.4], the theorem holds with \(R=\mathbb{Z}\). In particular, for the trivial action of \(S_{n}\) on \(R\), we have \[R[e_{1},\ldots,e_{n}]=R\otimes\mathbb{Z}[e_{1},\ldots,e_{n}]=R\otimes\mathbb{Z} [x_{1},\ldots,x_{n}]^{S_{n}}=R[x_{1},\ldots,x_{n}]^{S_{n}}.\] This implies the first part of the theorem. The algebraic independence follows from [1, I.2.3], which is stated for \(\mathbb{Z}\) but whose proof works over any ring \(R\). **Remark 2.5**.: The fact that \(\{e_{1},\ldots,e_{n}\}\) is algebraically independent over any ring \(R\) does not follow directly from the statement for \(R=\mathbb{Z}\) without using properties of the elementary symmetric polynomials. For instance, as shown in [1, Example 1.4], there are rings \(R\) and polynomials \(f_{1},\ldots,f_{m}\in\mathbb{Z}[x_{1},\ldots,x_{n}]\) such that \[\operatorname{trdeg}_{R}\{f_{1},\ldots,f_{m}\}<\operatorname{trdeg}_{\mathbb{ Z}}\{f_{1},\ldots,f_{m}\},\] even when none of the coefficients of the \(f_{i}\) are zero divisors in \(R\). Another example of this phenomenon is that for a prime number \(r\), \[\operatorname{trdeg}_{\mathbb{F}_{r}}\{p_{1},p_{r}\}=1\] for any number of variables \(n\). In order to state the main theorem over general rings \(R\), we fix the following notation. **Definition 2.6**.: Let \(R\) be a commutative ring with unity and let \(\{f_{i}\mid i\in I\}\) be a set of polynomials in \(R[x_{1},\ldots,x_{n}]\). Let \(S\) denote the multiplicative system of non-zero divisors in \(R[f_{i}\mid i\in I]\subset R[x_{1},\ldots,x_{n}]\). We denote by \[R(f_{i}\mid i\in I):=S^{-1}R[f_{i}\mid i\in I]=Q(R[f_{i}\mid i\in I])\] the total ring of fractions of \(R[f_{i}\mid i\in I]\). **Lemma 2.7**.: [1, Exercise 1.2] Let \(R\) be a commutative ring with unity, and let \(f\) be a zero divisor in \(R[x_{1},\dots,x_{n}]\). Then there exists a zero divisor \(r\in R\) such that \(rm=0\) for any monomial \(m\) in \(f\). **Theorem 2.8**.: Let \(R\) be any commutative unital ring not containing \(\mathbb{Q}\), and let \(r_{0}\geq 2\) be the smallest integer which is not invertible in \(R\). Fix the number of variables \(n\geq r_{0}\). Then \[R(e_{1},\dots,e_{n})=R(p_{1},\dots,p_{2n+1-r_{0}}).\] Proof.: The power polynomials are symmetric polynomials, so \(p_{k}\in R(e_{1},\dots,e_{n})\) for all \(k\) by Theorem 2.4. We prove the converse direction of the theorem by providing an algorithm to compute \(e_{1},\dots,e_{n}\) as rational functions in the power polynomials, and then prove that this algorithm always works. 
**Algorithm 2.9**.: Input: Power polynomials \(p_{1},p_{2},\dots,p_{2n+1-r_{0}}\). Output: Expressions for \(e_{1},e_{2},\dots,e_{n}\) as rational functions of power polynomials. 1. For \(k<r_{0}\), apply the usual Newton identities \[ke_{k}=\sum_{i=1}^{k}(-1)^{i-1}e_{k-i}p_{i}\] recursively to obtain expressions for \(e_{k}\) in terms of power polynomials. 2. Solve the system of equations \[\begin{pmatrix}p_{1}&-p_{2}&\cdots&(-1)^{n}p_{n+1}\\ -p_{2}&p_{3}&\cdots&(-1)^{n+1}p_{n+2}\\ \vdots&\vdots&\ddots&\vdots\\ (-1)^{n-r_{0}}p_{n+1-r_{0}}&\cdots&\cdots&(-1)^{k}p_{2n+1-r_{0}}\end{pmatrix} \begin{pmatrix}e_{n}\\ e_{n-1}\\ \vdots\\ e_{1}\\ 1\end{pmatrix}=\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ 0\end{pmatrix}\] (11) by reducing it to the form \[\begin{pmatrix}1&0&\cdots&0&c_{1,n+2-r_{0}}&\cdots&c_{1,n+1}\\ 0&1&\cdots&0&c_{2,n+2-r_{0}}&\cdots&c_{2,n+1}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&1&c_{n+1-r_{0},n+2-r_{0}}&\cdots&c_{n+1-r_{0},n+1}\end{pmatrix} \begin{pmatrix}e_{n}\\ e_{n-1}\\ \vdots\\ e_{1}\\ 1\end{pmatrix}=\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ 0\end{pmatrix}\] (12) by performing elementary row operations over \(R(p_{1},\dots,p_{2n+1-r_{0}})\). 3. For \(r_{0}\leq k\leq n\), this gives the expressions \[e_{k}=-\sum_{i=0}^{r_{0}-1}c_{n+1-k,n+1-i}e_{i}\] with each \(c_{a,b}\in R(p_{1},\dots,p_{2n+1-r_{0}})\). Step 2 of the algorithm needs to be justified. Since \(e_{N}=0\) for \(N>n\), the Newton identities imply that \[\sum_{i=N-n}^{N}(-1)^{i-1}e_{N-i}p_{i}=0 \tag{13}\] for all \(N>n\). Applying (13) with \(N=n+1,\ldots,2n+1-r_{0}\) yields the system of equations (11). The \((n+1-r_{0})\times(n+1)\) matrix in this equation is of the form \((\tilde{P}|*)\) for a square matrix \(\tilde{P}\). Multiplying each second row of \(\tilde{P}\) by \(-1\) and then multiplying each second column by \(-1\), we see that \[\det\tilde{P}=\det P_{n+1-r_{0},n},\] with notation from (9). This determinant is not a zero-divisor in \(R[x_{1},\ldots,x_{n}]\) by Proposition 2.2 and Lemma 2.7, since \(\det P_{d,n}\) for \(d\leq n\) always contains the monomial \(x_{1}x_{2}^{3}\ldots x_{d}^{2d-1}\) with coefficient \(1\in R^{\times}\). Hence the matrix in (11) can be reduced to the form \((I_{n+1-r_{0}}|*)\) by performing elementary row operations over \(R(p_{1},\ldots,p_{2n+1-r_{0}})\). This finishes the proof. **Remark 2.10**.: The reduction of the system (11) to the system (12) only requires division by \(\det P_{n+1-r_{0},n}\), which we know explicitly by Proposition 2.2. Thus, one only needs to invert algebraic expressions in the power polynomials \(p_{i}\) for \(1\leq i\leq 2n+1-2r_{0}\) (not \(2n+1-r_{0}\)) to obtain all elementary symmetric polynomials. Moreover, this observation gives a test as to whether one can compute \(e_{k}(\alpha_{1},\ldots,\alpha_{n})\) for \(k\notin R^{\times}\) given only the \(p_{i}(\alpha_{1},\ldots,\alpha_{n})\) for \(i\leq 2n+1-k\) and the \(e_{i}(\alpha_{1},\ldots,\alpha_{n})\) for \(i<k\): namely, it suffices that \[\det P_{n+1-k,n}(\alpha_{1},\ldots,\alpha_{n})\neq 0.\] Indeed, this is the denominator in the expression for \(e_{k}(\alpha_{1},\ldots,\alpha_{n})\) obtained from Algorithm 2.9. Since \(e_{k}\) is a polynomial, the denominator divides the numerator whenever the denominator is non-zero, so the denominator need only be non-zero (rather than a unit). 
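For concreteness, a small SymPy implementation of Algorithm 2.9 in the special case \(r_{0}=2\) might look as follows (our own illustrative script; the function name and the use of `sympy.solve` instead of explicit row reduction are implementation choices, not part of the algorithm's statement). Its output agrees with the expressions listed in the examples below.

```python
import sympy as sp

def elementary_from_powers(n):
    """Algorithm 2.9 with r0 = 2: return e_1, ..., e_n as rational
    functions of the power polynomials p_1, ..., p_{2n-1}."""
    p = sp.symbols(f"p1:{2 * n}")          # p[0] = p_1, ..., p[2n-2] = p_{2n-1}
    e = list(sp.symbols(f"e1:{n + 1}"))    # unknowns e_1, ..., e_n
    E = lambda k: sp.Integer(1) if k == 0 else e[k - 1]
    sol = {e[0]: p[0]}                     # step 1: e_1 = p_1 (Newton identity, k = 1)
    # Step 2: the identities (13) for N = n+1, ..., 2n-1 give the system (11).
    eqs = [sp.Eq(sum((-1)**(i - 1) * E(N - i) * p[i - 1]
                     for i in range(N - n, N + 1)), 0)
           for N in range(n + 1, 2 * n)]
    sol.update(sp.solve([eq.subs(sol) for eq in eqs], e[1:], dict=True)[0])
    return [sp.simplify(sol[ei]) for ei in e]

print(elementary_from_powers(2))   # [p1, (p1*p2 - p3)/p1]
```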
**Example 2.11**.: Implementing the algorithm from the proof, we obtain for \(r_{0}=2\): \[e_{2}(x_{1},x_{2}) =\frac{p_{1}p_{2}-p_{3}}{p_{1}};\] \[e_{2}(x_{1},x_{2},x_{3}) =\frac{p_{1}p_{2}p_{3}-p_{1}^{2}p_{4}-p_{2}p_{4}+p_{1}p_{5}}{p_{2 }^{2}-p_{1}p_{3}};\] \[e_{2}(x_{1},x_{2},x_{3},x_{4}) =\frac{p_{1}p_{3}^{2}p_{4}+p_{1}^{2}p_{4}p_{5}+p_{1}p_{2}^{2}p_{6 }+p_{2}p_{4}p_{5}+p_{2}p_{3}p_{6}+p_{1}p_{3}p_{7}}{p_{3}^{3}-p_{1}p_{3}p_{5}-2 p_{2}p_{3}p_{4}+p_{1}p_{4}^{2}+p_{2}^{2}p_{5}}\] \[\qquad-\frac{p_{1}p_{2}p_{4}^{2}+p_{1}p_{2}p_{3}p_{5}+p_{3}^{2}p_ {5}+p_{1}^{2}p_{3}p_{6}+p_{1}p_{4}p_{6}+p_{2}^{2}p_{7}}{p_{3}^{3}-p_{1}p_{3}p_{ 5}-2p_{2}p_{3}p_{4}+p_{1}p_{4}^{2}+p_{2}^{2}p_{5}}.\] Note that the expression for \(e_{2}(x_{1},x_{2})\) reduces to Equation (7) modulo 2, since \(p_{2}=p_{1}^{2}\) in characteristic 2. For \(r_{0}\in\{2,3\}\), we obtain \[e_{3}(x_{1},x_{2},x_{3}) =\frac{-p_{1}p_{3}+p_{2}e_{2}+p_{4}}{p_{1}}; \tag{14}\] \[e_{3}(x_{1},x_{2},x_{3},x_{4}) =\frac{p_{1}^{2}p_{5}-p_{1}p_{2}p_{4}-p_{1}e_{2}p_{4}-p_{1}p_{6}+ p_{2}e_{2}p_{3}+p_{2}p_{5}}{p_{2}^{2}-p_{1}p_{3}}.\] **Corollary 2.12**.: Let \(K\) be a field of characteristic \(r>0\), \(V\) an \(n\)-dimensional \(K\)-vector space, and \(T\in\operatorname{End}_{K}(V)\) a linear operator on \(V\). Suppose one knows \(\operatorname{Tr}(T^{d})\) for all \(1\leq d\leq 2n+1-r\). If the determinant \[\det\begin{pmatrix}\operatorname{Tr}(T)&\operatorname{Tr}(T^{2})&\cdots& \operatorname{Tr}(T^{n+1-r})\\ \operatorname{Tr}(T^{2})&\operatorname{Tr}(T^{3})&\cdots&\operatorname{Tr}(T ^{n+2-r})\\ \vdots&\vdots&\ddots&\vdots\\ \operatorname{Tr}(T^{n+1-r})&\operatorname{Tr}(T^{n+2-r})&\cdots& \operatorname{Tr}(T^{2n+1-2r})\end{pmatrix} \tag{15}\] is non-zero, then one can compute the characteristic polynomial of \(T\). Proof.: If \(v\) is an eigenvector for \(T\) with eigenvalue \(\lambda\), then \(v\) is an eigenvector for \(T^{d}\) with eigenvalue \(\lambda^{d}\). Hence \[\operatorname{Tr}(T^{d})=\sum_{i=1}^{n}\lambda_{i}^{d}=p_{d}(\lambda_{1}, \ldots,\lambda_{n}),\] where \(\lambda_{1},\ldots,\lambda_{n}\in\overline{K}\) are the eigenvalues of \(T\). On the other hand, the characteristic polynomial of \(T\) is given by \[\operatorname{ch}_{T}(X)=\prod_{i=1}^{n}(X-\lambda_{i})=\sum_{k=0}^{n}(-1)^{k} e_{k}(\lambda_{1},\ldots,\lambda_{n})X^{n-k}.\] Algorithm 2.9 allows one to compute each \(e_{k}\) as a rational function in \(p_{1},p_{2},\ldots,p_{2n+1-r}\). This rational function can be evaluated at \((\lambda_{1},\ldots,\lambda_{n})\) if the denominator of the rational function is non-zero at this point, which happens if and only if (15) is non-zero (cf. Remark 2.10). **Example 2.13**.: Due to removable poles, it may be possible to compute the characteristic polynomial of \(T\) even if the determinant (15) is zero, as the following example shows. Let \(K=\mathbb{F}_{3}\) and suppose \(T\) acts on the \(3\)-dimensional vector space \(V\) with the traces \(\operatorname{Tr}(T)=0,\operatorname{Tr}(T^{2})=-1,\operatorname{Tr}(T^{3})= 0,\operatorname{Tr}(T^{4})=-1\). 
Since \(2\in K^{\times}\), we can use Newton's identities (3) to obtain \[e_{2}(\lambda_{1},\lambda_{2},\lambda_{3})=2^{-1}(\operatorname{Tr}(T)^{2}- \operatorname{Tr}(T^{2}))=-1.\] By Equation (14), this gives \[e_{3}(\lambda_{1},\lambda_{2},\lambda_{3})=\frac{-\operatorname{Tr}(T) \operatorname{Tr}(T^{3})+\operatorname{Tr}(T^{2})e_{2}+\operatorname{Tr}(T^{4} )}{\operatorname{Tr}(T)}=\frac{-\operatorname{Tr}(T)\operatorname{Tr}(T^{3} )}{\operatorname{Tr}(T)}=-\operatorname{Tr}(T^{3})=0.\] Hence the characteristic polynomial of \(T\) is \(\operatorname{ch}_{T}(X)=X^{3}-X\). **Remark 2.14**.: The complete homogeneous symmetric polynomials are defined by \[h_{k}(x_{1},\ldots,x_{n})=\sum_{1\leq i_{1}\leq i_{2}\leq\ldots\leq i_{k}\leq n }x_{i_{1}}x_{i_{2}}\ldots x_{i_{k}}.\] One can express the power polynomials in terms of the complete homogeneous symmetric polynomials without problems, but going the other way, there are again denominators. Fortunately, the fact that \(\mathbb{Z}[e_{1},e_{2},\ldots,e_{n}]=\mathbb{Z}[h_{1},h_{2},\ldots,h_{n}]\)[12, I.2.8] in conjunction with Theorem 2.8 allows one to compute the \(h_{k}\) in terms of the power polynomials over any ring. ### The case of \(\mathbb{F}_{r}\)-algebras Fix a prime number \(r\), and suppose from now on that \(R\) is a commutative \(\mathbb{F}_{r}\)-algebra. We describe an algorithm to compute \(e_{1},\ldots,e_{n}\) as rational functions in the power polynomials when \(n\geq r\). The algorithm is similar to Algorithm 2.9, but takes fewer power polynomials as input. The fact that this is possible is due to Schonhage [13], who considered the case where \(R\) is a field of characteristic \(r\). We give a new method to achieve Schonhage's result in the spirit of Algorithm 2.9. We hope that this makes Schonhage's result more easily available to the mathematical community. It is moreover useful to compare this section with the previous one to see what improvements can be made when one has more information about the ring \(R\). Let \(\ell:=n+\lfloor(n-1)/(r-1)\rfloor\). As proved in [13, Theorem 2], the elementary symmetric polynomials are elements of \(R(p_{1},p_{2},\ldots,p_{\ell})\). Note that this is the best possible result: since \(p_{rk}=p_{k}^{r}\) for each \(k\geq 1\), the transcendence degree of \(\{p_{1},p_{2},\ldots,p_{m}\}\) is at most \(m-\lfloor m/r\rfloor\), and \(\ell\) is the least integer such that this quantity equals \(n\). Since \(\ell\leq 2n+1-r\), this gives simpler expressions for the elementary symmetric polynomials than Theorem 2.8. **Algorithm 2.15**.: Input: Power polynomials \(p_{1},p_{2},\ldots,p_{\ell}\). Output: Expressions for \(e_{1},e_{2},\ldots,e_{n}\) as rational functions of power polynomials. 1. Recursively compute \(e_{1},\ldots,e_{r-1}\) via the Newton identities (3). 2. Let \(\bar{e}=(e_{n},e_{n-1},\ldots,e_{1},1)^{t}\). Consider the Newton identities \[ke_{k}-p_{1}e_{k-1}+p_{2}e_{k-2}-\ldots+(-1)^{k}p_{k}=0\] (16) for all \(k\) such that \(r<k\leq\ell\) and \((k,r)=1\). Let \(M\) be the \((n-r+1)\times(n+1)\) matrix whose rows consist of the coefficients of (16) for these values of \(k\), so that \(M\bar{e}=0\). 3. Diagonalise the leftmost \((n-r+1)\times(n-r+1)\)-block of \(M\) to express \(e_{r},\ldots,e_{n}\) in terms of power polynomials. **Example 2.16**.: Consider the case \(r=3,n=7\). Then \(\ell=10\), meaning we need to use the Newton identities for \(k\in\{4,5,7,8,10\}\). 
We obtain the matrix equation \[\begin{pmatrix}0&0&0&1&-p_{1}&p_{2}&-p_{3}&p_{4}\\ 0&0&2&-p_{1}&p_{2}&-p_{3}&p_{4}&-p_{5}\\ 1&-p_{1}&p_{2}&-p_{3}&p_{4}&-p_{5}&p_{6}&-p_{7}\\ -p_{1}&p_{2}&-p_{3}&p_{4}&-p_{5}&p_{6}&-p_{7}&p_{8}\\ -p_{3}&p_{4}&-p_{5}&p_{6}&-p_{7}&p_{8}&-p_{9}&p_{10}\end{pmatrix}\begin{pmatrix} e_{7}\\ e_{6}\\ e_{5}\\ e_{4}\\ e_{3}\\ e_{2}\\ e_{1}\\ 1\end{pmatrix}=\begin{pmatrix}0\\ 0\\ 0\\ 0\\ 0\end{pmatrix}\] The leftmost \(5\times 5\)-block in the above \(5\times 8\) matrix is invertible. Reducing it to the identity matrix via elementary row operations expresses each \(e_{3},\ldots,e_{7}\) as rational functions of power polynomials and \(e_{2}\), which can be obtained from the usual Newton identity. **Example 2.17**.: For \(r=3\) and \(n=4\), we have \(2n+1-r=6\) and \(\ell=n+\lfloor(n-1)/(r-1)\rfloor=5\). Algorithm 2.15 applied in this setting gives the expression \[e_{3}(x_{1},x_{2},x_{3},x_{4})=\frac{p_{1}^{2}p_{3}-p_{1}p_{2}e_{2}+p_{1}p_{4 }+p_{3}e_{2}+p_{5}}{p_{2}-p_{1}^{2}},\] which is simpler than the expression (14) obtained from Algorithm 2.9. **Remark 2.18**.: The system of equations in Algorithm 2.15 was also considered by Schonhage, who made the following comment [10, p. 412]: "_...it is hard to see how this redundant system should be solved or how to prove its solvability._" He then proves that \(K(e_{1},\ldots,e_{n})=K(p_{1},\ldots,p_{\ell})\) in a different way. A posteriori, it follows that the system in question was solvable. The fact that it is solvable in the way described by the algorithm can be seen from Proposition 1.3: the determinant of the leftmost \((n-r+1)\times(n-r+1)\) block has a non-zero summand in \(p_{\ell-r}K[p_{1},\ldots,p_{\ell-r}]\), while its other summands lie in \(K[p_{1},\ldots,p_{\ell-r-1}]\); this can be seen from the last row expansion. Since \(r\nmid\ell\), this implies that the determinant is non-zero, and one can show in a similar way that the determinant contains a monomial with coefficient a unit. Note also that the denominators in the expressions of the \(e_{i}\) lie in \(K[p_{1},\ldots,p_{\ell-r}]\), although we have no explicit description of the denominators like the one for Algorithm 2.9. One can use Algorithm 2.15 to give a variant of Corollary 2.12: given \(\operatorname{Tr}(T^{d})\) for all \(1\leq d\leq\ell\) such that the determinant of the leftmost \((n-r+1)\times(n-r+1)\)-block in the matrix \(M\) of Algorithm 2.15 is non-zero, one can compute the characteristic polynomial of \(T\).
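As a small numerical sanity check of Corollary 2.12 and Example 2.13 (our own script, not part of the paper), one may pick a concrete operator over \(\mathbb{F}_{3}\) realising the traces of Example 2.13 — the diagonal matrix below is one hypothetical choice — and recover its characteristic polynomial from the formulas above.

```python
import numpy as np

# A concrete operator on a 3-dimensional space over F_3 with
# Tr(T) = 0, Tr(T^2) = -1, Tr(T^3) = 0, Tr(T^4) = -1 (all mod 3).
T = np.diag([0, 1, 2])                       # eigenvalues 0, 1, 2 = -1 (mod 3)

def tr_pow(M, k, r=3):
    return int(np.trace(np.linalg.matrix_power(M, k))) % r

t1, t2, t3, t4 = (tr_pow(T, k) for k in (1, 2, 3, 4))
print(t1, t2, t3, t4)                        # 0 2 0 2, i.e. 0, -1, 0, -1 mod 3

e1 = t1 % 3                                  # e_1 = Tr(T)
e2 = (pow(2, -1, 3) * (t1**2 - t2)) % 3      # usual Newton identity; 2 is invertible mod 3
e3 = (-t3) % 3                               # the cancellation of Example 2.13: e_3 = -Tr(T^3)
print(e1, e2, e3)                            # 0 2 0, so ch_T(X) = X^3 + 2X = X^3 - X over F_3
```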
2309.11101
A New Interpretable Neural Network-Based Rule Model for Healthcare Decision Making
In healthcare applications, understanding how machine/deep learning models make decisions is crucial. In this study, we introduce a neural network framework, $\textit{Truth Table rules}$ (TT-rules), that combines the global and exact interpretability properties of rule-based models with the high performance of deep neural networks. TT-rules is built upon $\textit{Truth Table nets}$ (TTnet), a family of deep neural networks initially developed for formal verification. By extracting the necessary and sufficient rules $\mathcal{R}$ from the trained TTnet model (global interpretability) to yield the same output as the TTnet (exact interpretability), TT-rules effectively transforms the neural network into a rule-based model. This rule-based model supports binary classification, multi-label classification, and regression tasks for small to large tabular datasets. After outlining the framework, we evaluate TT-rules' performance on healthcare applications and compare it to state-of-the-art rule-based methods. Our results demonstrate that TT-rules achieves equal or higher performance compared to other interpretable methods. Notably, TT-rules presents the first accurate rule-based model capable of fitting large tabular datasets, including two real-life DNA datasets with over 20K features.
Adrien Benamira, Tristan Guerand, Thomas Peyrin
2023-09-20T07:15:48Z
http://arxiv.org/abs/2309.11101v1
# A New Interpretable Neural Network-Based Rule Model for Healthcare Decision Making ###### Abstract In healthcare applications, understanding how machine/deep learning models make decisions is crucial. In this study, we introduce a neural network framework, _Truth Table rules_ (TT-rules), that combines the global and exact interpretability properties of rule-based models with the high performance of deep neural networks. TT-rules is built upon _Truth Table nets_ (TTnet), a family of deep neural networks initially developed for formal verification. By extracting the necessary and sufficient rules \(\mathcal{R}\) from the trained TTnet model (global interpretability) to yield the same output as the TTnet (exact interpretability), TT-rules effectively transforms the neural network into a rule-based model. This rule-based model supports binary classification, multi-label classification, and regression tasks for small to large tabular datasets. After outlining the framework, we evaluate TT-rules' performance on healthcare applications and compare it to state-of-the-art rule-based methods. Our results demonstrate that TT-rules achieves equal or higher performance compared to other interpretable methods. Notably, TT-rules presents the first accurate rule-based model capable of fitting large tabular datasets, including two real-life DNA datasets with over 20K features. ## 1 Related Work Traditional rule-based models, such as decision trees [5], rule lists [20, 3, 9], linear models, and rule sets [13, 7, 8, 18, 24], are commonly used for interpretable classification and regression tasks. However, these models face limitations in handling large datasets, binary classification tasks, and capturing complex feature relationships, which can result in reduced accuracy and limited practicality [25, 23]. To overcome these challenges, recent work by Benamira _et al._ introduced an architecture encoded into CNF formulas, demonstrating scalability on large datasets [4, 6]. Our objective is to extend this approach to handle diverse classification tasks and regression on tabular datasets of varying feature dimensions. There have been investigations into the connection between deep neural networks (DNNs) and rule-based models. Notable works include DNF-net [1], which focuses on the activation function, and RRL [23], which addresses classification tasks but raises concerns about interpretability due to its complexity and time-consuming training process. Another architecture, Neural Additive Models (NAMs) [2], combines the flexibility of DNNs with the interpretability of additive models but deviates from the strict rule-based model paradigm, posing challenges in interpretation, especially with a large number of features. ## 2 Methodology This paper introduces a novel neural network framework that effectively combines the interpretability of rule-based models with the high performance of DNNs. Our framework, called TT-rules, builds upon the advancements made by Benamira _et al._[4] Benamira _et al._[4] introduced a new Convolutional Neural Network (CNN) filter function called the Learning Truth Table (LTT) block. The LTT block has the unique property of its complete distribution being computable in constant and practical time, regardless of the architecture. This allows the transformation of the LTT block from weights into an exact mathematical Boolean formula. 
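As a rough illustration of the truth-table idea (not the authors' implementation), take a filter acting on a small number \(k\) of binary inputs: all \(2^{k}\) input patterns can be enumerated, the binarised response recorded, and the table rewritten as a Boolean (DNF) formula. A minimal sketch, where the generic `filter_fn` and the sign-based binarisation are assumptions made only for the example:

```python
from itertools import product

def truth_table_rule(filter_fn, k):
    """Enumerate all 2**k binary inputs of a small filter and return the DNF
    of the input patterns on which its binarised output fires."""
    terms = []
    for bits in product([0, 1], repeat=k):
        if filter_fn(bits) > 0:                       # binarisation by sign
            lits = [f"x{i + 1}" if b else f"NOT x{i + 1}" for i, b in enumerate(bits)]
            terms.append("(" + " AND ".join(lits) + ")")
    return " OR ".join(terms) if terms else "FALSE"

# Toy 3-input "filter" with weights (1, -2, 1) and bias -0.5:
print(truth_table_rule(lambda b: b[0] - 2 * b[1] + b[2] - 0.5, 3))
# fires exactly when x2 = 0 and at least one of x1, x3 is 1
```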
Since an LTT block is equivalent to a CNN filter, **the entire neural network model, known as Truth Table net (TTnet), can itself be represented as a Boolean formula.** We then optimize our formula set \(\mathcal{R}\) in two steps. We automatically integrate human logic into the truth tables. This reduces the size of each rule in the set \(\mathcal{R}\). Then we analyze the correlation to decrease the number of rules in \(\mathcal{R}\). These optimizations, specific to the TT-rules framework, automatically and efficiently transform the set \(\mathcal{R}\) into an optimized set in constant time. To enhance the interpretability of the model, we convert all boolean formulas into Reduced Ordered Binary Decision Diagrams. An example is given Figure 1. ## 3 Experiments ### Datasets We utilized a variety of healthcare datasets for our study, including the Diabetes 130 US-Hospitals dataset for multi-classification1[10], two single-cell RNA-seq analysis datasets (head and neck cancer2[17] and melanoma3[21]), the Breast Cancer Wisconsin (Original) dataset4, and the TCGA lung cancer dataset for regression5[15]. Footnote 1: [https://bit.ly/diabetes_130_uci](https://bit.ly/diabetes_130_uci) Footnote 2: [https://bit.ly/neck_head_rna](https://bit.ly/neck_head_rna) Footnote 3: [https://bit.ly/meloma_rna](https://bit.ly/meloma_rna) Footnote 4: [https://archive.ics.uci.edu/dataset/15/breast+cancer+wisconsin+original](https://archive.ics.uci.edu/dataset/15/breast+cancer+wisconsin+original) Footnote 5: [https://bit.ly/tcga_lung_rna](https://bit.ly/tcga_lung_rna) Our TT-rules framework's scalability is demonstrated using two DNA datasets. These include single-cell RNA-seq analysis datasets for head and neck cancer, melanoma cancer [17, 21], and the TCGA lung cancer dataset [15]. These datasets contain 23689 and 20530 features, respectively, and are commonly used in real-life machine learning applications [14, 11, 19, 22]. In the melanoma cancer setup, we trained on the head and neck dataset [17] and tested on the melanoma dataset [21] following established literature [14, 11, 19, 22]. ### Performance Comparison Table 1 presents a comparison of various rule-based models, including ours, on the datasets introduced before, in terms of RMSE, AUC and Accuracy. Our proposed model outperforms the others in terms of accuracy on the Diabetes dataset and on the Breast Cancer dataset. XGBoost and DNNs performs better on Diabetes but worse on bigger datasets as shown in the next section. Although GL provides a better tradeoff between performance and complexity, we highlight that GL does not support multi-class classification tasks and is not scalable for larger datasets such as DNA datasets, as shown in the next section. ### Scalability Comparison Our TT-rules framework demonstrated excellent scalability to real-life datasets with up to 20K features. This result is not surprising, considering the original TTnet paper [4] showed the architecture's ability to scale to ImageNet. Furthermore, our framework's superiority was demonstrated by outperforming other rule-based models that failed to converge to such large datasets (GL [24]). Regarding performance, the TT-rules framework outperforms all other methods. Our approach not only scales but also reduces the input feature set, acting as a feature selection method. We generated a set of 1064 rules out of 20530 features for the regression problem, corresponding to a drastic reduction in complexity. 
For the binary classification dataset, we generated 9472 rules, which more than halved the input size, from 23689 features down to 9472 rules. ## 4 Conclusion In conclusion, our proposed TT-rules framework provides a new and optimized approach for achieving global and exact interpretability in regression and classification tasks. With its ability to scale to large datasets and its potential for feature reduction, the TT-rules framework appears as a valuable tool towards explainable artificial intelligence for healthcare applications. \begin{table} \begin{tabular}{l|c|c c|c} \hline \hline & **Regression (RMSE)** & \multicolumn{2}{c|}{**Binary classification (AUC)**} & **Multi-classification (Accuracy)** \\ \hline & TCGA Cancer & Melanoma & Breast Cancer & Diabetes \\ \# continuous/binary features & 0/20530 & 0/23689 & 0/81 & 43/296 \\ \hline Linear/log & 0.092 & 0.833 & 0.985 & 0.581 \\ DT & - & - & 0.908 & 0.572 \\ GL & - & - & 0.984 & - \\ TT-rules (Ours) & 0.029 & 0.835 & 0.986 & 0.584 \\ \hline Random Forest & 0.42 & 0.729 & 0.950 & 0.587 \\ DNNs & 0.028 & 0.725 & 0.982 & 0.603 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of our method against Linear/Logistic Regression [16], Decision Trees (DT) [16], GL [24], Random Forest [12] and DNNs. Means and standard deviations are reported from 5-fold cross validation. Our TT-rules models were trained with a final linear regression with floating-point weights for the Breast Cancer and Diabetes datasets for better performance. The two other datasets were trained with a sparse binary linear regression to reduce the number of final features. Lower RMSE is better; higher AUC/Accuracy is better. Figure 1: Our neural network model trained on the Breast Cancer dataset in the form of Boolean decision trees: the output of the DNN and the output of these decision trees are the same, reaching 99.30% AUC. On the same test set, Random Forest reaches 95.08% AUC, Decision Tree 90.36% AUC and XGBoost 97.79% AUC. Each rule \(r_{i}\) is a function \(r_{i}:\{0,1\}^{n}\mapsto\{-1,0,1\}\), i.e., for each data sample \(I\) we associate with each rule \(r_{i}\) a score in \(\{-1,0,1\}\). The prediction of our classifier is then as stated above. As our model has 24 rules, we have only reported two positive rules out of the 24 to provide an example of the type of rules obtained.
2309.14079
Verification of Equivalence of Operator Bases in Unitarity Analysis
Equivalence theorem is a cornerstone of Standard Model Effective Field Theory, ensuring the equality of the S-matrix and reducing redundant operators. The Warsaw basis and the SILH basis of dimension-6 operators have been shown to be equivalent and connected through equations of motion and field redefinitions, but how to connect the two operator bases in interpreting data is not clear. Through a unitarity analysis we verify that the two operator bases are indeed equivalent, but achieved in a non-trivial way.
Qing-Hong Cao, Yandong Liu, Shu-Run Yuan
2023-09-25T12:14:04Z
http://arxiv.org/abs/2309.14079v1
# Verification of Equivalence of Operator Bases in Unitarity Analysis ###### Abstract Equivalence theorem is a cornerstone of Standard Model Effective Field Theory, ensuring the equality of the S-matrix and reducing redundant operators. The Warsaw basis and the SILH basis of dimension-6 operators have been shown to be equivalent and connected through equations of motion and field redefinitions, but how to connect the two operator bases in interpreting data is not clear. Through a unitarity analysis we verify that the two operator bases are indeed equivalent, but achieved in a non-trivial way. The Standard Model Effective Field Theory (SMEFT) is a theoretical framework utilized to explore the new physics (NP) beyond Standard Model (SM), assuming that no hidden light state exists in the spectrum with couplings to the SM, and only a SM Higgs boson triggers electroweak symmetry breaking. In the SMEFT, high dimension operators are constructed from SM fields and are expressed as \(\mathcal{C}_{j}^{(n)}\mathcal{O}_{j}/\Lambda^{n}\), where the Wilson coefficients \(\mathcal{C}_{j}^{(n)}\) and the cutoff scale \(\Lambda\) describe the strength of the new physics operator \(\mathcal{O}_{j}\). The Equivalence Theorem, derived from the equation of motion (EoM) or field redefinition [1; 2; 3; 4], serves to diminish redundancy among operators while ensuring the S-matrix identity remains intact with varying operator sets. For example, in the context of lepton number conservation, there are only 59 independent dimension-6 (dim-6) operators as defined by the Hilbert series [5; 6]. A set of 59 independent operators is named as a operator basis, which is used to describe quantum effects of new physics resonances. The selection of independent operators is flexible, and one ends up with many operator bases. For example, the Warsaw basis [7] and the SILH basis [8; 9] are two commonly used operator bases at the dimension-6. Different operator bases are interconverted through the equations of motion (EoMs) [1; 2; 3; 4], which yield identities among operators. As inserting those identities into any scattering process of initial state \(|i\rangle\) and final state \(|f\rangle\) does not affect the S-matrix, one concludes that the operator bases are equivalent. However, when one derives the parameter space of Wilson coefficients of operators by fitting the experimental data, the EoM identities do not give the conversion relationship between the global fitting results of different operator bases. The premise of equivalence can only be discussed meaningfully under operator base selection or complete operator set for all relevant processes. In this work, we will conduct a study on the unitarity analysis of 20 operators and demonstrate that the unitarity bounds obtained in the Warsaw basis [7] and the SILH basis [8; 9] are indeed equivalent, but achieved in a non-trivial way. The test of the equivalence theorem necessitates substantial experimental data of high precision across a multitude of processes, which is currently beyond the capabilities of Large Hadron Collider (LHC). We resort to employing "theoretical data" obtained from the unitarity constraint on operators, which is defined by the maximum permissible parameter space for Wilson coefficients. 
For consistency, it is necessary to satisfy the unitarity condition within the SMEFT, where the combination of \(\mathcal{C}_{j}s/\Lambda^{2}\) should remain reasonably small, under the condition that the effective field theory holds, i.e., \(\sqrt{s}\leq\Lambda\) and then it reduces the unitarity bounds on \(\mathcal{C}_{j}\). Unitarity bounds have been previously explored numerically in existing literature, including in references such as [10; 11; 12; 13; 14; 15; 16; 17; 18]. In this study, we analytically examine the unitarity bounds of SMEFT of scattering processes of \(VV\to VV\) and \(f\bar{f}\to VV\) within the Warsaw basis and the SILH basis to ascertain how the equivalence principle applies in the context of unitarity analysis, where \(V\) and \(f\) denotes \(W^{\pm}/Z/\gamma/h\) and \(e^{-}/\nu/u/d\), respectively. We perform a coupled channel analysis of scattering processes of \(VV\to VV\) and \(f\bar{f}\to VV\) and derive unitarity bounds on the operators. We first adopt the standard Warsaw basis [7] and consider only CP-even operators listed in Table 1. Note that the four-fermion operators are not considered as they are relatively independent of other operators, meaning that cancellation is rare, and the processes of \(ff\to ff\) induced by four-fermion operators are loosely related with SM electroweak sector. In the fermion sector, the operator being generation independent is understood, e.g., \(\mathcal{O}_{\varphi e}(1,1)=\mathcal{O}_{\varphi e}(2,2)=\mathcal{O}_{\varphi e }(3,3)\) and \(\mathcal{O}_{\varphi e}(m,n)=0\) for \(m\neq n\). For the SILH basis [19], six operators are added in the Warsaw basis with six operators eliminated, \[(\mathcal{O}_{W}^{\prime},\mathcal{O}_{B},\mathcal{O}_{2W}, \mathcal{O}_{2B},\mathcal{O}_{W}^{\text{SILH}},\mathcal{O}_{B}^{\text{SILH}})\] \[\leftrightarrow (\mathcal{O}_{\varphi W},\mathcal{O}_{\varphi WB},\mathcal{O}_{ \varphi 0},\mathcal{O}_{\varphi D},\mathcal{O}_{\varphi l}^{(1)},\mathcal{O}_{ \varphi l}^{(3)}), \tag{1}\] where \[\mathcal{O}^{\prime}_{W}\equiv\left(D_{\mu}\varphi\right)^{\dagger} \left(ig\frac{\tau^{I}}{2}W^{I;\mu\nu}\right)\left(D_{\nu}\varphi\right),\] \[\mathcal{O}_{B}\equiv\left(D_{\mu}\varphi\right)^{\dagger}\left( i\frac{g^{\prime}}{2}B^{\mu\nu}\right)\left(D_{\nu}\varphi\right),\] \[\mathcal{O}_{2W}\equiv-\frac{1}{2}(D^{\mu}W^{I}_{\mu\nu})^{2} \sim\mathcal{O}_{DW},\] \[\mathcal{O}_{2B}\equiv-\frac{1}{2}(\partial^{\mu}B_{\mu\nu})^{2} \sim\mathcal{O}_{DB}\] \[\mathcal{O}_{W}^{\text{SILH}}\equiv\frac{ig}{2}(\varphi^{\dagger} \tau^{I}\overleftrightarrow{D}^{\mu}\varphi)(D^{\nu}W^{I}_{\mu\nu}),\] \[\mathcal{O}_{B}^{\text{SILH}}\equiv\frac{ig^{\prime}}{2}(\varphi^ {\dagger}\overleftrightarrow{D}^{\mu}\varphi)(\partial^{\nu}B_{\mu\nu}). 
\tag{2}\] The two bases are connected by the EoMs, which between the SILH basis and the Warsaw basis read as \[2\mathcal{O}_{W}-\frac{1}{4}g^{2}\mathcal{O}_{\varphi W}-\frac{ 1}{4}g^{\prime}g\mathcal{O}_{\varphi WB}+\frac{3}{4}g^{2}\mathcal{O}_{\varphi \Box} \tag{3}\] \[= -\frac{g^{2}}{4}\left[\mathcal{O}_{\varphi l}^{(3)}+\mathcal{O}_{ \varphi q}^{(3)}\right]+\boxed{E}\] \[2\mathcal{O}_{B}-\frac{1}{4}g^{\prime 2}\mathcal{O}_{\varphi B}- \frac{1}{4}g^{\prime}g\mathcal{O}_{\varphi WB}+g^{\prime 2}\left[\mathcal{O}_{ \varphi D}+\frac{1}{4}\mathcal{O}_{\varphi\Box}\right]\] \[= -\frac{g^{\prime 2}}{2}\left[-\frac{1}{2}\mathcal{O}_{\varphi l}^{(1) }+\frac{1}{6}\mathcal{O}_{\varphi q}^{(1)}-\mathcal{O}_{\varphi ee}+\frac{2}{ 3}\mathcal{O}_{\varphi u}-\frac{1}{3}\mathcal{O}_{\varphi d}\right]+\boxed{E}\] \[\mathcal{O}_{2W}+\frac{3g^{2}}{8}\mathcal{O}_{\varphi\Box}=- \frac{g^{2}}{4}\left[\mathcal{O}_{\varphi l}^{(3)}+\mathcal{O}_{\varphi q}^{ (3)}\right]+\boxed{E}\] \[\mathcal{O}_{2B}+\frac{g^{\prime 2}}{2}\left[\frac{1}{4}\mathcal{O}_{ \varphi\Box}+\mathcal{O}_{\varphi D}\right]\] \[= -\frac{g^{\prime 2}}{2}\left[-\frac{1}{2}\mathcal{O}_{\varphi l}^{(1) }+\frac{1}{6}\mathcal{O}_{\varphi q}^{(1)}-\mathcal{O}_{\varphi ee}+\frac{2}{ 3}\mathcal{O}_{\varphi u}-\frac{1}{3}\mathcal{O}_{\varphi d}\right]+\boxed{E}\] \[\mathcal{O}_{W}^{\text{SILH}}+\frac{3g^{2}}{4}\mathcal{O}_{\varphi \Box}=-\frac{g^{2}}{4}\left[\mathcal{O}_{\varphi l}^{(3)}+\mathcal{O}_{ \varphi q}^{(3)}\right]+\boxed{E}\] \[\mathcal{O}_{B}^{\text{SILH}}+g^{\prime 2}\left[\frac{1}{4} \mathcal{O}_{\varphi\Box}+\mathcal{O}_{\varphi D}\right]\] \[= -\frac{g^{\prime 2}}{2}\left[-\frac{1}{2}\mathcal{O}_{\varphi l}^{(1) }+\frac{1}{6}\mathcal{O}_{\varphi q}^{(1)}-\mathcal{O}_{\varphi ee}+\frac{2}{ 3}\mathcal{O}_{\varphi u}-\frac{1}{3}\mathcal{O}_{\varphi d}\right]+\boxed{E}\] where \(\boxed{E}\) stands for those operators that have null contribution to the \(VV\to VV\) or \(f\bar{f}\to VV\) scattering amplitudes up to order of \(O(s)\), e.g. \(\mathcal{O}_{e\varphi}=(\varphi^{\dagger}\varphi)(\bar{l}e\varphi)\), or those four-fermion operators. The transformation rules of helicity amplitudes between the Warsaw basis and the SILH basis are derived in Appendix A. The partial transformation rule reads as \[\mathcal{C}_{i}^{\text{A}}\xrightarrow{\text{Amplitude}}\mathcal{C}_{i}^{ \text{B}}+\mathcal{C}_{j}^{\text{B}}, \tag{4}\] and it directly translates the helicity amplitudes in the operator basis A into those in the operator basis B. After decomposing the scattering helicity amplitudes into partial wave amplitudes, we derive the marginalized unitarity constraint with coupled channel analysis for each individual Wilson coefficient. For the \(V_{1}V_{2}\to V_{3}V_{4}\) processes, the partial wave unitarity condition [17; 18] for a diagonalized coupled channel matrix [20] is given by \[\left|\text{Re}(a^{J}(V^{\prime}_{i}(\lambda_{i})V^{\prime}_{j}(\lambda_{j}) \to V^{\prime}_{i}(\lambda_{i})V^{\prime}_{j}(\lambda_{j}))\right|\leq 1, \tag{5}\] where \(V^{\prime}_{i}(\lambda_{i})=\sum_{a,\lambda_{a}}y^{i,\lambda_{i}}_{a,\lambda_{a }}V_{a}(\lambda_{a})\) is a linear combination of states that diagonalize the field space under the same quantum number (\(Q,J\)) (total charge and total angular momentum), and \(a^{J}\) is the partial wave amplitude. 
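As a concrete illustration of how such partial-wave bounds arise, the sketch below projects a toy \(2\to 2\) contact amplitude growing as \(\mathcal{C}\,s/\Lambda^{2}\) onto the \(J=0\) partial wave and solves \(|\mathrm{Re}\,a^{0}|=1\) at \(\sqrt{s}=\Lambda\). The normalization \(a^{J}=\frac{1}{32\pi}\int_{-1}^{1}\mathrm{d}\cos\theta\,P_{J}(\cos\theta)\,\mathcal{M}\) is one common convention and is an assumption here; the actual analysis must include helicity wave functions, identical-particle factors and all coupled channels, so the numbers below are only illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Toy 2->2 amplitude from a dim-6 contact interaction: M(s, cos_theta) = C * s / Lambda^2.
# (Assumed toy model, not one of the actual helicity amplitudes of the paper.)
def amplitude(c, s, lam, cos_theta):
    return c * s / lam**2 + 0.0 * cos_theta   # flat in cos(theta) -> pure J = 0

def partial_wave(c, s, lam, J=0, n_nodes=40):
    """a^J = 1/(32*pi) * Int_{-1}^{1} d cos(theta) P_J(cos_theta) M  (assumed normalization)."""
    x, w = leggauss(n_nodes)                                # Gauss-Legendre nodes/weights on [-1, 1]
    pj = np.polynomial.legendre.Legendre.basis(J)(x)        # Legendre polynomial P_J
    return np.sum(w * pj * amplitude(c, s, lam, x)) / (32.0 * np.pi)

lam = 1.0                     # cutoff scale (units irrelevant for the ratio)
s = lam**2                    # evaluate at the highest allowed energy sqrt(s) = Lambda
# |Re a^0| <= 1  =>  C * s / (16*pi*Lambda^2) <= 1  =>  C_max = 16*pi at s = Lambda^2
c_max = 16.0 * np.pi
print("a^0 at C = C_max:", partial_wave(c_max, s, lam))     # ~1.0 by construction
```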
For the \(f_{1}\bar{f}_{2}\to V_{3}V_{4}\) processes, we have the constraint, for each \(J\), \[\left|a^{J}_{V^{\prime}_{j}V^{\prime}_{i}x^{i}}\right|\leq\frac{1}{2}, \tag{6}\] where \(x^{i}\) represents linearly recombined states of fermions, i.e., \(\left|x^{i}\right\rangle=\sum_{f\sigma}x^{i}_{f\sigma}\left|f(\sigma)\bar{f}(- \sigma)\right\rangle\), with \(x^{i}_{f\sigma}\) being the unitary transformation matrix element (\(i\) is the index in the diagonalized field space), and \(\lambda_{a}(=\pm 1,0),\sigma_{a}(=\pm 1)\) are the helicities of the vector bosons and fermions, respectively. This condition requires the eigenvalues of the coupled-channel matrix to be less than \(1/2\). Note that the coupled-channel matrix is not square for \(f\bar{f}\to VV\) processes, so we use a singular value decomposition to obtain the eigenvalues \(a^{J}_{V^{\prime}_{j}V^{\prime}_{k};x^{i}}\). The unitarity bounds are obtained by diagonalizing the coupled-channel matrices. The bound on each Wilson coefficient is defined as \[\max\left\{\mathcal{C}_{j}\ |\ \{\mathcal{C}_{j},\mathcal{C}_{k}\}\in S\right\}, \tag{7}\] where \(S\) is the hypersurface defined through Eq. 5 or 6 on which the equality holds. The analytical unitarity bounds on the 26 operators are presented in Table 2. The second and third columns present the unitarity bounds obtained in the Warsaw and SILH bases, respectively. These results are also confirmed through numerical scans using MultiNest [21; 22; 23]. Utilizing the constraints imposed by the analytical unitarity bounds on the operators, we herein articulate the transformation rules, with an emphasis on the equivalence theorem. A second set of partial transformation rules, applying to the unitarity bounds, reads as \[\mathcal{C}_{i}^{\text{A}}\ \underrightarrow{\text{Constraint}}\ \mathcal{C}_{i}^{\text{B}}+ \mathcal{C}_{j}^{\text{B}}, \tag{8}\] which translate the unitarity bounds in the operator basis B into the bounds in the operator basis A. However, it is unclear whether this procedure, applied to the corresponding operators in basis B, yields the same bounds for the operators in basis A as those derived directly. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\mathcal{O}_{W}\) & \(\epsilon^{IJK}W^{I\nu}_{\mu}W^{J\rho}_{\nu}W^{K\mu}_{\rho}\) & \(\mathcal{O}_{\varphi\Box}\) & \((\varphi^{\dagger}\varphi)\Box(\varphi^{\dagger}\varphi)\) \\ \hline \(\mathcal{O}_{\varphi D}\) & \((\varphi^{\dagger}D^{\mu}\varphi)^{*}(\varphi^{\dagger}D_{\mu}\varphi)\) & \(\mathcal{O}_{\varphi W}\) & \(\varphi^{\dagger}\varphi W^{I}_{\mu\nu}W^{I\mu\nu}\) \\ \hline \(\mathcal{O}_{\varphi B}\) & \(\varphi^{\dagger}\varphi B_{\mu\nu}B^{\mu\nu}\) & \(\mathcal{O}_{\varphi WB}\) & \(\varphi^{\dagger}\tau^{I}\varphi W^{I}_{\mu\nu}B^{\mu\nu}\) \\ \hline \(\mathcal{O}_{\varphi l}^{(1)}\) & \((\varphi^{\dagger}i\overleftrightarrow{D}_{\mu}\varphi)(\bar{l}\gamma^{\mu}l)\) & \(\mathcal{O}_{\varphi l}^{(3)}\) & \((\varphi^{\dagger}i\overleftrightarrow{D}^{I}_{\mu}\varphi)(\bar{l}\tau^{I}\gamma^{\mu}l)\) \\ \hline \(\mathcal{O}_{\varphi e}\) & \((\varphi^{\dagger}i\overleftrightarrow{D}_{\mu}\varphi)(\bar{e}\gamma^{\mu}e)\) & \(\mathcal{O}_{\varphi q}^{(1)}\) & \((\varphi^{\dagger}i\overleftrightarrow{D}_{\mu}\varphi)(\bar{q}\gamma^{\mu}q)\) \\ \hline \(\mathcal{O}_{\varphi q}^{(3)}\) & \((\varphi^{\dagger}i\overleftrightarrow{D}^{I}_{\mu}\varphi)(\bar{q}\tau^{I}\gamma^{\mu}q)\) & \(\cdots\) & \(\cdots\) \\ \hline \end{tabular} \end{table} Table 1: The CP-even dimension-6 operators considered in this work. 
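As a schematic illustration of how a marginalized bound is extracted from a non-square coupled-channel matrix, the sketch below builds a toy matrix that is linear in two Wilson coefficients, takes its largest singular value, and scans for the largest coefficient compatible with the condition of Eq. 6. The coefficient matrices `T1` and `T2`, the grid and the scan range are made-up numbers for illustration only and are not the amplitudes of this paper.

```python
import numpy as np

# Toy coupled-channel matrix for f fbar -> VV: rows = recombined VV states, columns = fermion states.
# Each entry is assumed linear in the Wilson coefficients; T1, T2 are invented numerical coefficients.
T1 = np.array([[0.02, 0.01, 0.00],
               [0.00, 0.03, 0.01]])
T2 = np.array([[0.01, 0.00, 0.02],
               [0.02, 0.00, 0.00]])

def largest_singular_value(c1, c2):
    """Largest singular value of the (non-square) coupled-channel matrix a^J(C1, C2)."""
    return np.linalg.svd(c1 * T1 + c2 * T2, compute_uv=False)[0]

def marginalized_bound(i, grid=np.linspace(-200.0, 200.0, 401)):
    """Crude marginalized bound on C_i: largest |C_i| for which some value of the other
    coefficient keeps every singular value below 1/2 (Eq. 6)."""
    best = 0.0
    for ci in grid:
        pair = (lambda cj: (ci, cj)) if i == 0 else (lambda cj: (cj, ci))
        if any(largest_singular_value(*pair(cj)) <= 0.5 for cj in grid):
            best = max(best, abs(ci))
    return best

print("toy bound on C_1:", marginalized_bound(0))
print("toy bound on C_2:", marginalized_bound(1))
```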
Next, we will probe into the applicability of these transformation rules in determining unitary bounds for one basis from another. We begin with deriving unitarity bounds in the SILH basis from those in the Warsaw basis with the help of transformation rules. We find that the transformation rules successfully reproduce the correct unitarity bounds in the SILH basis if all the operators involved in the transformation rules can reach their unitarity bounds simultaneously. For example, consider the operator \(\mathcal{O}_{2B}\), which appears only in the SILH basis and is related to \(\mathcal{O}^{(1)}_{\varphi l}\) and \(\mathcal{O}_{\varphi D}\) in the Warsaw basis as following: \[\mathcal{C}_{2B}=\frac{8}{g^{\prime 2}}\mathcal{C}^{(1)}_{\varphi l}+\frac{2}{g^ {\prime 2}}\mathcal{C}_{\varphi D}. \tag{9}\] Substituting the bounds of \(\mathcal{O}^{(1)}_{\varphi l}\) and \(\mathcal{O}_{\varphi D}\) in the Warsaw basis, \[(\mathcal{C}^{(1)}_{\varphi l})_{\text{max}}=2\sqrt{6}\pi,\qquad(\mathcal{C}_ {\varphi D})_{\text{max}}=\frac{64\pi}{3}, \tag{10}\] into Eq. 9, we obtain the bound \((\mathcal{C}_{2B})_{\text{max}}\) in the SILH basis, \[(\mathcal{C}_{2B})_{\text{max}}=\frac{16(8+3\sqrt{6})\pi}{3g^{\prime 2}}.\] The left panel of Fig. 1 shows the unitarity bounds of \(\mathcal{O}^{(1)}_{\varphi l}\) and \(\mathcal{O}_{\varphi D}\) in the Warsaw basis; see the rectangle region surrounded by the black lines. The red line denotes the transformation rule given in Eq. 9, and it exhibits a maximal intercept when crossing the black point, where both the \(\mathcal{C}^{(1)}_{\varphi l}\) and \(\mathcal{C}_{\varphi D}\) reach their unitarity bounds simultaneously. Unfortunately, the unitarity bounds of a few operators in the SILH basis cannot be derived from the transformation rule. For instance, the operator \(\mathcal{O}_{\varphi e}\) in the SILH basis is related to \(\mathcal{O}_{\varphi e}\) and \(\mathcal{O}^{(1)}_{\varphi l}\) in the Warsaw basis, i.e., \[\mathcal{C}^{\text{SILH}}_{\varphi e}=\mathcal{C}^{\text{Warsaw}}_{\varphi e}- 2\mathcal{C}^{(1)\text{Warsaw}}_{\varphi l}. \tag{11}\] Substituting the unitarity bounds \((\mathcal{C}_{\varphi e})_{\text{max}}=4\sqrt{3}\pi\) and \((\mathcal{C}^{(1)}_{\varphi l})_{\text{max}}=2\sqrt{6}\pi\) in the Warsaw basis into the above equation does not generate the correct \((\mathcal{C}_{\varphi e})_{\text{max}}\) in the SILH basis. It is due to the fact that the two operators are strongly correlated in the coupled channel analysis such that the two operators in the Warsaw basis never reach their unitary bounds at the same time. The ellipse in the right panel of Fig. 1 represents the unitarity bounds on \(\mathcal{O}_{\varphi e}\) and \(\mathcal{O}^{(1)}_{\varphi l}\) in the Warsaw basis, i.e., \[(\mathcal{C}_{\varphi e})^{2}+2\left(\mathcal{C}^{(1)}_{\varphi l}\right)^{2} =48\pi^{2}. \tag{12}\] The transformation rule in Eq. 
11 is denoted by the red line, and the unitarity bound on \(\mathcal{C}_{\varphi e}\) in the SILH basis is the maximum intercept of the red line. \begin{table} \begin{tabular}{c|c|c} \hline \multirow{2}{*}{Operator} & \multicolumn{2}{c}{bounds on \(\mathcal{C}_{i}\)’s} \\ \cline{2-3} & Warsaw & SILH \\ \hline \(\mathcal{O}_{W}\) & \(\frac{4\pi}{3g}\) & \(\frac{4\pi}{3g}\) \\ \hline \(\mathcal{O}_{eW}\) & \(4\pi\) & \(4\pi\) \\ \hline \(\mathcal{O}_{eB}\) & \(4\sqrt{3}\pi\) & \(4\sqrt{3}\pi\) \\ \hline \(\mathcal{O}_{uW}\) & \(\frac{4\pi}{\sqrt{3}}\) & \(\frac{4\pi}{\sqrt{3}}\) \\ \hline \(\mathcal{O}_{uB}\) & \(4\pi\) & \(4\pi\) \\ \hline \(\mathcal{O}_{dW}\) & \(\frac{4\pi}{\sqrt{3}}\) & \(\frac{4\pi}{\sqrt{3}}\) \\ \hline \(\mathcal{O}_{dB}\) & \(4\pi\) & \(4\pi\) \\ \hline \(\mathcal{O}_{\varphi ud}\) & \(4\pi\) & \(4\pi\) \\ \hline \(\mathcal{O}_{\varphi\Box}\) & \(\frac{16\pi}{3}\) & / \\ \hline \(\mathcal{O}_{\varphi D}\) & \(\frac{64\pi}{3}\) & / \\ \hline \(\mathcal{O}_{\varphi W}\) & \(\frac{4\sqrt{6}\pi}{3}\) & / \\ \hline \(\mathcal{O}_{\varphi B}\) & \(4\sqrt{2}\pi\) & \(4\sqrt{6}\pi\frac{3g^{4}+g^{\prime 4}}{3g^{2}}+8\pi\frac{g^{\prime}}{g}\) \\ \hline \(\mathcal{O}_{\varphi WB}\) & \(8\pi\) & / \\ \hline \(\mathcal{O}_{W}^{\prime}\) & / & \(\frac{32\sqrt{6}\pi}{3g^{2}}\) \\ \hline \(\mathcal{O}_{B}\) & / & \(\frac{64\pi}{9g^{2}}+\frac{32\sqrt{6}\pi}{3g^{2}}\) \\ \hline \(\mathcal{O}_{2W}\) & / & \(\frac{16(4+3\sqrt{6})\pi}{3g^{2}}\) \\ \hline \(\mathcal{O}_{2B}\) & / & \(\frac{16(8+3\sqrt{6})\pi}{3g^{\prime 2}}\) \\ \hline \(\mathcal{O}_{W}^{\text{SILH}}\) & / & \(\frac{8(8+3\sqrt{6})\,\pi}{\cdots}\) \\ \hline \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \hline \end{tabular} \end{table} Table 2: Unitarity bounds on the Wilson coefficients \(\mathcal{C}_{i}\) in the Warsaw basis and the SILH basis ("/" indicates that the operator is not an element of that basis). 
However, operators from different sets can always reach their unitarity bounds at the same time, as the operators are decomposed into independent and orthogonal subspaces in the coupled channel analysis. In the Warsaw basis, the operator sets are \[\{\mathcal{O}_{W},\mathcal{O}_{\varphi\Box},\mathcal{O}_{\varphi W },\mathcal{O}_{\varphi B}\},\] \[\{\mathcal{O}_{\varphi D},\mathcal{O}_{\varphi\Box},\mathcal{O}_ {\varphi WB}\},\] \[\{\mathcal{O}_{W},\mathcal{O}_{\varphi l}^{(3)},\mathcal{O}_{ \varphi l}^{(3)}\},\] \[\{\mathcal{O}_{\varphi d},\mathcal{O}_{\varphi u},\mathcal{O}_{ \varphi l}^{(1)},\mathcal{O}_{\varphi q}^{(1)},\mathcal{O}_{\varphi e}\},\] \[\{\mathcal{O}_{uW},\mathcal{O}_{uB},\mathcal{O}_{dW},\mathcal{O} _{dB},\mathcal{O}_{eW},\mathcal{O}_{eB}\},\] \[\{\mathcal{O}_{\varphi ud}\},\] \[\{\mathcal{O}_{W}\}, \tag{14}\] and in the SILH basis \[\{\mathcal{O}_{W},\mathcal{O}_{\varphi B},\mathcal{O}_{W}^{\prime},\mathcal{O}_{B},\mathcal{O}_{2W},\mathcal{O}_{2B},\mathcal{O}_{W}^{\text{ SILH}},\mathcal{O}_{B}^{\text{SILH}}\},\] \[\{\mathcal{O}_{W}^{\prime},\mathcal{O}_{B},\mathcal{O}_{2W}, \mathcal{O}_{2B},\mathcal{O}_{W}^{\text{SILH}},\mathcal{O}_{B}^{\text{SILH}}\},\] \[\{\mathcal{O}_{W},\mathcal{O}_{\varphi q}^{(3)},\mathcal{O}_{W}^{ \prime},\mathcal{O}_{W}^{\text{SILH}},\mathcal{O}_{2W}\},\] \[\{\mathcal{O}_{\varphi d},\mathcal{O}_{\varphi u},\mathcal{O}_{ \varphi q}^{(1)},\mathcal{O}_{\varphi e},\mathcal{O}_{B},\mathcal{O}_{B}^{ \text{SILH}}\},\] \[\{\mathcal{O}_{uW},\mathcal{O}_{uB},\mathcal{O}_{dW},\mathcal{O} _{dB},\mathcal{O}_{eW},\mathcal{O}_{eB}\},\] \[\{\mathcal{O}_{\varphi ud}\},\] \[\{\mathcal{O}_{W}\}. \tag{15}\] In summary, we confirmed that the Warsaw and SILH bases are indeed equivalent in unitarity coupled channel analysis of \(ff\to VV\) and \(VV\to VV\) processes. The transformation rules between the two operator bases are derived, and using the rules can convert the helicity amplitudes from one operator basis to another, and vice versa. If all the operators involved in the transformation rule can reach their unitarity bounds simultaneously, the rule would translate the unitarity bounds of operators from one basis to another basis; otherwise, one has to figure out the best point in the boundary of the unitarity bound space to reproduce the correct unitarity bounds in another basis. _Acknowledgements_: We thank Jue Zhang and Ya Zhang for involvement in the early stage of this work. We thank Jiang-Hao Yu and Bin Yan for the valuable comments. The work is partly supported by the National Science Foundation of China under Grant Nos. 11635001, 11675002, 11725520, 11805013, 12075257, and 12235001. ## Appendix A Transformation rules of operator bases It is shown that the equivalence theorem ensures the \(S\)-matrix calculated in various operator bases can be mutually transformed through the Equations of Motion (EoMs) or field redefinitions [1; 2; 3; 4]. Below, we derive the transformation rules for the Wilson coefficients between the two operator bases, and the rules are process independent. The derivation is grounded on the completeness and independence of the operator basis. Consider two operator sets \(\{\mathcal{O}_{1}^{A},\mathcal{O}_{j},\mathcal{O}_{j}^{\text{EoM}}\}\) and \(\{\mathcal{O}_{1}^{B},\mathcal{O}_{j},\mathcal{O}_{j}^{\text{EoM}}\}\) connected through the EoMs. Here the superscript EoM denotes the operators appearing in the EoMs. 
Without loss of generality, we consider only one operator replaced between two sets, namely \(\mathcal{O}_{1}^{A}\leftrightarrow\mathcal{O}_{1}^{B}\), and the EoM reads as \[a_{1}\mathcal{O}_{1}^{A}+b_{1}\mathcal{O}_{1}^{B}+\sum_{j}c_{j}\mathcal{O}_{j}^ {\text{EoM}}=0. \tag{16}\] For a process of \(|i\rangle\rightarrow|f\rangle\), the helicity amplitude is \[\mathcal{M}=\mathcal{C}_{1}^{A}\mathcal{M}_{1}^{A}+\sum_{j}\mathcal{C}_{j} \mathcal{M}_{j}+\sum_{j}\mathcal{C}_{j}^{\text{EoM}}\mathcal{M}_{j}^{\text{ EoM}} \tag{17}\] in one operator basis A, where \(\mathcal{M}_{j}\equiv\langle f|O_{j}|i\rangle\), and \[\mathcal{M}=\mathcal{C}_{1}^{B}\mathcal{M}_{1}^{B}+\sum_{j}\mathcal{C}_{j} \mathcal{M}_{j}+\sum_{j}\mathcal{C}_{j}^{\text{EoM}}\mathcal{M}_{j}^{\text{ EoM}} \tag{18}\] in another basis B. The EoM reads as \[a_{1}\mathcal{M}_{1}^{A}+b_{1}\mathcal{M}_{1}^{B}+\sum_{j}c_{j}\mathcal{M}_{j}^ {\text{EoM}}=0. \tag{19}\] Utilizing the amplitude relation from the EoM yields \[\mathcal{M}_{1}^{B}=-\frac{a_{1}}{b_{1}}\mathcal{M}_{1}^{A}-\sum_{j}\frac{c_{j }}{b_{1}}\mathcal{M}_{j}^{\text{EoM}}, \tag{20}\] then the helicity amplitude \(\mathcal{M}\) in the basis B becomes \[\mathcal{M}=-\frac{a_{1}}{b_{1}}C_{1}^{B}\mathcal{M}_{1}^{A}+\sum_{j}\mathcal{C }_{j}\mathcal{M}_{j}+\sum_{j}(\mathcal{C}_{j}^{\text{EoM}}-\frac{c_{j}}{b_{1}} \mathcal{C}_{1}^{B})\mathcal{M}_{j}^{\text{EoM}}. \tag{21}\] As a result of the equivalence of the operator bases, Eqs. 17 and 21 must be the same, and it yields the transformation rules \[\mathcal{C}_{1}^{A}\rightarrow-\frac{a_{1}}{b_{1}}\mathcal{C}_{1}^{B},\] \[\mathcal{C}_{j}^{\text{EoM}}\to\mathcal{C}_{j}^{\text{EoM}}-\frac{c_{j}}{b_{1}} \mathcal{C}_{1}^{B}, \tag{10}\] which transform the helicity amplitude in basis A to that in basis B. The transformation rules of helicity amplitudes are shown below, and the transformation rules of the operator constraints go in the opposite direction. 
The rules from the Warsaw basis to the SILH basis read as \[\mathcal{C}_{\varphi W}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi B}+\frac{g^{\prime 2}}{8}\mathcal{C}_{B},\] \[\mathcal{C}_{\varphi WB}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\frac{gg^{\prime}}{8}(\mathcal{C}_{W}^{\prime}+ \mathcal{C}_{B}),\] \[\mathcal{C}_{\varphi D}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,-g^{\prime 2}\mathcal{C}_{B}^{\text{SILH}}-\frac{g^{ \prime 2}}{2}(\mathcal{C}_{B}+\mathcal{C}_{2B}),\] \[\mathcal{C}_{\varphi l}^{(1)}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\,\frac{g^{\prime 2}}{8}\mathcal{C}_{B}+\frac{g^{\prime 2 }}{4}(\mathcal{C}_{B}^{\text{SILH}}+\mathcal{C}_{2B}),\] \[\mathcal{C}_{\varphi l}^{(3)}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,-\frac{g^{2}}{8}\mathcal{C}_{W}^{\prime}-\frac{g^{2}}{4} (\mathcal{C}_{W}^{\text{SILH}}+\mathcal{C}_{2W}),\] \[\mathcal{C}_{\varphi q}^{(1)}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi q}^{(1)}-\frac{g^{\prime 2}}{24} \mathcal{C}_{B}-\frac{g^{\prime 2}}{12}(\mathcal{C}_{B}^{\text{SILH}}+\mathcal{C}_{2B}),\] \[\mathcal{C}_{\varphi q}^{(3)}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi q}^{(3)}-\frac{g^{2}}{8}\mathcal{C}_{ W}^{\prime}-\frac{g^{2}}{4}(\mathcal{C}_{W}^{\text{SILH}}+\mathcal{C}_{2W}),\] \[\mathcal{C}_{\varphi\Box}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}-\frac{3g^{2}}{4}\mathcal{C}_{W}^{\text{SILH}}-\frac{3g^{2}}{8} (\mathcal{C}_{W}^{\prime}+\mathcal{C}_{2W})\] \[\qquad\qquad\qquad\qquad\qquad\left.-\frac{g^{\prime 2}}{4} \mathcal{C}_{B}^{\text{SILH}}-\frac{g^{\prime 2}}{8}(\mathcal{C}_{B}+\mathcal{C}_{2B}),\right.\] \[\mathcal{C}_{\varphi e}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi e}+\frac{g^{\prime 2}}{4}(\mathcal{C}_{B}+2 \mathcal{C}_{2B}+2\mathcal{C}_{B}^{\text{SILH}}),\] \[\mathcal{C}_{\varphi d}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi d}+\frac{g^{\prime 2}}{12}(\mathcal{C}_{B}+2 \mathcal{C}_{2B}+2\mathcal{C}_{B}^{\text{SILH}}),\] \[\mathcal{C}_{\varphi u}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi u}-\frac{g^{\prime 2}}{6}(\mathcal{C}_{B}+2 \mathcal{C}_{2B}+2\mathcal{C}_{B}^{\text{SILH}}). 
\tag{11}\] The rules from the SILH basis to the Warsaw basis are \[\mathcal{C}_{W}^{\prime}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\frac{8}{g^{2}}\mathcal{C}_{\varphi W},\] \[\mathcal{C}_{\varphi e}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi e}-2\mathcal{C}_{\varphi l}^{(1)},\] \[\mathcal{C}_{\varphi d}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi d}-\frac{2}{3}\mathcal{C}_{\varphi l}^{(1)},\] \[\mathcal{C}_{\varphi u}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi u}+\frac{4}{3}\mathcal{C}_{\varphi l}^{(1)},\] \[\mathcal{C}_{\varphi q}^{(1)}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi q}^{(1)}+\frac{1}{3}\mathcal{C}_{\varphi l }^{(1)},\] \[\mathcal{C}_{\varphi q}^{(3)}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi q}^{(3)}-\mathcal{C}_{\varphi l}^{(3)},\] \[\mathcal{C}_{B}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\frac{8}{gg^{\prime}}\mathcal{C}_{\varphi WB}-\frac{8}{g^{2}} \mathcal{C}_{\varphi W}\] \[\mathcal{C}_{2B}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\frac{8}{g^{\prime 2}}\mathcal{C}_{\varphi l}^{(1)}+\frac{g^{ \prime 2}}{g^{\prime 2}}\mathcal{C}_{\varphi D}\] \[\mathcal{C}_{\varphi B}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi B}+\frac{g^{\prime 2}}{g^{2}} \mathcal{C}_{\varphi W}-\frac{g^{\prime}}{g}\mathcal{C}_{\varphi WB}\] \[\mathcal{C}_{2W}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\frac{8}{g^{2}}\mathcal{C}_{\varphi \Box}-\frac{2}{3g^{2}}\mathcal{C}_{\varphi D}-\frac{8}{g^{2}}\mathcal{C}_{ \varphi l}^{(3)}\] \[\mathcal{C}_{B}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\frac{8}{g^{2}}\mathcal{C}_{\varphi W}-\frac{2}{gg^{\prime 2}}\mathcal{C}_{\varphi WB}-\frac{8}{g^{2}}\mathcal{C}_{\varphi D}\] \[\mathcal{C}_{2B}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\frac{8}{g^{\prime 2}}\mathcal{C}_{\varphi l}^{(1)}+\frac{2}{g^{ \prime 2}}\mathcal{C}_{\varphi D}\] \[\mathcal{C}_{\varphi B}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\mathcal{C}_{\varphi B}+\frac{g^{\prime 2}}{g^{2}} \mathcal{C}_{\varphi W}-\frac{g^{\prime}}{g}\mathcal{C}_{\varphi WB}\] \[\mathcal{C}_{2W}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\frac{8}{g^{2}}\mathcal{C}_{\varphi\Box}-\frac{2}{3g^{2}} \mathcal{C}_{\varphi D}-\frac{8}{g^{2}}\mathcal{C}_{\varphi l}^{(3)}\] \[\mathcal{C}_{B}^{\text{SILH}}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\frac{8}{g^{2}}\mathcal{C}_{\varphi W}-\frac{8}{gg^{\prime}}\mathcal{C}_{\varphi WB}-\frac{2}{g^{\prime 2}}(\mathcal{C}_{\varphi D}+2\mathcal{C}_{\varphi l}^{(1)})\] \[\mathcal{C}_{W}^{\text{SILH}}\xrightleftharpoons[\text{Constraints}]{ \text{Amplitude}}\,\,\frac{2}{3g^{2}}(\mathcal{C}_{\varphi D}+6\mathcal{C}_{ \varphi l}^{(3)}-6\mathcal{C}_{\varphi W}-4\mathcal{C}_{\varphi\Box}). \tag{12}\]
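As a concrete numerical cross-check of how such rules act on the bounds, the sketch below maximizes the SILH-basis combinations of Eqs. 9 and 11 over the Warsaw-basis constraint regions quoted in the text (the rectangle \(|\mathcal{C}^{(1)}_{\varphi l}|\leq 2\sqrt{6}\pi\), \(|\mathcal{C}_{\varphi D}|\leq 64\pi/3\), and the ellipse of Eq. 12), reproducing \((\mathcal{C}_{2B})_{\text{max}}=16(8+3\sqrt{6})\pi/(3g^{\prime 2})\) and \((\mathcal{C}_{\varphi e})_{\text{max}}=12\pi\). This is only a minimal sketch: the constraint shapes are taken from the text, while the value of \(g^{\prime}\) and the grid resolution are arbitrary choices for illustration.

```python
import numpy as np

gp = 0.36                                    # illustrative value of g'; it only sets an overall factor

# --- Eq. 9: C_2B = (8/g'^2) C_phil1 + (2/g'^2) C_phiD, maximized over the rectangle of bounds ---
c_phil1_max = 2 * np.sqrt(6) * np.pi         # (C_phil^(1))_max in the Warsaw basis
c_phid_max = 64 * np.pi / 3                  # (C_phiD)_max in the Warsaw basis
c2b_max = (8 / gp**2) * c_phil1_max + (2 / gp**2) * c_phid_max
print(c2b_max, 16 * (8 + 3 * np.sqrt(6)) * np.pi / (3 * gp**2))    # the two numbers agree

# --- Eq. 11: C_phie^SILH = C_phie - 2 C_phil^(1), maximized over the ellipse of Eq. 12 ---
t = np.linspace(0.0, 2 * np.pi, 200001)
c_phie = np.sqrt(48) * np.pi * np.cos(t)     # parametrize C_phie^2 + 2 C_phil1^2 = 48 pi^2
c_phil1 = np.sqrt(24) * np.pi * np.sin(t)
silh = c_phie - 2 * c_phil1
i = np.argmax(silh)
print(silh[i] / np.pi)                        # -> 12, i.e. (C_phie)_max = 12 pi in the SILH basis
print(c_phie[i] / np.pi, c_phil1[i] / np.pi)  # -> (4, -4), the tangency point quoted in the text
```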
2308.02526
Radiation Produced with Slow-Wave Fundamental Mode and Generalized Fundamental mode in Periodic Structures
This study suggests an idea that radiation in a periodic leaky-wave antenna (PLWA) should be considered to be produced with the fundamental mode, regardless of whether it is fast-wave or slow-wave. The idea is different from the conventional PLWA theory, which considers it a fact that a PLWA produces radiation with its fast-wave space harmonic when the fundamental mode is slow-wave. To elaborate the idea, it is proved that there is not an eigen-equation like Pythagorean theorem for the fundamental mode in PLWAs. Then a non-uniform structure antenna is designed to show that slow-wave modes can produce leaky-wave radiation. Again it is proved that the difference of the phase constants between a slow-wave fundamental mode and its fast-wave space harmonics has not any effect on the radiation pattern of a PLWA. Moreover, it is clarified that the fundamental mode has a more definite physical significance than space harmonics. Finally, a concept of generalized fundamental modes is proposed without using Fourier expansion. The generalized fundamental modes have the same phase and attenuation constants as any space harmonics, and have physical significance as the conventional fundamental mode. Therefore, it could replace the roles that the space harmonics used to play.
Yin Yifan, Li Shunli, Wu Ke
2023-07-31T13:48:21Z
http://arxiv.org/abs/2308.02526v3
# Radiation Produced with Slow-Wave Fundamental Mode and Generalized Fundamental mode in Periodic Structures ###### Abstract This study suggests the idea that radiation in a periodic leaky-wave antenna (PLWA) should be considered to be produced with the fundamental mode, regardless of whether it is fast-wave or slow-wave. The idea differs from the conventional PLWA theory, which takes it as a fact that a PLWA produces radiation with its fast-wave space harmonic when the fundamental mode is slow-wave. To elaborate the idea, it is proved that there is no eigen-equation like the Pythagorean theorem for the fundamental mode in PLWAs. Then a non-uniform structure antenna is designed to show that slow-wave modes can produce leaky-wave radiation. It is further proved that the difference of the phase constants between a slow-wave fundamental mode and its fast-wave space harmonics has no effect on the radiation pattern of a PLWA. Moreover, it is clarified that the fundamental mode has a more definite physical significance than the space harmonics. Finally, a concept of generalized fundamental modes is proposed without using a Fourier expansion. The generalized fundamental modes have the same phase and attenuation constants as any space harmonic, and have the same physical significance as the conventional fundamental mode. Therefore, they could replace the roles that the space harmonics used to play. Generalized fundamental mode, Leaky-wave antenna, periodic structure, non-uniform antenna, slow wave, space harmonics. ## I Introduction Leaky-wave radiation is well known to take place in both uniform and non-uniform leaky-wave antennas (LWAs) [1]. A uniform LWA, e.g. a rectangular air-filled waveguide with a longitudinal slot on its broadside wall, is a uniform transmission structure in which there is a dominant mode or high-order modes [2]-[3]. As a general perception, a uniform LWA produces leaky-wave radiation with its fundamental or high-order modes, which must be fast-wave [1]-[7]. What about a non-uniform LWA? In general, there are two types of non-uniform LWAs, namely periodic leaky-wave antennas (PLWAs) [8]-[15] and non-periodic leaky-wave antennas (NPLWAs) [16]-[17]. A PLWA is no longer a uniform but a periodic structure, in which there are a fundamental mode and space harmonics [4]-[5],[18]. When the fundamental mode is a fast wave, a PLWA produces fast-wave radiation with the fundamental mode. When the fundamental mode is slow-wave, in the classic theory of PLWAs, it has always been taken as a fact that the PLWA produces slow-wave radiation with a space harmonic, which must be fast-wave [1], [4]-[5], [8]-[10], [18]-[24]. It is natural to ask why the slow-wave radiation of a PLWA cannot be produced with the slow-wave fundamental mode, but only with fast-wave harmonics. Is it because slow-wave modes cannot produce radiation? For the modes in a uniform LWA, there is an eigen-equation like the Pythagorean theorem, which governs the three-dimensional wave-numbers of each mode. Based on that eigen-equation, it can be concluded that a slow-wave mode cannot produce radiation in a uniform LWA. Is there such an eigen-equation for the fundamental mode in a PLWA, with which one could argue that a slow-wave fundamental mode does not produce slow-wave radiation? To the best knowledge of the authors, no one has yet proved such an eigen-equation for the fundamental mode. On the other hand, there is work demonstrating that slow-wave dominant modes in non-uniform LWAs can produce radiation [16]-[17]. 
The antennas in [16] and [17] are actually two NPLWAs, which means that there are no space harmonics at all. The dominant modes in these antennas are slow-wave, so it is reasonable to think that the slow-wave dominant modes produce the radiation of these antennas. This paper suggests the idea that the radiation of a PLWA should be considered to be produced with the fundamental mode, regardless of whether it is fast-wave or slow-wave [25]. The idea differs from the conventional PLWA theory, and the three prerequisites of the idea are demonstrated as follows: 1) Slow waves in a non-uniform structure antenna can produce radiation. To verify this prerequisite, it is proved in Section II that there is no eigen-equation like the Pythagorean theorem for the fundamental mode, and a non-uniform antenna is designed in Section III to show that its slow-wave dominant mode can produce leaky-wave radiation. 2) It is proved in Section IV that the difference of the phase constants between a slow-wave fundamental mode and its fast-wave space harmonics has no effect on the radiation pattern of a PLWA. 3) The fundamental mode has a more definite physical significance than the space harmonics. This prerequisite is elaborated in Section V. The paper also proposes a concept of generalized fundamental modes without using a Fourier expansion in Section VI. The generalized fundamental modes and the fundamental mode differ by only a phase factor. Since the generalized fundamental modes can have the same phase constant and attenuation constant as any space harmonic, it is suggested that the generalized fundamental modes could replace the roles that were once played by the space harmonics. In Section VII a conclusion is provided. ## II Is There an Eigen-Equation for the Fundamental Mode in a Periodic Structure This Section proves that there is no eigen-equation like the Pythagorean theorem for the fundamental mode. As a result, whether the fundamental mode radiates or not cannot be determined based on whether the fundamental mode is fast-wave or slow-wave. Applying Floquet's theorem to a periodic structure like a PLWA, one can write the field in the periodic structure as follows [18] \[\mathbf{E}(x,y,z)=\mathbf{E}_{0}(x,y,z)e^{-j\beta_{0}z} \tag{1}\] \[\mathbf{E}_{0}(x,y,z+p)=\mathbf{E}_{0}(x,y,z) \tag{2}\] where the function of the fundamental mode, \(\mathbf{E}_{0}(x,y,z)\), is a periodic function with periodicity \(p\) in the argument \(z\), and \(\beta_{0}\) is the propagation constant of the fundamental mode. Substituting (1) into the wave equation, a three-dimensional wave equation for the fundamental mode is obtained as follows \[(\nabla_{T}^{2}+\frac{\partial^{2}}{\partial z^{2}}-\beta_{0}^{2}+k_{0}^{2}) \mathbf{E}_{0}(x,y,z)-2j\beta_{0}\frac{\partial\mathbf{E}_{0}(x,y,z)}{ \partial z}=0 \tag{3}\] Unlike the two-dimensional partial differential equation satisfied by an eigenmode in a uniform transmission structure, equation (3) is in general a three-dimensional partial differential equation. The propagation constant \(\beta_{0}\) depends on the boundary condition over the whole boundary of a unit cell rather than on the transverse boundary condition over a single transverse section. Only if the following equation \[(\frac{\partial^{2}}{\partial z^{2}}-2j\beta_{0}\frac{\partial}{\partial z}) \mathbf{E}_{0}(x,y,z)=0 \tag{4}\] holds would equation (3) become a two-dimensional partial differential equation. 
The solution to equation (4) is as follows \[\mathbf{E}_{0}(x,y,z)=j\frac{\mathbf{E}_{01}(x,y)}{2\beta_{0}}+\mathbf{E}_{02 }(x,y)e^{j2\beta_{0}z} \tag{5}\] Substituting (5) into (1), one can write the field as follows \[\mathbf{E}(x,y,z)=j\frac{\mathbf{E}_{01}(x,y)}{2\beta_{0}}e^{-j\beta_{0}z}+ \mathbf{E}_{02}(x,y)e^{j\beta_{0}z} \tag{6}\] The field in (6) actually corresponds to the electromagnetic modes of a uniform rather than a periodic transmission structure. Therefore, for the fundamental mode in a periodic structure there is an inequality as follows. \[(\nabla_{T}^{2}-\beta_{0}^{2}+k_{0}^{2})\mathbf{E}_{0}(x,y,z)\neq 0 \tag{7}\] Consequently, \(\mathbf{E}_{0}(x,y,z)\) cannot be characterized by only a pair of transverse eigen-parameters \(k_{x}\) and \(k_{y}\), which in a uniform structure are usually determined only by the transverse boundary condition on its transverse section. In fact, because a periodic structure always has variable cross-sections along its propagation direction, one cannot obtain a fixed propagation constant \(\beta_{0}\) governing the whole periodic structure from a single cross-section, because there is no constant transverse section on which to enforce boundary conditions. Therefore, there is certainly an inequality for the fundamental mode as follows \[k_{0}^{2}\neq k_{x}^{2}+k_{y}^{2}+\beta_{0}^{2} \tag{8}\] If a fundamental mode is a slow wave, namely \(\beta_{0}\!>\!k_{0}\), then due to inequality (8) the quantity \(k_{x}^{2}\!+\!k_{y}^{2}\) cannot be concluded to be negative, and \(k_{x}^{2}\!+\!k_{y}^{2}\) might be positive! When the real part of \(k_{x}\) and/or \(k_{y}\) is not zero, the fundamental mode has phase variation along its transverse direction and can produce leaky-wave radiation. As a result, the slow-wave fundamental mode could be responsible for leaky-wave radiation. ## III Can Slow-Wave Produce Radiation in a Non-Uniform LWA To demonstrate that the slow-wave dominant mode in a non-uniform structure antenna can produce radiation, this Section introduces a very simple traveling-wave series-fed array antenna, namely a slotted dielectric-filled waveguide (DFW) antenna. Fig. 1(a) shows the geometry of the slotted DFW antenna, and Fig. 1(b) is a photo of its SIW prototype. The SIW has an equivalent width of 17.9 mm and is designed on a Rogers RO4003 dielectric substrate (\(\varepsilon_{r}\)=3.55, tan \(\delta\) = 0.0022). Both the DFW and the SIW here are 75.4 mm long. Since the slotted DFW/SIW has only two slots, it cannot be a periodic structure and there are no space harmonics in it. To simplify the following simulations with CST, unless otherwise specified, all the substrates are treated as lossless and the metal is treated as PEC (perfect electric conductor). When the dominant mode TE\({}_{10}\) is fed into a DFW, its phase constant \(\beta\) can be calculated as follows [18], \[\beta=k_{0}\sqrt{\varepsilon_{r}-\left(\frac{\lambda_{0}}{2W_{e}}\right)^{2}} \tag{9}\] where \(\lambda_{0}\) and \(k_{0}\) are the wavelength and wavenumber in air, respectively. Fig. 1(c) shows the dispersion of the DFW, calculated by equation (9). It indicates that the phase delay, \(\beta p\), between the two slots is \(0.994\pi\) and \(2.994\pi\) at \(6.9\) GHz and \(16.5\) GHz, respectively. In addition, when the operating frequency exceeds \(5.4\) GHz, the dominant mode is slow-wave. The two-slot DFW can be treated as a two-element array. 
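A quick numerical check of the dispersion relation (9), using the stated values \(\varepsilon_{r}=3.55\), \(W_{e}=17.9\) mm and \(p=15\) mm, reproduces the phase delays quoted above. The short script below is only a sketch of that calculation; using the ideal DFW formula rather than a full-wave SIW model is an assumption.

```python
import numpy as np

c0 = 2.998e8                            # speed of light (m/s)
eps_r, We, p = 3.55, 17.9e-3, 15e-3     # values taken from the text and Fig. 1

def beta(f):
    """Phase constant of the TE10 mode of the dielectric-filled waveguide, Eq. (9)."""
    lam0 = c0 / f
    k0 = 2 * np.pi / lam0
    return k0 * np.sqrt(eps_r - (lam0 / (2 * We))**2)

for f in (6.9e9, 16.5e9):
    print(f / 1e9, "GHz:  beta*p =", beta(f) * p / np.pi, "pi")
# -> roughly 0.994*pi at 6.9 GHz and 2.994*pi at 16.5 GHz, matching the values quoted in the text.
```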
The radiation pattern of the two-element array can be written as [26] \[F(\theta)=F_{0}(\theta)\,e^{\,j0.5p(k_{0}\cos\theta-\beta)}\cos[0.5p(k_{0}\cos \theta-\beta)] \tag{10}\] where \(F_{0}(\theta)\) is the radiation pattern of a single slot. Based on (10), it can be found that when \(\beta p\) equals \(\pi\) or \(3\pi\), the radiation at broadside is null. Fig. 2(a) and (b) show the simulated 3D radiation patterns of the DFW, and Fig. 2(c) and (d) plot the simulated and measured radiation patterns in the E-plane of the two-slot SIW. All these patterns have a radiation null at broadside at \(6.9\) GHz and \(16.5\) GHz, respectively. Fig. 2(e) and (f) show the simulated E-field distribution on the plane \(y\)=\(0.5b\) inside the two-slot DFW at \(6.9\) GHz and \(16.5\) GHz, respectively. They also show that the dominant mode has phase delays of about \(\pi\) and \(3\pi\) at \(6.9\) GHz and \(16.5\) GHz, respectively. The phase delays calculated by (9) and the phase delays shown in Fig. 2(c) and (d) are consistent. These phase delays are \(\pi\) and \(3\pi\) at \(6.9\) GHz and \(16.5\) GHz, respectively. The null radiation direction predicted by (10) based on these phase delays is the same as the simulated and measured null radiation. Fig. 1: Slotted DFW/SIW: (a) Structure, (b) Its SIW prototype and (c) Dispersion (Ws=18 mm, s=0.3 mm, d=0.6 mm, We=17.9 mm, b=0.813 mm, w=0.4 mm, Ls=12 mm, p=15 mm, and r=0.6 mm). Fig. 2: Radiation patterns and electric fields of the two-slot DFW/SIW: (a) Simulated DFW pattern at \(6.9\) GHz, (b) Simulated DFW pattern at \(16.5\) GHz, (c) Simulated and measured SIW E-plane pattern at \(6.9\) GHz and (d) Simulated and measured SIW E-plane pattern at \(16.5\) GHz, (e) E-field at \(6.9\) GHz and (f) E-field at \(16.5\) GHz. All these consistent results indicate that at the operating frequencies of 6.9 GHz and 16.5 GHz, the slow-wave dominant mode in the DFW produces leaky-wave radiation. All the above results show that a slow-wave dominant mode in a non-uniform antenna can produce leaky-wave radiation. ## IV Can Slow-Wave Fundamental Mode Explain Pattern Characteristics of PLWA When a PLWA has slow-wave radiation, the fundamental mode is slow-wave. In classic PLWA theory, the pattern characteristics of the PLWA, such as the direction of its main lobe, are usually explained with a fast-wave space harmonic. This Section provides a proof that the difference of the phase constants between the slow-wave fundamental mode and the fast-wave space harmonics has no effect on the radiation pattern characteristics of a PLWA. The fundamental mode and its space harmonics share the same attenuation constant but have different phase constants. The phase constant of the \(n\)th space harmonic is as follows [4] \[\beta_{n}=\beta_{0}+\frac{2n\pi}{p},\quad(n=0,\pm 1,\pm 2,\cdots) \tag{11}\] Therefore, there are two kinds of phase delays between any two adjacent radiating elements of a PLWA: one calculated using the fundamental mode and the other using fast-wave space harmonics. For a PLWA with \(N\) elements, its array pattern can be expressed by [26] \[F(u)=\left|\frac{\sin(0.5Nu)}{\sin(0.5u)}\right| \tag{12}\] \[u=k_{0}\,p\cos\theta-\beta p \tag{13}\] where \(u\) is the total phase difference of the two radiating fields from two adjacent element antennas, and \(F(u)\) is a periodic function with a periodicity of \(2\pi\). 
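A minimal numerical illustration of the array factor in Eqs. (12)-(13) and of its \(2\pi\) periodicity, which is the property used in the argument that follows, is sketched below; the values of \(N\), \(k_{0}p\) and \(\beta_{0}p\) are arbitrary choices for illustration only.

```python
import numpy as np

def array_factor(u, N):
    """|sin(0.5*N*u) / sin(0.5*u)|, Eq. (12); the limiting value N is used where sin(0.5*u) ~ 0."""
    num, den = np.sin(0.5 * N * u), np.sin(0.5 * u)
    return np.where(np.abs(den) < 1e-12, float(N), np.abs(num / den))

N = 8
theta = np.linspace(0.0, np.pi, 721)
k0p, beta0p = 2.0, 4.5                      # illustrative electrical lengths (rad)

u0 = k0p * np.cos(theta) - beta0p           # phase difference from the fundamental mode, Eq. (13)
for n in (-1, 1, 2):                        # phase difference from the n-th space harmonic
    un = u0 - 2 * np.pi * n                 # Eq. (15)
    assert np.allclose(array_factor(u0, N), array_factor(un, N))   # Eq. (16): identical patterns
print("F(u_n) = F(u_0) at every sampled angle")
```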
The phase difference based on the \(n\)th fast-wave space harmonic is as follows \[u_{n}=k_{0}\,p\cos\theta-\beta_{n}\,p \tag{14}\] Substituting (11) into (14), one has \[u_{n}=u_{0}-2n\pi \tag{15}\] where \(u_{0}\) is the phase difference based on the fundamental mode. Based on the periodic characteristic of \(F(u)\), one has \[F(u_{n})=F(u_{0}) \tag{16}\] This means that the difference of the phase constants between the slow-wave fundamental mode and the fast-wave space harmonics has no effect on the radiation pattern characteristics of a PLWA. On the other hand, the fundamental mode has the same attenuation constant as the space harmonics, but a different phase constant. Therefore, it might be possible for the fundamental mode to fully replace the role the harmonics used to play in the classic PLWA theory. ## V Which Is Reasonable: Fundamental Mode or Space Harmonics Since slow-wave modes in a non-uniform structure antenna can produce radiation, and the difference of the phase constants between the slow-wave fundamental mode and the fast-wave space harmonics has no effect on the radiation pattern characteristics of a PLWA, it may be reasonable that the fundamental mode, rather than its space harmonics, produces the radiation of a PLWA, regardless of whether the fundamental mode is fast-wave or slow-wave. The reasons are as follows. 1) The fundamental mode has a definite physical significance since it satisfies the boundary condition of a PLWA. On the other hand, no space harmonic satisfies the boundary conditions individually, so no individual space harmonic exists alone in a periodic structure [4]. Moreover, no individual space harmonic has a field distribution corresponding to an actual antenna. This means that the space harmonics have less physical significance compared to the fundamental mode. 2) It would be impossible to tell whether any individual space harmonic radiates or not. Whether or not a wave radiates depends on the continuity of the structure boundary. Any individual space harmonic is only part of the overall electromagnetic wave, so it has nothing to do with boundary continuity. Therefore, a space harmonic cannot be used to determine whether a PLWA radiates or not. If one asks what kind of mode can produce radiation, even if the answer to this question is not the fundamental mode, it cannot be a space harmonic, regardless of whether the space harmonic is fast-wave or slow-wave. On the other hand, the fundamental mode satisfies the boundary conditions and has a field distribution associated with an actual antenna, and it is straightforward to determine whether the fundamental mode radiates or not. Moreover, since the field distribution can also be calculated with simulation tools, the fundamental mode provides a more visual insight than the space harmonics. 3) In principle, all space harmonics in a PLWA share the same importance. If two space harmonics are different types of wave, say one fast-wave and the other slow-wave, the radiation of the PLWA cannot be produced/determined by only one and not the other. In addition, since a space harmonic is only part of the entire electromagnetic wave, it is not easy to understand how only several space harmonics could characterize the radiation characteristics of the entire electromagnetic wave. The fundamental mode, however, can fully characterize the entire electromagnetic wave in a PLWA, including the radiation. 
Since the fundamental mode, rather than any space harmonic, has a definite physical significance, it is the fundamental mode, not its space harmonics, that produces the radiation in a PLWA, even if the fundamental mode is a slow wave. ## VI Generalized Fundamental Modes Besides the minimum periodicity, any periodic function has any number of other periodicities. Can the fundamental mode then have multiple phase constants? The field in a periodic structure can be rewritten in the following form \[\mathbf{E}(x,y,z)=\mathbf{E}_{0}(x,y,z)e^{-j\beta_{0}z}=\mathbf{E}_{0}(x,y,z)e^{j\frac{2n\pi}{p}z}e^{-j(\beta_{0}+\frac{2n\pi}{p})z} \tag{17}\] Defining a set of generalized fundamental modes as \[\mathbf{E}_{gn}(x,y,z)=\mathbf{E}_{0}(x,y,z)e^{j\frac{2n\pi}{p}z} \tag{18}\] the field in a periodic structure can be expressed with any one generalized fundamental mode multiplied by a phase factor \(e^{-j\beta_{n}z}\) as follows \[\mathbf{E}(x,y,z)=\mathbf{E}_{gn}(x,y,z)e^{-j\beta_{n}z} \tag{19}\] where \(\beta_{n}\) is the phase constant of the generalized fundamental mode, and it is just the same as the phase constant of the corresponding space harmonic. When \(n=0\), \(\mathbf{E}_{g0}(x,y,z)=\mathbf{E}_{0}(x,y,z)\). Based on (2) and (18), one can find \[\mathbf{E}_{gn}(x,y,z+p)=\mathbf{E}_{gn}(x,y,z) \tag{20}\] Therefore, \(\mathbf{E}_{gn}(x,y,z)\) is still a periodic function with periodicity \(p\). Since formula (18) shows that the generalized fundamental modes have the same magnitude as the fundamental mode, and the two kinds of modes differ by only one phase factor, the generalized fundamental modes still satisfy the same boundary condition as the fundamental mode, so the generalized fundamental modes have the same physical significance as the fundamental mode. The phase constants of the generalized fundamental modes are consistent with the phase constants of the space harmonics. Accordingly, if a wave phenomenon related to the phase characteristics of a periodic structure is explained well with a space harmonic, the phenomenon can also be explained well with a generalized fundamental mode. Therefore, the generalized fundamental modes could replace the roles that were once played by the space harmonics, and this would be more reasonable than using space harmonics because of their physical significance. The introduction of the generalized fundamental modes is based only on the periodicity of the fundamental mode, and is independent of the space harmonics. On the other hand, the generalized fundamental modes also provide a basis for the usability of the space-harmonic explanation of radiation. With the help of the generalized fundamental modes, it can be shown why the logic of the space-harmonic explanation of radiation is unreasonable and yet the explanation can still be usable. ## VII Conclusion In this study, we first prove that there is no eigen-equation like the Pythagorean theorem for the fundamental mode in a PLWA, and design a non-uniform structure antenna to show that a slow-wave mode in the antenna can produce leaky-wave radiation. These results suggest the idea that the slow-wave fundamental mode in a PLWA may also produce slow-wave radiation. Secondly, it is proved that the difference of the phase constants between the slow-wave fundamental mode and the fast-wave space harmonics has no effect on the radiation pattern characteristics of a PLWA. Thirdly, it is clarified that the fundamental mode has a more definite physical significance than its space harmonics. 
Accordingly, it is much more reasonable to consider the radiation in a PLWA to be produced with the fundamental mode than with its space harmonics, regardless of whether the fundamental mode is fast-wave or slow-wave. Moreover, the field in a periodic structure can be expressed as a generalized fundamental mode multiplied by a phase factor, where the generalized fundamental modes have the same phase constants as the space harmonics. A generalized fundamental mode has the same physical significance as the conventional fundamental mode, and could replace the roles that were once played by the space harmonics. The concept of the generalized fundamental mode in one-dimensional periodic structures can be extended to two- or three-dimensional periodic structures, and the extensions are direct and straightforward. Moreover, because of the common mathematical basis, the concept of the generalized fundamental mode could also be applied to periodic structures in solid-state physics or other fields. ## Acknowledgment The authors would like to thank the technical staff of the Poly-Grames Research Center at Ecole Polytechnique de Montreal for their collaboration and support of the fabrications and measurements related to this work.
2309.13702
Skill Check: Some Considerations on the Evaluation of Gamemastering Models for Role-playing Games
In role-playing games a Game Master (GM) is the player in charge of the game, who must design the challenges the players face and narrate the outcomes of their actions. In this work we discuss some challenges to model GMs from an Interactive Storytelling and Natural Language Processing perspective. Following those challenges we propose three test categories to evaluate such dialogue systems, and we use them to test ChatGPT, Bard and OpenAssistant as out-of-the-box GMs.
Santiago Góngora, Luis Chiruzzo, Gonzalo Méndez, Pablo Gervás
2023-09-24T17:19:36Z
http://arxiv.org/abs/2309.13702v2
# Skill Check: Some Considerations on the Evaluation of Gamemastering Models for Role-playing Games ###### Abstract In role-playing games a Game Master (GM) is the player in charge of the game, who must design the challenges the players face and narrate the outcomes of their actions. In this work we discuss some challenges to model GMs from an Interactive Storytelling and Natural Language Processing perspective. Following those challenges we propose three test categories to evaluate such dialogue systems, and we use them to test ChatGPT, Bard and OpenAssistant as out-of-the-box GMs. Keywords:Role-playing Games Natural Language Processing Interactive Storytelling Computational Creativity Dialogue Systems ## 1 Introduction Probably no one wants to hear somebody say "_Watch out! Behind that door there's a giant monster!_"; except if they are playing a role-playing game (RPG), using their imagination to visit endless worlds and having lots of fun. Tabletop role-playing games (TTRPGs) consist of two or more players that collaborate in order to create a story, while acting as characters. One of these players is the Game Master (GM), who is the one in charge of creating the world where the narrated events take place, describing the non-playable characters the human players meet and the situations they face. Having a player acting as the GM is one of the characteristics that most TTRPGs share [12]. Capturing the essence of RPGs has long been one of the goals of Interactive Storytelling (IS) research [21, 25]. However, through the years only limited solutions have been found, typically by having a lot of premade scenes that can be mixed to generate other narrative structures1, but pushing the player's freedom aside2[25]. To automate a GM is a big challenge for Natural Language Processing (NLP) and Artificial Intelligence, due to its complexity on dialogue and creativity [8]. Footnote 1: For example “Call of Cthulhu: The Official Video Game”, an adaptation of the RPG. Footnote 2: An interesting example of this is “The Stanley Parable”, a novel videogame that makes the players think about free will and the impact of their actions. Our long-term goal is to model the diverse set of skills that a GM needs to play RPGs. This long path must lead to an explainable, grounded and controllable model, so human-in-the-loop features should be taken into consideration to meet the needs reported by [1] and [30]. In this paper we will take a first step by proposing, inspired in core aspects of RPGs, a set of unit test categories to evaluate such GM models. We also use these brand new tests to evaluate ChatGPT5, Bard6 and OpenAssistant7[16] as out-of-the-box automated GMs, both for Spanish and English. Footnote 5: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt) Footnote 6: [https://bard.google.com](https://bard.google.com) Footnote 7: [https://huggingface.co/chat/](https://huggingface.co/chat/) ## 2 Previous work The study of the role of GMs in narrative is nothing new and also some efforts have been made to explicitly model their capabilities. [27] reflects on the core concepts of RPGs and gamemastering. [21, 22] consider GMs and RPGs as a framework to build IS systems, in order to tackle the _Interactive Dilemma_, the conflict and balance between the player's will and the designer's choices. 
Closely related to this concept is the GM's skill to improvise some aspects of a scene due to unexpected players' actions, and [18, 19] discuss concepts, approaches and architectures taking that into account. As they present insightful discussions about modelling narrative improvisation, they are some of the most clarifying works for us to date. Most of the latest works pursue the modelling of GMs for _Dungeons & Dragons_ (D&D), called _Dungeon Masters_ (DMs), since it is the most popular RPG and finding data is easier than for other games. For example [8] describes the complexity of modelling the (D&D) game, performing experiments with neural models and using _control features_ to guide their outputs. They also describe a _gameplay dataset_ in English used for training. [28] tries to create a DM model with the ability to predict player's actions, modelling the _Dungeon Master-Player_ interactions using Theory-of-Mind and Reinforcement Learning. The recent published datasets are also centered on modelling the D&D game. [24] presents one of the most complete datasets to study D&D interactions, consisting of transcriptions of the popular _Critical Role_ web show. [17, 29] also present datasets of D&D players' interactions from online text-based playing sessions. It is important to note that all of these recent works are about D&D, while our main objective is to work on the general aspects of a GM, regardless of the specific game or theme. Additionally, all of them rely on English resources. ## 3 A list of gamemastering challenges Most of the works mentioned in the previous section discuss difficulties faced while modelling some aspects of RPGs. However, as a way of introducing some details that guide our long-term goal and justify the test categories we propose, we would like to convey our thoughts on some challenges that a GM must face while running an RPG session. This list is not exhaustive and there may be other challenges that are not described here. **I. World and story design**. As storytellers, GMs must create and manage a rich and coherent world, populated with diverse forms of life (e.g. plants and animals) and characters. In this fictional world is where the players' characters will live and act. They also need to create some interesting places (e.g. an old library) and challenges for the players, which can be logic puzzles, tactic battles, complex dialogues with characters, or other challenges (e.g. the library has hidden rooms). Usually these situations are intended to be solved by teaming up with other characters, collaborating and using the different skills that they may master. It is useful if a GM can also measure how interesting these challenges are for the players, and how meaningful they are for the development of their characters or other characters that live in the fictional world. That is the reason why it is important that such a model can take _creative responsibility_[7, 10] while being able to explain what the plan and objectives of each utterance are. **II. Extract player's actions from input**. Since TTRPGs are played through a discussion between the players, these games have an inherent _conversational nature_. Therefore classic research problems related to dialogue systems [9] are fundamental to model GMs. More specifically, in order to _understand_ (i.e. 
semantically represent) the actions taken by the players, decide if they are possible in the fictional world and then determine the outcomes, the GM model should have the ability to semantically analyze their inputs. **III. Commonsense reasoning**. Commonsense reasoning is an important research area within NLP [26], and despite the great advances made in the area it remains as one of the hardest tasks [11], even for the recent Large Language Models (LLMs) like ChatGPT [23]. The relation between this classic task and the challenges for a GM is direct: since commonsense is an inherent part of our human identity, it naturally arises when playing RPGs. It is important to note that this challenge is related but different to the previous challenge: a model can semantically represent what a human is saying, but maybe the action does not make sense in some context. For instance, sometimes players may want to do actions that are possible in the real world but not in the fictional world (e.g. a character wants to play basketball but there is no gravity in her world). **IV. Track the game state**. One of the core aspects of RPGs is to let the players act as they wish, what in IS is usually called _user agency_[25]. Making the players feel this way while thoroughly tracking the state of items (i.e. objects) and characters is one of the greatest problems for IS [6, 18]. To track some component of the game is to know where it is, how hurt (in case of a living being) or damaged it is (in case of an object), and other properties that it may have (e.g. intensity of the magic property of a sacred object). This game state must be constantly updated as the world changes and the story moves forward. Finally, we would like to mention other relevant aspects for this long path. In first place, we think it is crucial that the narrative structure and the game state may be represented using a human-readable format. Since RPG games are used in educational [13] and therapy environments [2], such GM models could be used to create serious games with a wide range of objectives. Having the possibility of visualizing and customizing the boundaries of an RPG session is extremely crucial for that kind of applications. In second place, it is fundamental that these models generate respectful and _ethical_ outputs, to make the players feel safe and included. In modern RPGs like "Alice is Missing"8 there are mechanics to silently communicate the rest of players that something recently said was hurtful or uncomfortable. This is crucial when working with neural systems or LLMs, which are known to _hallucinate_[14] and generate offensive outputs [4]. Footnote 8: [https://www.huntersentertainment.com/alice-is-missing](https://www.huntersentertainment.com/alice-is-missing) Last but not least, we have to keep in mind that GMs are constantly adapting the game to fit the players' choices, so they have the additional requirement of facing every challenge described here on the fly. That big challenge is related to what [18] previously described as _open-world improvisational storytelling_. ## 4 How to evaluate such models? The procedure to evaluate creative systems (i.e. appropriate experiments and metrics) has long been a subject of debate, and remains one of the main problems of the field [10, 15]. Since TTRPGs can be modelled as a series of utterances in a complex dialogue [12], we will assume that a GM model will always have a _conversational nature_, as we mentioned in _challenge II_. 
This gives us a general guideline: there is always a player who is asking or trying to do something, and another player answering or reacting to it. The first idea that comes to mind could be to ask humans to play and evaluate the models based on their reaction. Although we consider important to measure how fun it is to play with the models, the humans' judgments can be very subjective, not very specific, and also biased by the fluency of the generated text [3]. This bias can be stronger when working with LLMs, since they are trained to sound very natural to the human reader (exploiting the patterns behind the form of massive amounts of texts [5]), what can lead to distract the evaluators from their goal of judging specific characteristics of the models' output. Hence, we would like to take an approach on evaluating basic, almost essential, skills that a GM should master. We propose three different test categories related to the previously described challenges: _commonsense reasoning_, the capacity to track _items_ in the world and the ability to coherently design _maps_. These categories were designed reflecting on core characteristics of RPGs, so we think they can be used to evaluate any system trying to model a GM, independently of the theme and features of the modelled game, and the technology used to play it. We also hope these categories work as a guide for human evaluators, helping them to judge models while reducing the subjectivity, the mentioned biases and the evaluation noise as possible. We will describe each of them next. ### GM-P-GM pattern In _challenges II_ and _III_ we discussed the importance of pragmatics and commonsense reasoning for a GM model. In order to evaluate the performance on this challenges we propose the GM-P-GM pattern, a formalization of the most elemental interaction between a GM and a player [12]. Specifically, we propose to evaluate the model's ability to judge the feasibility of a player's action: * \(GM_{1}\): Narrates a **situation** to solve in some **context**. * _Player_: Describes the **actions** to overcome that **situation**. * \(GM_{2}\): Validates if those **actions** are feasible for that **context**, and next narrates the **outcomes**. To run this test we give the model the \(GM_{1}\) and _Player_ contradictory utterances and ask it to generate the \(GM_{2}\) utterance. If the GM model prevents the action and explains why it is an inconsistency, the test is passed. A failure case is shown in table 1. ### Item tracking As we described in _challenge IV_, item tracking is one of the fundamental problems for gamemastering. It is important both to track items that are located in specific places (e.g. on the ground, inside a safe) and to track items that belong to the characters. The latter is usually referred as the _inventory_ of each character but, in order to avoid the use of specific RPG words, in this test we will use _bag_ or _backpack_ as more neutral words for a similar concept. To test if the model can track objects inside or outside the _backpack_, we follow this scheme: 1. _Player_: "What is in my backpack?" 2. \(GM\): \(Object_{1},Object_{2},Object_{3},Object_{4},...\) 3. _Player_: "I take \(Object_{K}\) and leave it on *another location*" 4. \(GM\): *Narrates outcome* 5. _Player_: "What is in my backpack?" 6. \(GM\): \(Object_{1},Object_{2},Object_{3},...\) 7. _Player_: "What is on *another location*?" 8. 
\(GM\): \(Object_{A},Object_{B},...\) If the lists given in steps #2 and #6 are the same or contain other unrelated objects, then the test is failed. In case the object is not listed in step #6, step #8 tries to check whether the model registered the object's new location. A failure case is shown in table 2. \begin{table} \begin{tabular}{|p{42.7pt}|p{284.5pt}|} \hline **GM** & You enter the building to look for the lost suitcase. However, the door is covered in flames. To your right you see a fire extinguisher, which seems to be ready to use. You can’t use anything from your backpack because you can hardly move your arms. What will you do? \\ \hline **Player** & I take a bucket of water from my backpack and put out the fire \\ \hline **GM** & As the player grabs the bucket of water from their backpack, they realize it’s not nearly enough to put out the flames completely. However, they quickly pour the water onto the fire, creating a small opening in the flames (...) \\ \hline **Result** & **Failed** \\ \hline \end{tabular} \end{table} Table 1: An example of a _GM-P-GM_ unit test. ### Map design When GMs create a fictional world they also face the challenge of designing an interesting and coherent concrete map for the players to explore, as we described in _challenge I_. Although we also justified the importance of having interesting situations for the players, for now we do not focus on how attractive, large or complex the generated maps are. Inspired by the scheme of the _Item tracking_ category, we propose to ask the model to describe the accessible locations from the current location. Then the player must choose to move to one of those locations and go back again to the original location, in order to test the map's coherence. If during this process the available locations are coherent, the test is passed. A successful case is shown in table 3. ## 5 Large Language Models as out-of-the-box GMs In order to perform some experiments with our three proposed categories, we wrote five unit tests in Spanish and English for each of them. We then used those unit tests to evaluate the out-of-the-box GM skills of three LLMs: ChatGPT10, Bard11 and the LLaMA-based OpenAssistant chatbot12. We consider these three LLMs appropriate since they are dialogue systems (i.e. they have a _conversational nature_), a fundamental requirement for a GM model. \begin{table} \begin{tabular}{|p{28.5pt}|p{28.5pt}|} \hline **GM** & [...] **Aside from the mysterious key, your backpack also contains the usual adventurer's equipment: a sturdy rope, a grappling hook, a water flask, some dried rations, a small first-aid kit, a compass, a map of the surrounding area, and a trusty dagger at your side.** \\ \hline **Player** & **I take the rope, the map and the key and I leave them on the floor** \\ \hline **GM** & You carefully lay the rope, map, and key down on the floor, making sure they are within reach in case you need them later. (...) \\ \hline **Player** & What’s in my backpack? \\ \hline **GM** & As you take a closer look at your backpack, you notice that it contains a few essential items that any adventurer would find useful: \\ & A sturdy rope: (...) this rope is perfect for climbing or rappelling down steep surfaces. (...) \\ & A map of the surrounding area: this map shows the nearby landmarks, towns, and topography of the region, allowing you to plan your route. (...) \\ \hline **Result** & **Failed** \\ \hline \end{tabular} \end{table} Table 2: An example of an _Item tracking_ unit test.
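The failed case above suggests that the model keeps no explicit record of where each object is. Purely as an illustration (this sketch is not part of the paper or of the evaluated systems, and every name in it is hypothetical), the state needed to pass the _Item tracking_ tests can be as small as a map from items to locations, which a test harness could compare against the model's answers in steps #2, #6 and #8:

```go
package main

import "fmt"

// GameState is a hypothetical, minimal representation of the world state a GM
// would have to maintain for the Item tracking tests: every item is assigned
// to exactly one location (e.g. "backpack", "floor").
type GameState struct {
	locations map[string]string // item -> current location
}

func NewGameState() *GameState {
	return &GameState{locations: make(map[string]string)}
}

// Place puts an item at a location, overwriting any previous location.
func (g *GameState) Place(item, location string) {
	g.locations[item] = location
}

// ItemsAt answers the "What is in/on X?" questions from the test scheme.
func (g *GameState) ItemsAt(location string) []string {
	var items []string
	for item, loc := range g.locations {
		if loc == location {
			items = append(items, item)
		}
	}
	return items
}

func main() {
	state := NewGameState()
	for _, item := range []string{"rope", "grappling hook", "map", "key"} {
		state.Place(item, "backpack")
	}

	// Step #3 of the scheme: the player leaves some objects on the floor.
	state.Place("rope", "floor")
	state.Place("map", "floor")
	state.Place("key", "floor")

	// Steps #5-#8: the answers must reflect the move, unlike the failure in Table 2.
	fmt.Println("backpack:", state.ItemsAt("backpack")) // expected: grappling hook only
	fmt.Println("floor:", state.ItemsAt("floor"))       // expected: rope, map, key (in some order)
}
```

A human-readable structure like this would also serve the customization and safety goals mentioned earlier for educational and therapy settings, since the boundaries of the session could be inspected and edited directly.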
\begin{table} \begin{tabular}{|p{28.5pt}|p{28.5pt}|} \hline **GM** & [...] **Current place: You find yourself standing at the entrance of a dark and damp cave. The entrance is surrounded by tall trees, and the sound of running water can be heard in the distance.** \\ & Accessible place: Inside the cave, Back to the forest, Along the river \\ \hline **Player** & **I want to explore the cave** \\ \hline **GM** & [...] **Current place: Inside the cave** \\ & Accessible place: The cave entrance, Deeper into the cave \\ \hline **Player** & **I want to go outside** \\ \hline **GM** & [...] **Current place: Outside the cave** \\ & Accessible place: Inside the cave, Along the river, Deeper into the forest \\ \hline **Result** & **Passed** \\ \hline \end{tabular} \end{table} Table 3: An example of a _Map design_ unit test. Since these are only preliminary experiments, we consider it really important to make the experimental logs open, because they can help the reader to critically examine the results reported here and reflect on the real flaws and strengths of both our proposed test categories and the evaluated models. Therefore, the detailed logs of the experiments and their results are accessible on GitHub12. Footnote 12: [https://github.com/sgongora27/skill-check-GM-tests](https://github.com/sgongora27/skill-check-GM-tests) We detail the analysis in two subsections, the quantitative results and the qualitative observations. ### Quantitative results After we ran the tests using the aforementioned models, we carefully examined the outputs and determined the results for each test, shown in table 4. As can be seen, the performance on the _GM-P-GM_ category is really low, regardless of the language or model. This result is aligned with those in [23], where commonsense reasoning was one of the notable weaknesses of ChatGPT. However, the results on the _Item tracking_ and _Map design_ tests were quite good for both ChatGPT and Bard. Although these preliminary experiments do not report a big gap in the results for ChatGPT or Bard between languages, they do reveal their advantage over OpenAssistant. In most cases OpenAssistant simply could not finish the test, due to generating nonsensical outputs that had nothing to do with the narrated events. That problem was even more pronounced for the tests in Spanish. ### Qualitative observations The first and most important observation is that ChatGPT and Bard are really good at making the user feel that they are playing with a real GM. There is a world to interact with, characters to meet and items to use. Everything seems perfect if the player chooses an action from those suggested by the model, although it is far from perfect when the model has to improvise new scenes and keep them coherent. OpenAssistant, however, struggles to deliver even a minimal interactive experience, and the tests had to be repeated several times to obtain a reasonable output.
Our evaluation methodology does not distinguish those kinds of errors, hence this aspect cannot be inferred by just comparing the quantitative results for each category between models (e.g. Bard failed the Spanish _Item tracking_ tests due to wrongly listing the available items, while OpenAssistant failed them because it could not even give a proper output). \begin{table} \begin{tabular}{|l|l|l|l||l|l|l|} \hline **Category** & **OA** [ES] & **BARD** [ES] & **CGPT** [ES] & **OA** [EN] & **BARD** [EN] & **CGPT** [EN] \\ \hline GM-P-GM & 0 out of 5 & 1 out of 5 & 1 out of 5 & 1 out of 5 & 1 out of 5 & 0 out of 5 \\ \hline Item & 0 out of 5 & 0 out of 5 & 2 out of 5 & 0 out of 5 & 3 out of 5 & 1 out of 5 \\ \hline Map & 0 out of 5 & 3 out of 5 & 3 out of 5 & 0 out of 5 & 2 out of 5 & 3 out of 5 \\ \hline \hline **Total** & 0 out of 15 & 4 out of 15 & 6 out of 15 & 1 out of 15 & 6 out of 15 & 4 out of 15 \\ \hline \end{tabular} \end{table} Table 4: Number of passed tests for each of the categories described in section 4, testing OpenAssistant (OA), Google’s Bard and ChatGPT (CGPT), both for English and Spanish. The last row shows the sum of the passed tests for each model-language pair. However, we think that the quantitative results do represent the strengths and weaknesses of each model (e.g. ChatGPT is better at world coherence than commonsense reasoning), and that the "Total" scores also provide an accurate comparison of the experience provided by the different models. The second observation is about the contents generated by the models when taking the _creative responsibility_. Almost every scene _generated_ by the models took place in a medieval-fantasy setting. This relation between RPGs and a medieval setting is aligned with the previous comments in section 2: most of the available data about RPGs is in fact about D&D. As LLMs reproduce the biases in their training data [4], this shows that more work on other RPGs with different themes is needed. There is also a notable lack of plot diversity; after playing for a few hours the narrated events and the available places start to repeat. Although this is related to the previous comment about the medieval settings and the biased data, it is important to keep in mind that a great diversity of plots can be created using a medieval-fantasy setting13, so they are independent flaws and might be studied separately. Footnote 13: This is evidenced by the massive amount of adventures published for RPGs with this theme, such as _Dungeons & Dragons_ or _Pathfinder_. Our third observation is about these models' tendency to constantly adjust the output to the prompt. If the player says or tries to do something, the output will try to adjust the narrative to it, without letting the player feel any mystery about the plot. This is not a good sign for the skills we described in _challenge I_. ### Limitations Although we propose the test categories to assist the evaluation of GM models, human subjectivity is still there. In addition to the difficulties faced in deciding whether or not a test was passed, this subjectivity can also be present in the prompt design, as in the case of the _GM-P-GM_ tests, which need a specific human-designed case in order to run (i.e. a situation to solve and a player's solution to it). To perform a deeper evaluation and extract stronger conclusions we would need a diverse team of human evaluators and a larger number of tests.
It is important to highlight that the difficulties faced when evaluating a creative system, added to the nearly-infinite input space that RPGs offer, make the evaluation even harder. Furthermore, the LLMs show a tendency to move the story forward irregularly: sometimes the model's output narrates a single event happening immediately and sometimes it narrates long scenes. Not having a reliable mechanism (e.g. a symbolic representation) to restrict the model makes the execution of these tests more unpredictable, forcing the human evaluator to make unexpected decisions on the fly. For example, it would be helpful for the _Map design_ tests to have some kind of constraint and visualization components, in order to perform an in-depth analysis of the different reachable places in a given scene without moving the story forward. Also, these dialogue models compute the utterances each time a new input is sent, which makes replicating the experiments harder. Additionally, we share the same limitations found by [23] regarding the time needed to run a small set of tests. ## 6 Conclusions and future work In this paper we discussed some challenges that must be faced in order to model the skills that GMs need to play RPGs, like creating and managing a fictional world, tracking the game state and understanding the players' actions. Following those challenges we proposed three test categories to evaluate any kind of GM model. Although these tests are domain specific, we think they can inspire other evaluation methodologies for dialogue systems. We also used those test categories to perform preliminary experiments with ChatGPT, Bard and OpenAssistant. We found that ChatGPT and Bard can provide a satisfying gaming experience, but they also struggle when dealing with commonsense reasoning. OpenAssistant was unable to maintain the GM role during most of the tests. All 90 unit tests are available on GitHub. The difficulties faced in controlling the models' outputs while running the tests make us think that in the future more _neuro-symbolic_ approaches should be explored. We think that would help to keep the test phase more controllable, and also allow the players to examine the narrative details, avoid some scenes that they do not want to play and add other elements that they do want. In the future we would like to improve these test categories and design more of them to test other gamemastering skills (e.g. modelling the emotional variation of a character during an interaction with another character [20]). ## 7 Acknowledgements This paper has been partially funded by ANII (Uruguayan Innovation and Research National Agency), Grant No. \(POS\_NAC\_2022\_1\_173659\), and by the project CANTOR: Automated Composition of Personal Narratives as an aid for Occupational Therapy based on Reminiscence, Grant No. \(PID2019-108927RB-I00\) (Spanish Ministry of Science and Innovation).
2309.08444
Neural Network Exemplar Parallelization with Go
This paper presents a case for exemplar parallelism of neural networks using Go as parallelization framework. Further it is shown that also limited multi-core hardware systems are feasible for these parallelization tasks, as notebooks and single board computer systems. The main question was how much speedup can be generated when using concurrent Go goroutines specifically. A simple concurrent feedforward network for MNIST digit recognition with the programming language Go was created to find the answer. The first findings when using a notebook (Lenovo Yoga 2) showed a speedup of 252% when utilizing 4 goroutines. Testing a single board computer (Banana Pi M3) delivered more convincing results: 320% with 4 goroutines, and 432% with 8 goroutines.
Georg Wiesinger, Erich Schikuta
2023-09-15T14:46:43Z
http://arxiv.org/abs/2309.08444v1
# Neural Network Exemplar Parallelization with Go ###### Abstract This paper presents a case for exemplar parallelism of neural networks using Go as parallelization framework. Further it is shown that also limited multi-core hardware systems are feasible for these parallelization tasks, as notebooks and single board computer systems. The main question was how much speedup can be generated when using concurrent Go goroutines specifically. A simple concurrent feedforward network for MNIST [1] digit recognition with the programming language Go [2, 3, 4, 5] was created to find the answer. The first findings when using a notebook (Lenovo Yoga 2) showed a speedup of 252% when utilizing 4 goroutines. Testing a single board computer (Banana Pi M3) delivered more convincing results: 320% with 4 goroutines, and 432% with 8 goroutines. Backpropagation, Exemplar Parallelization, Go Programming Language, MNIST ## I Introduction Neural networks and artificial intelligence are becoming more and more important not only in research, but also in daily used technology. Due to the growing amounts of data these artificial agents have to analyze, there is a need for larger throughput and highly efficient neural networks. The programming language Go looks promising for developing such a highly efficient agent, as the language itself has been made not only for highly efficient parallelization but also with fast development in mind. The main question is whether Go is suitable for a highly efficient parallelization of neural networks. The main objective is the creation of an efficient parallelized neural network. There is a possibility that Go could lead to a higher parallelization efficiency/speedup than other programming languages. As Go is a young programming language, the literature about this specific topic is very sparse to almost nonexistent. There are tertiary sources like websites comparing the general throughput of Go to web languages like NodeJS, PHP and Java1. Other literature is related to parallelization speedup. There are also some neural networks realized in Go. No sources for a better comparison of parallelization techniques have been found. The scope of this work is to find out the speedup when using multiple goroutines with a neural network while maintaining a high and sustainable classification accuracy. A working MNIST digit recognition system has been created for testing the speedup with up to sixteen goroutines. The network and parameters have been optimized, but due to only negligible improvements with more than 100 hidden layer nodes this amount has not been exceeded. The execution time for one epoch has been sped up from 856.57-1271.73 (median 1005.80) seconds with 1 goroutine to only 171.50-221.38 (median 201.82) seconds with 16 goroutines on a Banana Pi M3. The Lenovo Yoga 2 showed a less significant speedup, from 137.29-146.01 (median 142.33) seconds with 1 goroutine to 55.10-64.62 (median 56.49) seconds with 4 goroutines. Additional goroutines exceeding the maximum thread limit brought further speedup due to pipelining on the Banana Pi, but a negligible speed loss for the Lenovo Yoga. Footnote 1: [https://www.toptal.com/back-end/server-side-io-performance-node-php-java-go](https://www.toptal.com/back-end/server-side-io-performance-node-php-java-go) ## II Related Work and Baseline Research Artificial neural networks and their parallel simulation gained high attention in the scientific community.
Parallelization is a classic approach for speeding up execution times and exploiting the full potential of modern processors. Still, not every algorithm can profit from parallelization, as the concurrent execution might add a non-negligible overhead. This can also be the case for data parallel neural networks, where accuracy problems usually occur, as the results have to be merged. In the literature a huge number of papers on parallelizing neural networks can be found. An excellent source of references is the survey by Tal Ben-Nun and Torsten Hoefler [7]. However, only few research was done on using Golang in this endeavour. In the following only specific references are listed, which influenced the presented approach directly. The authors of [8] presented a parallel backpropagation algorithm dealing with the accuracy problem only by using a MapReduce and Cascading model. In the course of our work on parallel and distributed systems [9, 10, 11] we developed several approaches for the parallelization of neural networks. In [12], two novel parallel training approaches were presented for face recognizing backpropagation neural networks. The authors use the OpenMP environment for classic CPU multithreading and CUDA for parallelization on GPU architectures. Aside from that, they differentiated between topological data parallelism and structural data parallelism [13], where the latter is focus of the presented approach here. [14] gave a comparison of different parallelization approaches on a cluster computer. The results differed depending on the network size, data set sizes and number of processors. Using Go as parallelization tool was already analyzed for the Single-Program-Multiple-Data approach [15, 16] and showed promising results. However, in this paper we focus on exemplar parallelization. Besides parallelizing the backpropagation algorithm for training speed-up, alternative training algorithms like the Resilient Backpropagation described in [17] might lead to faster convergence. One major difference to standard backpropagation is that every weight and bias has a different and variable learning rate. A detailed comparison of both network training algorithms was given in [18] in the case of spam classification. ## III Data and Methods The following data and methods have been used to gain insight on the speedup possibilities. ### _Choosing the data and parallelization method_ Different approaches of which data could be used have been evaluated. Weather data, crime rates, etc. all seemed to be a good fit, but with the possibility of very inconclusive outputs. Finally the "Hello World!" of neural networks has been chosen: The MNIST dataset [1]. With this ready to use dataset the development process sped up as the convolutional part was already done. Exemplar parallelism [19] has been chosen as parallelization technique. Within the workers the learning method is stochastic gradient descent [20], but due to combining the data in the main connectome, it behaves like a mini-batch update [21]. ### _Basic structure_ First a functional code for a basic neural network has been prepared. With this code it is also possible to define simple multi-layer feedforward networks. From that stable basis more functionality has been added (i.E. different activation functions) to ease up the future development. Then the parallelization of the neural network has been implemented. There were additional challenges with avoiding possible race conditions. 
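To make the chosen approach more concrete, the following is a minimal sketch of the exemplar-parallel worker pattern described above: each worker trains its own copy of the network on a worker batch with stochastic gradient descent, and the copies are then merged back into the main connectome, so the combined update behaves like a mini-batch step. The `Network` and `Sample` types, their methods and the simple averaging merge are illustrative assumptions, not the implementation used in this paper.

```go
package main

import "sync"

// Sample and Network stand in for the MNIST record and network types;
// all names and methods in this sketch are illustrative assumptions.
type Sample struct {
	Pixels []float64
	Label  int
}

type Network struct {
	Weights []float64
}

func (n *Network) Clone() *Network {
	w := make([]float64, len(n.Weights))
	copy(w, n.Weights)
	return &Network{Weights: w}
}

// Train would run forward and backward propagation on one worker batch.
func (n *Network) Train(batch []Sample) { /* omitted */ }

// Merge folds a trained copy back into the main connectome (simple averaging).
func (n *Network) Merge(other *Network) {
	for i := range n.Weights {
		n.Weights[i] = (n.Weights[i] + other.Weights[i]) / 2
	}
}

// trainParallel hands one worker batch to each of `workers` goroutines, waits
// for them to finish, and merges the results before starting the next round.
// Cloning happens before the goroutines start and merging after they end, so
// the main network is never accessed concurrently.
func trainParallel(main *Network, data []Sample, batchSize, workers int) {
	for start := 0; start < len(data); start += batchSize * workers {
		results := make(chan *Network, workers)
		var wg sync.WaitGroup
		for w := 0; w < workers; w++ {
			i := start + w*batchSize
			if i >= len(data) {
				break
			}
			end := i + batchSize
			if end > len(data) {
				end = len(data)
			}
			wg.Add(1)
			go func(local *Network, batch []Sample) {
				defer wg.Done()
				local.Train(batch)
				results <- local
			}(main.Clone(), data[i:end])
		}
		wg.Wait()
		close(results)
		for trained := range results {
			main.Merge(trained)
		}
	}
}

func main() {
	net := &Network{Weights: make([]float64, 784*100+100*10)}
	data := make([]Sample, 60000)
	trainParallel(net, data, 100, 4)
}
```

Averaging is only one possible way to merge the worker copies; the "-race" flag mentioned next is what makes it cheap to verify that a pattern like this really has no data races.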
Go was very helpful with its built-in race detector, which can be utilized with the "-race" flag. It was easy to spot any race conditions and therefore the development sped up in the area deemed to take the most time. Afterwards the possibility to input and compute large datasets has been implemented. A batch file functionality for ease of testing as well as data output functionality have been added too. Afterwards the code and neural network have been optimized for a balance of speed, memory usage and training quality. Shuffling of the training data has been implemented to prevent any unwanted behavior that comes from repeated data. The elastic net regularization [22] has been chosen to get better results and more stability for the neural network. ### _The math_ Different activation functions have been tested to get a high accuracy, although this is not the purpose of this work. At the end it has been concluded that the best way is to start without any data normalization for the data to be put into the input layer. But before any activation function runs over the layer, the data is normalized by dividing each value by the size of the layer (including the bias) to minimize the risk of exploding gradients [23]. The following activation functions have been used: * Input layer: Identity * Hidden layer: ELU [24] * Output layer: SoftMax Variables are as follows: * \(\eta\) = learning rate * t = target * x = neuron value before activation * \(\varphi\) = activation function * \(\delta\) = error * w = weight #### Iii-B1 Activation functions **Identity** The simplest activation function is the identity function. It simply states that the value stays the same, as in equation (1). \[\varphi(x)=x \tag{1}\] So the derivative simply is 1, as shown in equation (2). \[\varphi^{\prime}(x)=1 \tag{2}\] **Exponential Linear Unit** In comparison to ReLU and leaky ReLU, ELU "[...] speeds up learning in deep neural networks and leads to higher classification accuracies." [24] and therefore has been chosen over the other options. Equation (3) shows the math, where alpha is a positive value that can be freely chosen. \[\varphi(x)=\begin{cases}x&\text{if x }\geq 0\\ \alpha*(e^{\text{x}}-1)&\text{if x }<0\end{cases} \tag{3}\] The derivative for training is shown in equation (4). \[\varphi^{\prime}(x)=\begin{cases}1&\text{if x }\geq 0\\ \varphi(x)+\alpha&\text{if x }<0\end{cases} \tag{4}\] **SoftMax** The SoftMax function gives us a classification of the likelihood that the current input represents a certain number. The math is straightforward and shown in equation (5). \[\varphi_{\text{i}}(\overrightarrow{x})=\frac{e^{x_{\text{i}}}}{\sum\limits_{j=1}^{J}e^{x_{\text{j}}}} \tag{5}\] But with this equation, exploding or vanishing gradients [25] can become a problem due to the high likelihood of the exponentials exceeding the floating point range. For the SoftMax activation there is a little "trick". It is possible to add a scalar, as shown in (6), without changing the value of the softmax function [26]. \[\varphi_{\text{i}}(\overrightarrow{x^{\prime}})=\frac{e^{x_{\text{i}}+S}}{\sum\limits_{j=1}^{J}e^{x_{\text{j}}+S}} \tag{6}\] So, instead of using softmax(x), softmax(z) - with a scalar value of the negative x maximum - has been used, as in equation (7). \[z_{\text{i}}=x_{\text{i}}-\max_{i}(x_{\text{i}}) \tag{7}\] If we use the maximum, we push the calculation into the negative number spectrum. So, instead of having values ranging over ]-\(\infty\), \(\infty\)[, they've been shifted to ]0, 1], as in (8).
\[\varphi_{\text{i}}(\overrightarrow{x})=\frac{e^{z_{\text{i}}}}{\sum\limits_{j=1}^{J}e^{z_{\text{j}}}} \tag{8}\] The derivative in equation (9), needed for training, is a little bit more complicated. \[\varphi^{\prime}_{\text{i}}(\overrightarrow{x})=\frac{\partial\varphi_{\text{i}}(\overrightarrow{x})}{\partial x_{\text{j}}}=\begin{cases}\varphi_{\text{i}}(\overrightarrow{x})*(1-\varphi_{\text{j}}(\overrightarrow{x}))&\text{i = j}\\ \varphi_{\text{i}}(\overrightarrow{x})*(0-\varphi_{\text{j}}(\overrightarrow{x}))&\text{i \neq j}\end{cases} \tag{9}\] Mathematicians use (10) to shorten the equation to (11). \[\delta_{\text{ij}}=\begin{cases}1&\text{i = j}\\ 0&\text{i \neq j}\end{cases} \tag{10}\] \[\varphi^{\prime}_{\text{i}}(\overrightarrow{x})=\frac{\partial\varphi_{\text{i}}(\overrightarrow{x})}{\partial x_{\text{j}}}=\varphi_{\text{i}}(\overrightarrow{x})*(\delta_{\text{ij}}-\varphi_{\text{j}}(\overrightarrow{x})) \tag{11}\] But in the end it comes down to the same function as the logistic derivative, shown in equation (12). As the result of the derivation is a diagonal matrix [27], there is no need to calculate the whole matrix. \[\varphi^{\prime}_{\text{i}}(\overrightarrow{x})=(1-\varphi_{\text{i}}(\overrightarrow{x}))*\varphi_{\text{i}}(\overrightarrow{x})=(1-x)*x \tag{12}\] #### Iii-B2 Elastic Net Regularization The elastic net regularization [22] has been used for weight updates. It is a combination of the lasso regression (13) and the ridge regression (14). \[L^{1}=\lambda|w| \tag{13}\] \[L^{2}=\lambda w^{2} \tag{14}\] The elastic net (15) is simple. \[ElasticNet=\lambda|w|+\lambda w^{2} \tag{15}\] A computational optimization is shown in (16). \[ElasticNet=\lambda(|w|+w^{2}) \tag{16}\] For the derivative (17), the signum function (18) is needed. \[|w|=w*sgn(w) \tag{17}\] \[sgn(w)=\begin{cases}1&\text{w > 0}\\ 0&\text{w = 0}\\ -1&\text{w < 0}\end{cases} \tag{18}\] Which leads to (19). \[ElasticNet^{\prime}=\lambda(sgn(w)+2w) \tag{19}\] #### Iii-B3 Loss function Quadratic loss has been chosen as the loss function, although the classification of handwritten digits - as the name says - is a classification problem and therefore cross entropy loss should show better results. Different loss functions will be implemented in the future of this work. \[L=-\frac{1}{2}\sum\limits_{i}^{nclass}\Big{(}t_{\text{i}}-\varphi(x_{\text{i}})\Big{)}^{2}+\lambda\frac{1}{2}\sum\limits_{i}^{k}\Big{(}|w_{\text{i}}|+{w_{\text{i}}}^{2}\Big{)} \tag{20}\] The derivative follows the logistic pattern, therefore the derivative of the loss function is equation (21). \[L^{\prime}=-\sum\limits_{i}^{nclass}\Big{(}t_{\text{i}}-\varphi(x_{\text{i}})\Big{)}+\lambda\sum\limits_{i}^{k}\Big{(}\frac{1}{2}sgn(w_{\text{i}})+w_{\text{i}}\Big{)} \tag{21}\] #### Iii-B4 Forward and backward propagation All the previous information is needed to understand the forward and backward propagation methods. **Forward** After setting the inputs and targets, the first layer of the neural network gets activated. Then for each layer, the next neuron gets excited with the product of the activated value and the weight of the connection between the neurons (22). \[x_{\mathrm{j}}=\varphi(x_{\mathrm{i}})*w_{\mathrm{ij}} \tag{22}\] **Backward** When learning, equation (23) is used to calculate the error. \[\delta=t-\varphi(x) \tag{23}\] The formula for the weight update, with learning rate and the regularization, is given in (24). \[\Delta w=\eta*(\delta*\varphi(x)+\lambda(sgn(w)+w)) \tag{24}\] The final equation is shown in (25).
\[w_{\mathrm{ij}}^{+}=w_{\mathrm{ij}}-\Delta w \tag{25}\] ### _Choosing the parameters_ **ELU alpha:** 0.5 (currently hardcoded) The alpha value for ELU has been hardcoded as there was no incentive to do otherwise in the current software iteration. **workerBatch:** 100 The worker batch has been chosen to merge the single neural networks as often as possible, but without losing too much performance due to context switching. **Minimum/Maximum weight (starting weights):** [-0.1; 0.1] As the weights usually get smaller when learning occurs, the starting values have to be chosen to be 0.1 instead of 1. This led to the best outcome. **LearningRate:** 0.8 The learning rate has been set to 0.8, as this led to the best outcome. **Lambda:** 0.0000001 The multiplier for the elastic net has been set to this value, as it provided the highest accuracy for the training and test set. As it is hard to tell if either L1 or L2 regularization is the best, there is only one lambda for setting both methods, to achieve a balance between the two. ## IV Results/Evaluation For testing the neural network, two available systems have been chosen: the Lenovo Yoga 2 laptop, as it is a dual core consumer product which utilizes threading and a turbo mode for higher workloads, with 64 Bit Linux; and the Banana Pi M3, as it is a well known octa core home server, with no threading, without data distortion due to turbo mode kicking in, and with 32 Bit Linux. Both systems have a standard CPU frequency of 1.80 GHz, although the minimum and maximum values differ. There are stark differences in computation speed as well as speedup between the Intel and the ARM architecture. As RISC and CISC lost their meaning to describe newer architectures, it is not possible to draw a conclusion here, although the main effect could come from the smaller - and therefore faster to access - Intel L1 and L2 caches, or the lack of an L3 cache in the ARM architecture. Further research would be needed. ### _Benchmark_ When using pprof for checking the total CPU usage of the code parts with BenchBatch (it utilizes 4 cores, uses a worker batch of 100 lines, and processes 1 training with 60,000 MNIST lines as well as 1 test with 10,000 MNIST lines), it can be seen in Figure 1 that thinking and training take up about 96% of the total time. Thinking takes about 40% of the time, training takes about 56%. Thinking is the forward propagation, training is the backward propagation. Due to that high amount of CPU usage, heavy optimizations were made in these code parts, as these had the greatest effects. The utility functions only play a marginal role. Even though they wouldn't need any optimization, they've been optimized for general code quality reasons. For example, the garbage collector (mallocgc) is hardly used in the utility functions, and almost never in the main code part. As strings are only converted when needed, these parts of the code - even though they're not impacting the measurements - have been highly optimized. Maybe there's still room for further optimization, but for the general purpose this goal has been exceeded. Fig. 1: pprof worker CPU profile ### _Test systems_ The final tests were made with a "Lenovo Yoga 2 Pro Multimode Ultrabook" as well as a "Banana Pi M3".
Specifications of the Lenovo: Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz, Dual Core (4 threads); Min CPU: 800 MHz, Max CPU: 3.0 GHz; 32 KiB L1 cache, 256 KiB L2 cache, 4 MiB L3 cache; 2x4096 DIMM @ Clockspeed 1600 MHz; 64 Bit Linux Ubuntu 18.04.1 LTS. Specifications of the Banana Pi M3: A83T ARM Cortex-A7 octa-core CPU @ 1.80 GHz, Octa Core (8 threads), 4800 BogoMIPS, ARMv7 Processor rev 5 (v7l); Min CPU: 480 MHz, Max CPU: 1.8 GHz; 512 KiB L1 cache, 1 MiB L2 cache; 2GB LPDDR3; 32 bit (armv7l) Linux Ubuntu 16.04.5 LTS, MATE Desktop Environment 1.12.1. _1) Lenovo Yoga 2:_ The 252% speedup generated with 4 goroutines on the Lenovo Yoga 2 when utilizing more than 1 processor is clearly visible in Figure 2. It is also visible that using more goroutines than processors slows the execution time down only by an almost negligible amount. Parallelization speedup comes at a price. Although very small, there is a slight decrease in recognition rates when utilizing more goroutines, as shown in Figure 3. #### Iv-C2 Banana Pi M3 When looking at the results of the Banana Pi M3 in Figure 4, it is apparent that utilizing multiple cores leads to an even greater benefit than with the Lenovo. It was possible to generate a 320% speedup with 4 goroutines, and - due to pipelining - it was even possible to generate over 498% speedup when using more goroutines than there were threads available. The training and test set accuracies look promising too. A 99.26% training set accuracy and a 97.14% test set accuracy with only one core have been accomplished. The accuracy does not get lower when utilizing more cores, even though quality differences in the recognition rate can occur. In Figure 5 it is clearly visible that recognition rate drops can occur at any time. ### _Accuracy growth depending on goroutines_ When only one goroutine is used (Figure 6) with the Banana Pi, the neural network starts with a very high recognition accuracy after the first epoch and has a very good learning rate. With 16 goroutines (Figure 7) the recognition accuracy starts lower and the network takes longer to learn. Fig. 3: Lenovo Yoga 2 accuracy. Fig. 2: Lenovo Yoga 2 speedup. 1 goroutine, accuracy > 90%/95%/99%: 93.24% accuracy after 1 epoch, 1040 seconds; 95.22% accuracy after 2 epochs, 2006 seconds; 99.03% accuracy after 15 epochs, 15608 seconds. 16 goroutines, accuracy > 90%/95%/99%: 90.18% accuracy after 2 epochs, 392 seconds; 95.12% accuracy after 8 epochs, 1583 seconds; 99.02% accuracy after 49 epochs, 9628 seconds. To reach a higher accuracy with more goroutines, more epochs and training samples are needed. But the speedup allows training in a shorter time - or, to look at it from another perspective - computing more inputs in a much shorter timespan. ## V Lessons Learned The final part of this work is to look at what has been learned about the "do's and don'ts of implementing neural networks", Go as a language, the drawn conclusion, and possible future work. ### _"To do, or not to do?" of implementing neural networks_ There are certain roads to victory and many paths to development hell. The latter leads to a steeper learning curve and should therefore be preferred when trying to understand the implications of certain design decisions - but under normal circumstances the beaten path is the quicker route. These recommendations for other coders shall make implementing neural networks a little bit easier and shine a light on which thought processes are good and which are impractical to do.
#### V-1 Arrays instead of structs Do not use one struct instance per neuron, connection, etc., as it has a large overhead. The compiler is able to optimize the usage of arrays. The first iteration of the neural network took hours for just one epoch on the Lenovo, while the array version takes less than a minute. #### V-2 Only save when necessary Only save and load data when needed. In the context of the neural network: save either batchwise or after every epoch. Try to hold the data in memory as long as possible. #### V-3 Machine readable is better than human readable The conversion of data to XML, JSON, or any other human readable format takes a higher amount of computation time, memory, and disk space than machine readable formats. If a human readable format is needed, it should only be created if a human wants to read it and there is a need for them to do so. Sifting through millions of weights and updates is not something a human should do. But, depending on the use case, the human readable format can be created when * The process is finished and the results shall be shown. * An error occurs and the data is necessary to fix it. If human entities want to access data while the process is running (in real time, or stepwise for debugging) there are different approaches: * Create only one file every few epochs which can be accessed by multiple human entities. Do NOT create it for every entity that accesses the file. * Duplicate the machine readable results and parse them on a different system. For snapshots a simple ID can be given to every file. #### V-4 Parallelization and context switches It takes time to store the states of threads. Data has to be shoved around between CPU caches. If applicable, give a worker as much data as possible, with one drawback in mind: more data merges mean higher fluctuations and slower computation - fewer data merges can lead to a more stable convergence and faster computation [28], as well as a higher level of generalization [23]. All while still being able to perform online learning, due to the individual workers performing stochastic gradient descent. #### V-5 Struct packing In Go it is possible to pack structs. That means organizing the data types in a way so that they waste the least amount of memory. The principle for this work was "Memory is cheap, but even though students lack the money to buy some, there is no need to overdo it". But one late evening (at 7 o'clock in the morning) these principles had been thrown overboard. So structs have been packed. #### V-6 Slices Do not loop through lines of data to append them to a batch line by line. Use the slice functionality of Go - which passes them by reference - if applicable. The following data has been taken from the old model, which used the updated weights for the error calculation (Figure 8). For example, the code in Figure 9 takes 129.68 seconds for 1 training and 1 testing with the MNIST dataset, 4 cores, and a worker batch setting of 100, as shown in Figure 10. In comparison, utilizing slices to send the worker batches to the workers (Figure 11), instead of making a slice and appending the data lines within core_run(), saved about 17 seconds on the Lenovo, as is visible in Figure 12. Fig. 8: Code changes from wrong to correct code. Fig. 6: Accuracy growth with 1 goroutine. Fig. 7: Accuracy growth with 16 goroutines.
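As a rough illustration of the slice lesson above (this is a sketch with hypothetical names, not the core_run() code from Figures 9-12), the two batching styles might look like this in Go; the second variant re-slices the backing array instead of appending every line:

```go
package main

import "fmt"

type line []float64

// batchByAppend builds every worker batch by appending the data lines one by
// one, which is the pattern the lesson advises against.
func batchByAppend(data []line, size int) [][]line {
	var batches [][]line
	for i := 0; i < len(data); i += size {
		var batch []line
		for j := i; j < i+size && j < len(data); j++ {
			batch = append(batch, data[j])
		}
		batches = append(batches, batch)
	}
	return batches
}

// batchBySlicing re-slices the dataset instead: each batch is just a view of
// the backing array and is passed to the worker by reference.
func batchBySlicing(data []line, size int) [][]line {
	batches := make([][]line, 0, (len(data)+size-1)/size)
	for i := 0; i < len(data); i += size {
		end := i + size
		if end > len(data) {
			end = len(data)
		}
		batches = append(batches, data[i:end])
	}
	return batches
}

func main() {
	data := make([]line, 60000)
	fmt.Println(len(batchByAppend(data, 100)), len(batchBySlicing(data, 100)))
}
```

The 17-second difference reported above was measured on the paper's own code (Figures 9-12), not on this sketch.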
Another lesson concerns trusting sources: some of them presented the derivatives of the activation functions, but used the old equations from the first two versions of the paper. There is a small effect on the derivative functions when x = 0. For example with ELU: the old and new equation are only equal when \(\alpha\) = 1. Otherwise, when x = 0, f'(x) should be \(\alpha\), not 1. One source was trusted because a University module handed out an excerpt of it, without citing the source, as the basis for a graded assignment to calculate a forward, backward, and forward propagation by hand and to create a simple neural network; this source used the updated weights as the basis for the backpropagation. There was a larger accuracy drop (Figure 15) when using multiple goroutines. Although it was possible to see when to tweak the parameters to gain a higher accuracy with a single core (Figure 16), no practical use of these values has been found for the correct implementation - the hyperparameters vary widely, so they have to be tuned differently. Also, mathematical sources are often not the best source for calculations in information systems. Math has been spared the problem of errors due to overflow and underflow (except when using calculators or information systems), and in mathematics there is no need to optimize for memory or computation speed. The problem with trusting the wrong data has been solved with further research from different sources and consulting a mathematician to check if the partial derivatives and all formulas have been implemented correctly. The error has then been found very quickly when checking against the standard reference [26]. Fig. 16: Accuracy with wrong parameters. Fig. 14: Benchmark after code changes. Fig. 15: Old plot showing the accuracy decrease. ### _Comment on Go_ Go is a wonderful language to write code in. Implementation and testing of the neural network seemed to be easier than with other programming languages. But Go also has some drawbacks (as does any language). The main annoyance was the "unused imports" error. Sometimes only certain outputs are needed for testing, which will get dropped by the developer immediately afterwards. It's good that the Go compiler sees these oversights as errors, even though they are a huge annoyance. A probably better way would be if unused imports were not treated as errors in a debug environment, only in production. But this would have additional drawbacks when Go is used in environments where code quality is not highly valued. Another annoyance is the "sudden 'bad file descriptor' of doom". Sometimes it's just a "data reader error: file already closed". It was not possible to pin down what exactly causes the error, only that it affects the file as a whole. Not even deleting and creating a new file with the same name helps to overcome that error. Further testing is needed. An additional observation that can ruin one's day is that the Go compiler for some reason accesses trashed files, at least under Linux. There is no problem when files are overwritten by new files. But if a file gets deleted, and a new one is inserted instead, Go sometimes seems to try to compile the deleted files, which can lead to hard to trace errors. If there is, for example, an error where the Go compiler expects an integer value, the code provides an integer value, but recently a file with the same function expecting a double value had been trashed, simply empty the trash bin.
Another "hard to debug except when you know it" part is: "panic: runtime error: invalid memory address or nil pointer dereference". This error occurs when the object has not been created with new(...). If it's further up in the code, i.e. some struct attribute, this error is not easy to find. When starting with Go that panic tells almost nothing about its nature. Circular dependencies are not allowed. They can happen while refactoring code or when making some design mistake. It's good that Go does not permit them as they are a sign of bad software design. The short variable declaration := is very handy. Go recognizes the type and assigns the value to the left hand variable. The best part: It won't break type safety, which prevents weird behavior. With the test coverage profiler it is easy to see the current code coverage. There is also the possibility to create test heat maps and to show the test coverage in the browser with highlighting good, poor, or not covered code parts2. Footnote 2: Go test coverage and html heatmap: [https://blog.golang.org/cover](https://blog.golang.org/cover) There are memory and cpu profilers, and even a profiling tool3. It is easy to list the top cpu or memory consumers or show a profiling web. Therefore memory issues can be found easily, as well as slow code parts. Footnote 3: Go profiling: [https://blog.golang.org/profiling-go-programs](https://blog.golang.org/profiling-go-programs) Go uses function inlining which is a great method for speeding up code. Goroutines are very lightweight4. As they're very efficient and only start to run if they get data from a channel, there's the probability of an application for parallelized neurons instead of only parallelized networks. Footnote 4: Currently 2, 4, or 8 LB per Goroutine, depending on the version, i.e. [https://github.com/golang/go/issues/7514](https://github.com/golang/go/issues/7514) It's easy to find and fix race conditions with Go as it comes with its own race detector. ## VI Conclusion and Future work It has been learned how to use the programming language Go and about its parallel speedup possibilities. The main accomplishment of this work is to have managed to create a stable and fast neural network. The hardest part was to understand the mathematical concepts and ramifications behind neural networks and how to implement them software wise. The main focus of this work was to see how the parallel speedup of a neural network behaves with the language Go. Due to time and resource restrictions only little derivations from the main focus were made. There are still ways left to make this neural network even more efficient, with higher accuracy, and so on. The current version could have some possible memory leaks. They will be fixed in a future version. As there will be further changes due to development and additional insights, the code will probably be refined and refactored in the future. Some parts of the code are still untested - mainly file reading and writing. As they work as intended no additional effort has been made to get 100% test coverage in these areas. Here is room for improvement. Optimization of the neural network would be the largest part of the future work. Currently it is only a simple network with Bias. It would be possible to implement momentum [29] and other artifacts to achieve higher accuracies. NADAM and other stochastic gradient descent optimization algorithms [28] could be implemented too. 
Smaller changes will also include several options, in example if the user wants bias nodes, which error severity to log, and to choose different lambdas for the L1 and L2 Regularization in the elastic net. Adaptive learning rates [29] would be of interest too. Different loss functions, especially Cross Entropy Loss [30] will be implemented in the future. There is an interest to look into Self-Normalizing Neural Networks [31].
2308.16553
The seating couple problem in even case
In this paper we consider the seating couple problem with an even number of seats, which, using graph theory terminology, can be stated as follows. Given a positive even integer $v=2n$ and a list $L$ containing $n$ positive integers not exceeding $n$, is it always possible to find a perfect matching of $K_v$ whose list of edge-lengths is $L$? Up to now a (non-constructive) solution is known only when all the edge-lengths are coprime with $v$. In this paper we firstly present some necessary conditions for the existence of a solution. Then, we give a complete constructive solution when the list consists of one or two distinct elements, and when the list consists of consecutive integers $1,2,\ldots,x$, each one appearing with the same multiplicity. Finally, we propose a conjecture and some open problems.
M. Meszka, A. Pasotti, M. A. Pellegrini
2023-08-31T08:44:22Z
http://arxiv.org/abs/2308.16553v1
# The Seating couple problem in even case ###### Abstract. In this paper we consider the seating couple problem with an even number of seats, which, using graph theory terminology, can be stated as follows. Given a positive even integer \(v=2n\) and a list \(L\) containing \(n\) positive integers not exceeding \(n\), is it always possible to find a perfect matching of \(K_{v}\) whose list of edge-lengths is \(L\)? Up to now a (non-constructive) solution is known only when all the edge-lengths are coprime with \(v\). In this paper we firstly present some necessary conditions for the existence of a solution. Then, we give a complete constructive solution when the list consists of one or two distinct elements, and when the list consists of consecutive integers \(1,2,\ldots,x\), each one appearing with the same multiplicity. Finally, we propose a conjecture and some open problems. Key words and phrases:seating couple problem, matching, Skolem sequence 2010 Mathematics Subject Classification: 05C70, 05A17 ## 1. Introduction In [8] the authors considered the following problem proposed by Roland Bacher. A king invites \(n\) couples for dinner at his round table containing \(2n+1\) seats, the king taking the last unoccupied chair. The king has to address the following task: given an arbitrary set of \(n\) couples, no one married for more than \(n\) years, is it always possible to seat all \(n\) couples at his table according to the royal protocol stipulating that, if the two spouses of a couple are in their \(i\)-th year of marriage, they have to occupy two chairs at _circular distance_\(i\) (where circular distance \(i\) means that the two chairs are separated by exactly \(i-1\) chairs)? Using a mathematical language, the problem can be restated as follows: given an arbitrary list of \(n\) natural numbers \(d_{1},\ldots,d_{n}\) in \(\{1,\ldots,n\}\), is it always possible to find an involution of \(2n+1\) circularly ordered points having a unique fixed point and consisting of \(n\) disjoint transpositions exchanging respectively two points at circular distance \(d_{1},\ldots,d_{n}\)? We point out that the same problem can be also stated using graph terminology (as already done in [9]). We prefer this choice because, in our proofs, we use tools from graph theory. To this purpose we introduce some definitions and notation, see [11] for a very good reference. In this paper \(K_{v}\) denotes the complete graph on \(\{0,1,\ldots,v-1\}\) for any positive integer \(v\). The length \(\ell(u,w)\) of an edge \(\{u,w\}\) of \(K_{v}\) is defined as \[\ell(u,w)=\min(|u-w|,v-|u-w|).\] If \(\Gamma\) is a subgraph of \(K_{v}\), then the list of edge-lengths of \(\Gamma\) is the list \(\ell(\Gamma)\) of the lengths (taken with their respective multiplicities) of all the edges of \(\Gamma\). For convenience, if a list \(L\) consists of \(a_{1}\)\(1\)'s, \(a_{2}\)\(2\)'s, \(\ldots,a_{t}\)\(t\)'s one writes \(L=\{1^{a_{1}},2^{a_{2}},\ldots,t^{a_{t}}\}\), whose underlying set is the set of the elements \(\{i:a_{i}>0\}\). Given a graph \(\Gamma\) we denote by \(V(\Gamma)\) and \(E(\Gamma)\) its vertex-set and its edge-set, respectively. If \(\Gamma\) has an odd (even, respectively) number of vertices, say \(2n+1\) (\(2n\), resp.), a _near_\(1\)-_factor_, ## 1. Introduction Let \(\Gamma\) be a finite finite set of integers. Let \(\Gamma\) be a finite set of integers. Let \(\Gamma\) be a finite set of integers. 
the complete graph having a given list of edge-lengths and we give results for some special classes of lists, including the case in which the underlying set has size one. Then, it is natural to consider lists having exactly two distinct edge-lengths, see Section 3. In this case, we obtain a full classification, as described in the following. **Theorem 1.4**.: _Let \(n,x,y,a\) be four integers such that \(1\leq x,y\leq n\), \(x\neq y\) and \(1\leq a<n\). Let \(d_{x}=\gcd(x,2n)\), \(d_{y}=\gcd(y,2n)\) and \(d=\gcd(x,y,2n)\). There exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{x^{n-a},y^{a}\}\) if and only if \(d\) divides \(n\) and one of the following cases occurs:_ 1. \(\frac{x}{d}\) _is even,_ \(\frac{y}{d}\) _is odd,_ \(n-a\) _is even and either_ 1. \(d_{x}\) _divides_ \(n\)_; or_ 2. \(d_{x}\) _does not divide_ \(n\) _and_ \(2a\geq d_{x}\)_;_ 2. \(\frac{x}{d}\) _is odd,_ \(\frac{y}{d}\) _is even,_ \(a\) _is even and either_ 1. \(d_{y}\) _divides_ \(n\)_; or_ 2. \(d_{y}\) _does not divide_ \(n\) _and_ \(2(n-a)\geq d_{y}\)_;_ 3. \(\frac{x}{d}\) _and_ \(\frac{y}{d}\) _are both odd, and the following two conditions are both satisfied:_ 1. \(a\) _is even or_ \(da\geq d_{x}\)_._ 2. \(n-a\) _is even or_ \(d(n-a)\geq d_{y}\)_._ In Section 4 we consider lists in which each element appears the same number of times. In particular, we provide a complete solution for lists consisting of the integers \(1,2,\ldots,x\), for some \(1\leq x\leq n\), each one appearing with the same multiplicity. We conclude our paper with some considerations and highlighting a conjecture and two open questions that, we believe, are of particular interest for the seating couple problem. ## 2. Necessary conditions and preliminary results In the following given two integers \(a\) and \(b\) with \(a\leq b\), by \([a,b]\) we mean the set with elements \(a,a+1,\ldots,b\), while it is empty when \(a>b\). Given an edge \(\{u,w\}\) of \(K_{v}\) it is useful to define \(\ell^{\prime}(u,w)=|u-w|\). So, the length \(\ell(u,w)\) is nothing but \(\min(\ell^{\prime}(u,w),v-\ell^{\prime}(u,w))\). Clearly, if \(\ell^{\prime}(u,w)\leq\left\lfloor\frac{v}{2}\right\rfloor\), then \(\ell^{\prime}(u,w)=\ell(u,w)\). If \(\Gamma\) is a subgraph of \(K_{v}\), \(\ell^{\prime}(\Gamma)\) denotes the list \(\{\ell^{\prime}(e):e\in E(\Gamma)\}\). Also, given a nonnegative integer \(k\), by \(\Gamma+k\) one means the graph with vertex-set \(\{u+k:u\in V(\Gamma)\}\) and edge-set \(\{\{u+k,w+k\}:\{u,w\}\in E(\Gamma)\}\). Note that \(\Gamma+k\) is not necessarily a subgraph of \(K_{v}\). **Remark 2.1**.: _Given a perfect matching \(F_{1}\) of \(K_{v_{1}}\) and a perfect matching \(F_{2}\) of \(K_{v_{2}}\), one can easily get a perfect matching \(F=F_{1}\cup(F_{2}+v_{1})\) of \(K_{v_{1}+v_{2}}\) such that \(\ell^{\prime}(F)=\ell^{\prime}(F_{1})\cup\ell^{\prime}(F_{2})\). Note that, in general, the equality \(\ell(F)=\ell(F_{1})\cup\ell(F_{2})\) does not have to hold._ **Example 2.2**.: Consider for instance the perfect matchings \(F_{1}=\{\{0,3\},\{1,2\}\}\) of \(K_{4}\) and \(F_{2}=\{\{0,4\},\{1,2\},\{3,5\}\}\) of \(K_{6}\). Then, \(F=F_{1}\cup(F_{2}+4)=\{\{0,3\},\{1,2\},\{4,8\},\{5,6\}\), \(\{7,9\}\}\) is a perfect matching of \(K_{10}\) such that \(\ell^{\prime}(F)=\{1^{2},2,3,4\}=\ell^{\prime}(F_{1})\cup\ell^{\prime}(F_{2})\). On the other hand, \(\ell(F)=\{1^{2},2,3,4\}\), while \(\ell(F_{1})\cup\ell(F_{2})=\{1^{3},2^{2}\}\). 
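As a quick sanity check (this small program is not part of the paper), the length function and the computations in Example 2.2 can be verified directly; it also illustrates why, as noted in Remark 2.1, \(\ell(F)\) need not equal \(\ell(F_{1})\cup\ell(F_{2})\):

```go
package main

import (
	"fmt"
	"sort"
)

// edgeLengths returns the multiset of edge-lengths of a matching F in K_v,
// where the length of {u,w} is min(|u-w|, v-|u-w|).
func edgeLengths(F [][2]int, v int) []int {
	var lengths []int
	for _, e := range F {
		d := e[0] - e[1]
		if d < 0 {
			d = -d
		}
		if v-d < d {
			d = v - d
		}
		lengths = append(lengths, d)
	}
	sort.Ints(lengths)
	return lengths
}

func main() {
	// Example 2.2: F = F1 ∪ (F2 + 4) is a perfect matching of K_10.
	F := [][2]int{{0, 3}, {1, 2}, {4, 8}, {5, 6}, {7, 9}}
	fmt.Println(edgeLengths(F, 10)) // {1,1,2,3,4}

	// The same edges read as F1 in K_4 and F2 in K_6 give {1,1} and {1,2,2},
	// so their union {1,1,1,2,2} differs from the list above, as in the example.
	fmt.Println(edgeLengths([][2]int{{0, 3}, {1, 2}}, 4))         // {1,1}
	fmt.Println(edgeLengths([][2]int{{0, 4}, {1, 2}, {3, 5}}, 6)) // {1,2,2}
}
```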
We now present some necessary conditions for the existence of a perfect matching with a given list of edge-lengths. **Proposition 2.3**.: _Let \(v=2n\) be a positive integer and \(L\) be a list of \(n\) positive integers not exceeding \(n\). If there exists a perfect matching \(F\) of \(K_{v}\) such that \(\ell(F)=L\), then for any divisor \(d\) of \(v\) such that \(d\) does not divide \(n\), the number of multiples of \(d\) appearing in \(L\) does not exceed \(\frac{v-d}{2}\)._ Proof.: The proof is exactly the same as that of [7, Proposition 1]. In this case the hypothesis that \(d\) does not divide \(n\) ensures that \(\frac{v}{d}\) is odd, which is a necessary (and automatically satisfied) condition in the proof of [7, Proposition 1]. **Proposition 2.4**.: _Let \(L=\{1^{a_{1}},2^{a_{2}},\ldots,n^{a_{n}}\}\) where \(a_{i}\geq 0\) for every \(i\in[1,n]\). If there is a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=L\), then \(\sum\limits_{i=1}^{\left\lfloor\frac{n}{2}\right\rfloor}a_{2i}\) is even._ Proof.: Let \(L_{1}\) be the sublist of \(L\) containing exactly all the even elements of \(L\), and set \(L_{2}=L\setminus L_{1}\). Hence, \(L_{2}\) contains all the odd elements of \(L\). Let \(F_{1}\) and \(F_{2}\) be the subgraphs of \(F\) such that \(\ell(F_{1})=L_{1}\) and \(\ell(F_{2})=L_{2}\). Clearly, the length of an edge is odd if and only if its end-vertices have different parity. Hence, \(V(F_{2})\) contains the same number (that is \(|L_{2}|\)) of even and odd numbers. This implies that also \(V(F_{1})=V(K_{2n})\setminus V(F_{2})\) contains the same number of even and odd numbers. Since the end-vertices of the edges of \(F_{1}\) have the same parity, this implies that \(|L_{1}|\) is even. The statement follows. **Proposition 2.5**.: _Let \(c\) and \(n\) be positive integers and let \(F_{0},F_{1},\ldots,F_{c-1}\) be perfect matchings of \(K_{2n}\) such that \(\ell(F_{i})=\{1^{a_{i,1}},2^{a_{i,2}},\ldots,n^{a_{i,n}}\}\), where \(a_{i,j}\geq 0\). Then, there exists a perfect matching \(F\) of \(K_{2nc}\) such that \(\ell(F)=\{c^{b_{1}},(2c)^{b_{2}},\ldots,(nc)^{b_{n}}\}\), where \(b_{j}=\sum\limits_{i=0}^{c-1}a_{i,j}\)._ Proof.: Let \(R_{i}\) be the matching of \(K_{2nc}\) obtained from \(F_{i}\) by applying the relabeling \(u\mapsto cu+i\). Hence, \(\ell(R_{i})=\{c^{a_{i,1}},(2c)^{a_{i,2}},\ldots,(nc)^{a_{i,n}}\}\) and \(V(R_{i})=\{w:w\in[0,2nc-1]\text{ and }w\equiv i\pmod{c}\}\). We conclude that \(R_{0}\cup R_{1}\cup\ldots\cup R_{c-1}\) is a perfect matching of \(K_{2nc}\) with the required properties. **Proposition 2.6**.: _Let \(c\) and \(n\) be positive integers and let \(F\) be a perfect matching of \(K_{2nc}\) such that \(\ell(F)=\{c^{b_{1}},(2c)^{b_{2}},\ldots,(nc)^{b_{n}}\}\), where \(b_{j}\geq 0\). Then, there exist \(c\) (not necessarily distinct) perfect matchings \(F_{0},F_{1},\ldots,F_{c-1}\) of \(K_{2n}\) such that \(\ell(F_{i})=\{1^{a_{i,1}},2^{a_{i,2}},\ldots,n^{a_{i,n}}\}\), where \(a_{i,j}\geq 0\) and \(b_{j}=\sum\limits_{i=0}^{c-1}a_{i,j}\)._ Proof.: The end-vertices of each edge of \(F\) belong to the same congruence class modulo \(c\). So, considering the vertices in each congruence class, we obtain \(c\) submatchings \(S_{0},S_{1},\ldots,S_{c-1}\) of \(F\) such that \(V(S_{i})=\{u:u\in[0,2nc-1]\text{ and }u\equiv i\pmod{c}\}\) and \(\ell(S_{i})=\{c^{a_{i,1}},(2c)^{a_{i,2}},\ldots,\allowbreak(nc)^{a_{i,n}}\}\). Applying the relabeling \(u\mapsto\frac{u-i}{c}\) we obtain the perfect matchings \(F_{0},F_{1},\ldots,F_{c-1}\) of \(K_{2n}\).
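The relabelings used in the proofs of Propositions 2.5 and 2.6 are easy to make concrete. The following short program (an illustration, not part of the paper) applies the map \(u\mapsto cu+i\) to copies of a perfect matching of \(K_{2n}\) and checks that their union covers all vertices of \(K_{2nc}\):

```go
package main

import "fmt"

// scaleMatching implements the relabeling u -> c*u + i used in the proof of
// Proposition 2.5: it maps a perfect matching of K_{2n} into the congruence
// class i modulo c inside K_{2nc}, multiplying every edge-length by c.
func scaleMatching(F [][2]int, c, i int) [][2]int {
	R := make([][2]int, len(F))
	for k, e := range F {
		R[k] = [2]int{c*e[0] + i, c*e[1] + i}
	}
	return R
}

func main() {
	// Take c = 3 copies of the perfect matching {{0,1},{2,3}} of K_4 (n = 2),
	// whose list of edge-lengths is {1,1}.
	F := [][2]int{{0, 1}, {2, 3}}
	c := 3

	var union [][2]int
	for i := 0; i < c; i++ {
		union = append(union, scaleMatching(F, c, i)...)
	}
	// union is a perfect matching of K_12 and every edge has length 3 = c*1.
	covered := make(map[int]bool)
	for _, e := range union {
		covered[e[0]], covered[e[1]] = true, true
	}
	fmt.Println(union, "covers", len(covered), "of 12 vertices")
}
```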
Previous proposition is a useful tool for getting some non-existence results as shown in the following example. **Example 2.7**.: Applying Proposition 2.6 one can see that there is no perfect matching \(F\) of \(K_{20}\) such that \(\ell(F)=\{4^{3},6^{7}\}\). In fact, if we take \(c=2\) the existence of \(F\) would imply the existence of two perfect matchings \(F_{0},F_{1}\) of \(K_{10}\) such that \(\ell(F_{0})=\{2^{a_{0,2}},3^{a_{0,3}}\}\) and \(\ell(F_{1})=\{2^{a_{1,2}},3^{a_{1,3}}\}\), where \(a_{0,2}+a_{1,2}=3\). By Proposition 2.4 we have a contradiction. We now consider the existence of perfect matchings for some special lists. **Proposition 2.8**.: _Let \(x\) and \(n\) be two positive integers such that \(1\leq x\leq n\). There exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{x^{n}\}\) if and only if \(\gcd(x,2n)\) is a divisor of \(n\)._ Proof.: Set \(d=\gcd(x,2n)\). If \(d\) does not divide \(n\), the non-existence of the perfect matching follows by Proposition 2.3. Suppose now that \(d\) divides \(n\). Let \[F=\left\{\left\{2ix+j,(2i+1)x+j\right\}:i\in\left[0,\frac{n}{d}-1\right],j\in[0,d -1]\right\},\] where the elements are considered modulo \(2n\). Clearly, \(F\) is a set of \(n\) edges, each of length \(x\). It is not hard to see that \(V(F)=V(K_{2n})\), namely that \(F\) is a perfect matching of \(K_{2n}\) such that \(\ell(F)=\{x^{n}\}\). **Example 2.9**.: Take \(x=9\) and \(n=12\), hence \(d=\gcd(x,2n)=3\). In particular \(d\) divides \(n\), so there exists a perfect matching \(F\) of \(K_{24}\) such that \(\ell(F)=\{9^{12}\}\). Following the proof of previous proposition, we have \(F=\{\{18i+j,18i+9+j\}:i\in[0,3],j\in[0,2]\}\), that is \[\begin{array}{rcl}F&=&\{\{0,9\},\{18,3\},\{12,21\},\{6,15\},\{1,10\},\{19,4 \},\{13,22\},\{7,16\},\\ &\{2,11\},\{20,5\},\{14,23\},\{8,17\}\}.\end{array}\] **Proposition 2.10**.: _Let \(x,y,n\) be three integers such that \(1\leq x<y<n\). There is no perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{x,y,n^{n-2}\}\)._ Proof.: Let \(L=\{x,y,n^{n-2}\}\) with \(x,y\) and \(n\) as in the statement. For the sake of contradiction, suppose that there exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=L\). This implies that there is a subgraph \(F^{\prime}\) of \(F\) such that \(\ell(F^{\prime})=\{n^{n-2}\}\). Clearly, the edges of \(F^{\prime}\) are \(\{i,n+i\}\) for \(i\in[0,n-1]\setminus\{u,w\}\) for some \(u,w\). In other words, \(V(F^{\prime})=V(K_{2n})\setminus\{u,n+u,w,n+w\}\) for some \(u,w\in[0,n-1]\). It is easy to see that it is not possible to match the vertices \(u,n+u,w,n+w\) in such a way to have two disjoint edges with distinct lengths \(x\) and \(y\) both different from \(n\). To conclude this section we propose a constructive asymptotic result. **Proposition 2.11**.: _Let \(t\) and \(n\) be integers with \(1\leq t\leq n\) and let \(L=\{1^{a_{1}},2^{a_{2}},\ldots,t^{a_{t}}\}\) with \(a_{i}\geq 0\) and \(|L|=n\). Set \(R=\sum\limits_{j\geq 1}a_{2j}\) and \(S=\sum\limits_{j\geq 2}\left\lfloor\frac{j-1}{2}\right\rfloor a_{j}\). There exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=L\) whenever \(R\) is even and \(a_{1}\geq S\)._ Proof.: For every \(x,y\geq 0\), define \(F_{x}=\{\{0,2x+1\}\}\cup\{\{2i+1,2i+2\}:i\in[0,x-1]\}\) and \(F_{x,y}=\{\{0,2x+2\},\{2x+1,2x+2y+3\}\}\cup\{\{2i+1,2i+2\}:i\in[0,x-1]\cup[x+1, x+y]\}\). 
Then, \(F_{x}\) is a perfect matching of \(K_{2(x+1)}\) such that \(\ell^{\prime}(F_{x})=\{1^{x},2x+1\}\), while \(F_{x,y}\) is a perfect matching of \(K_{2(x+y+2)}\) such that \(\ell^{\prime}(F_{x,y})=\{1^{x+y},2x+2,2y+2\}\). The statement follows from Remark 2.1. **Example 2.12**.: Let \(L=\{1^{21},2^{7},4,5^{2},10^{4}\}\). Hence, \(|L|=35\) and so we are working in \(K_{70}\). Note that all the conditions of previous proposition are satisfied, since \(R=12\) is even and and \(a_{1}\geq S=21\). A perfect matching of \(K_{70}\) with the required properties is \[F_{4,4}\cup(F_{4,4}+20)\cup(F_{2}+40)\cup(F_{2}+46)\cup(F_{0,1}+52)\cup(F_{0, 0}+58)\cup(F_{0,0}+62)\cup(F_{0,0}+66).\] ## 3. A complete solution for the two edge-lengths case In this section we present the necessary and sufficient conditions for the existence of a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{x^{n-a},y^{a}\}\) for any \(a\in[1,n-1]\) and any \(x,y\in[1,n]\) with \(x\neq y\). Note that the cases \(x=y\) and \(a=n\) have been already solved in Proposition 2.8. In other words, we prove Theorem 1.4 and we begin with the case \(\gcd(x,y,2n)=1\), where \(x\) is even. **Theorem 3.1**.: _Let \(n,a,x,y\) be integers such that \(x\) is even, \(x\neq y\), \(1\leq x,y\leq n\) and \(1\leq a<n\). Let \(d=\gcd(x,2n)\) and suppose that \(\gcd(d,y)=1\). There exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{x^{n-a},y^{a}\}\) if and only if one the following cases occurs:_ 1. \(d\) _divides_ \(n\) _and_ \(n-a\) _is even;_ 2. \(d\) _does not divide_ \(n\) _and_ \(n-a\) _is an even integer such that_ \(n-a\leq\frac{2n-d}{2}\)_._ Proof.: Since \(x\) is even, \(d\) is also even: by Proposition 2.4 we may assume that \(n-a\) is even. Furthermore, notice that the integers \(ix+jy\), with \(i\in[0,\frac{2n}{d}-1]\) and \(j\in[0,d-1]\) are pairwise distinct modulo \(2n\). In fact, suppose \(i_{1}x+j_{1}y\equiv i_{2}x+j_{2}y\pmod{2n}\) for some \(i_{1},i_{2}\in[0,\frac{2n}{d}-1]\) and some \(j_{1},j_{2}\in[0,d-1]\). Then, \(j_{1}y\equiv j_{2}y\pmod{d}\), whence \(j_{1}\equiv j_{2}\pmod{d}\) as \(\gcd(d,y)=1\). We obtain \(j_{1}=j_{2}\) and so \(i_{1}x\equiv i_{2}x\pmod{2n}\). It follows that \(i_{1}\equiv i_{2}\pmod{\frac{2n}{d}}\), whence \(i_{1}=i_{2}\). If \(d\) divides \(n\), set \(\bar{n}=\frac{n}{d}\); otherwise, set \(\bar{n}=\frac{2n-d}{2d}\) since \(\frac{2n}{d}\) is an odd integer. Let \(q,r\) be two integers such that \(n-a=2\bar{n}q+2r\) where \(0\leq r<\bar{n}\). Working modulo \(2n\), take the three matchings: \[\begin{array}{lll}A&=&\left\{\{2ix,(2i+1)x\},\{2ix+y,(2i+1)x+y\}:i\in[0,r-1] \right\}\cup\\ &&\left\{\{ix,ix+y\}:i\in[2r,2\bar{n}-1]\right\},\\ B&=&\left\{\{2ix,(2i+1)x\},\{2ix+y,(2i+1)x+y\}:i\in[0,\bar{n}-1]\right\},\\ C&=&\left\{\{ix,ix+y\}:i\in[0,2\bar{n}-1]\right\}.\end{array}\] Notice that \(V(A)=V(B)=V(C)=\{ix,ix+y:i\in[0,2\bar{n}-1]\}\). Furthermore, \(\ell(A)=\{x^{2r},y^{2(\bar{n}-r)}\}\), \(\ell(B)=\{x^{2\bar{n}}\}\) and \(\ell(C)=\{y^{2\bar{n}}\}\). If we are in the case (1), take \[F=A\cup\bigcup_{k=1}^{q}(B+2ky)\cup\bigcup_{k=q+1}^{\frac{d-2}{2}}(C+2ky).\] In this way we get \(2r+2q\bar{n}=n-a\) edges of length \(x\) and \(2(\bar{n}-r)+2\left(\frac{d}{2}-q-1\right)\bar{n}=a\) edges of length \(y\) (note that \(q<\frac{d}{2}\)). From the previous argument it follows that \(V(F)=[0,2n-1]\). We conclude that \(F\) is a perfect matching of \(K_{2n}\) such that \(\ell(F)=\{x^{n-a},y^{a}\}\). In case (2), Proposition 2.3 implies \(n-a\leq\frac{2n-d}{2}\). 
Define the additional matching \[D=\left\{\left\{\left(\frac{2n}{d}-1\right)x+2jy,\left(\frac{2n}{d}-1\right)x +(2j+1)y\right\}:j\in\left[0,\frac{d-2}{2}\right]\right\}.\] We have \(V(D)=\left\{\left(\frac{2n}{d}-1\right)x+jy:j\in[0,d-1]\right\}\) and \(\ell(D)=\left\{y^{\frac{d}{2}}\right\}\). If \(r=0\), take \[F=\bigcup_{k=0}^{q-1}(B+2ky)\cup\bigcup_{k=q}^{\frac{d-2}{2}}(C+2ky)\cup D.\] In this way we get \(2q\bar{n}=n-a\) edges of length \(x\) and \(2\left(\frac{d}{2}-q\right)\bar{n}+\frac{d}{2}=\frac{d}{2}(1+2\bar{n})-(n-a)=a\) edges of length \(y\) (note that \(q\leq\frac{d}{2}\leq a\)). If \(r>0\), take \[F=A\cup\bigcup_{k=1}^{q}(B+2ky)\cup\bigcup_{k=q+1}^{\frac{d-2}{2}}(C+2ky)\cup D.\] In this way we get \(2r+2q\bar{n}=n-a\) edges of length \(x\) and \(2(\bar{n}-r)+2\left(\frac{d}{2}-q-1\right)\bar{n}+\frac{d}{2}=\frac{d}{2}(1+2\bar {n})-(n-a)=a\) edges of length \(y\) (note that \(q+1\leq\frac{d}{2}\leq a\)). Since \(2\bar{n}-1=\frac{2n}{d}-2\), in both cases we have \(V(F)=[0,2n-1]\). We conclude that \(F\) is a perfect matching of \(K_{2n}\) such that \(\ell(F)=\{x^{n-a},y^{a}\}\). **Example 3.2**.: To obtain a perfect matching \(F\) of \(K_{40}\) such that \(\ell(F)=\left\{5^{6},12^{14}\right\}\), write \(x=12\), so \(d=\gcd(12,40)=4\) divides \(n=20\), hence we are in Case (1) of previous theorem. Following the notation of the proof, write \(n-a=14=2\cdot 5\cdot 1+2\cdot 2\) and take \(F=A\cup(B+10)\) where \[\begin{array}{rcl}A&=&\{\{24i,24i+12\},\{24i+5,24i+17\}:i\in[0,1]\}\cup\{\{1 2i,12i+5\}:i\in[4,9]\},\\ B&=&\{\{24i,24i+12\},\{24i+5,24i+17\}:i\in[0,4]\}.\end{array}\] Hence, \[\begin{array}{rcl}F&=&\{\{0,12\},\{24,36\},\{5,17\},\{29,1\},\{8,13\},\{20, 25\},\{32,37\},\{4,9\},\{16,21\},\\ &&\{28,33\}\}\cup\{\{10,22\},\{34,6\},\{18,30\},\{2,14\},\{26,38\},\{15,27\}, \{39,11\},\\ &&\{23,35\},\{7,19\},\{31,3\}\}.\end{array}\] Note that in this case \(C\) is empty. **Example 3.3**.: To obtain a perfect matching \(F\) of \(K_{42}\) such that \(\ell(F)=\left\{7^{13},12^{8}\right\}\), write \(x=12\). Since \(d=\gcd(12,42)=6\) does not divide \(n=21\), we are in Case (2) of Theorem 3.1. Following the notation of the proof, write \(n-a=8=2\cdot 3\cdot 1+2\cdot 1\) and take \(F=A\cup(B+14)\cup(C+28)\cup D\) where \[\begin{array}{rcl}A&=&\{\{24i,24i+12\},\{24i+7,24i+19\}:i=0\}\cup\{\{12i,12 i+7\}:i\in[2,5]\},\\ B&=&\{\{24i,24i+12\},\{24i+7,24i+19\}:i\in[0,2]\},\\ C&=&\{\{12i,12i+7\}:i\in[0,5]\},\\ D&=&\{\{14j+30,14j+37\}:j\in[0,2]\}.\end{array}\] That is, take \[\begin{array}{rcl}F&=&\{\{0,12\},\{7,19\},\{24,31\},\{36,1\},\{6,13\},\{18,2 5\}\}\cup\{\{14,26\},\{38,8\},\\ &&\{20,32\},\{21,33\},\{3,15\},\{27,39\}\}\cup\{\{28,35\},\{40,5\},\{10,17\},\{ 22,29\},\\ &&\{34,41\},\{4,11\}\}\cup\{\{30,37\},\{2,9\},\{16,23\}\}.\end{array}\] We now consider the case when \(x\) is odd. **Lemma 3.4**.: _Let \(n,x,a\) be three integers such that \(x\) is odd, \(1<x<n\) and \(\frac{n}{2}\leq a<n\). There exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{1^{a},x^{n-a}\}\)._ Proof.: Let \(b=n-a\), then \(a\geq b>0\) and \(b\leq\frac{n}{2}\). Let \(s\) and \(t\) be nonnegative integers such that \(b=sx+t\), where \(0\leq t<x\). Consider separately three cases. Case I: \(t=0\). Let \(A=\{\{2ix+j,(2i+1)x+j\}:i\in[0,s-1],\ j\in[0,x-1]\}\). Note that \(A\) is a matching containing \(b\) edges of length \(x\). Vertices which are not in \(V(A)\) make the interval \(H=[2sx,2n-1]\) of cardinality \(2a\), from which we can obviously construct a matching \(B\) with \(a\) edges of length \(1\). 
Thus we can take \(F=A\cup B\). Case II: \(t\) is odd. Define two vertex disjoint matchings: \[\begin{array}{rcl}A&=&\{\{2ix+j,(2i+1)x+j\}:i\in[0,s-1],\ j\in[0,x-1]\},\\ B&=&\{\{2sx+j,(2s+1)x+j\}:j\in[0,t-1]\}.\end{array}\] Notice that \(A\cup B\) contains \(b\) edges of length \(x\). Vertices which are not in \(V(A\cup B)\) make two disjoint intervals: \(H=[2sx+t,2sx+x-1]\) and \(I=[2sx+x+t,2n-1]\). Both \(H\) and \(I\) are nonempty and each contains an even number of consecutive integers, so it is immediate to construct a matching \(C\) containing \(\frac{|H|+|I|}{2}=a\) edges of length \(1\). Then, \(F=A\cup B\cup C\) is a perfect matching of \(K_{2n}\) such that \(\ell(F)=\{1^{a},x^{n-a}\}\). Case III: \(t\) is even and \(t\geq 2\). Let \(z\) be a positive integer such that \(n=x+z\), and write \(k=\left\lfloor\frac{x}{2z}\right\rfloor\). Consider two subcases. Subcase III.A: \(s>0\) or \(k=0\). Similarly to the above cases, define the following three vertex disjoint matchings: \[A = \{\{2ix+j,(2i+1)x+j\}:i\in[0,s-1],\ j\in[0,x-1]\},\] \[B = \{\{2sx,(2s+1)x\}\},\] \[C = \{\{(2s+1)x+j,(2s+2)x+j\}:j\in[1,t-1]\}.\] Note that \(A\cup B\cup C\) is a set of \(b\) edges, each of length \(x\), since \(2z+x\equiv-x\pmod{2n}\). Vertices which are not in \(V(A\cup B\cup C)\) make three disjoint intervals: \(H=[2sx+1,2sx+x-1]\), \(I=[2sx+x+t,2sx+2x]\) and \(J=[2sx+2x+t,2n-1]\). It is trivial that both \(H\) and \(I\) are nonempty. Now we show that \(2sx+2x+t<2n-1\). If \(s>0\), then \(b=sx+t\geq x+t\). On the other hand \(b\leq\frac{n}{2}\), whence \(x\leq sx\leq\frac{n}{2}-t\). So, \[2sx+2x+t=(sx+t)+sx+2x=b+sx+2x\leq\frac{n}{2}+\left(\frac{n}{2}-t\right)+2 \left(\frac{n}{2}-t\right)<2n-1.\] Suppose now \(s=k=0\). Since \(s=0\), we have \(b=t<x\); since \(k=0\), we get \(x<2(n-x)\) and hence \(x<\frac{2n}{3}\). So, \[2sx+2x+t=2x+t\leq 2x+x-1<2n-1.\] Hence, we have proved that \(J\) is nonempty too. Each of \(H\), \(I\) and \(J\) has even cardinality, so we can get a matching \(D\) containing \(\frac{|H|+|I|+|J|}{2}=a\) edges of length \(1\). Then it is sufficient to take \(F=A\cup B\cup C\cup D\). Subcase III.B: \(s=0\) and \(k>0\). Thus \(x>2z\) and, since \(n=x+z\), then \(2n<3x\). Moreover, \(b=t\) is even and \(b\leq\frac{n}{2}<\frac{3x}{4}\). Let \(b=2pz+r\), where \(0\leq r<2z\) and \(p\geq 0\). Then \(r\) is even. Define the following vertex disjoint matchings: \[A = \{\{2iz,2(i+1)z+x\}:i\in[0,p-1]\},\] \[B = \{\{2iz+j,2iz+x+j\}:i\in[0,p-1],\ j\in[1,2z-1]\},\] \[C = \{\{2pz+j,2pz+x+j\}:j\in[1,r-1]\},\] and \[D=\{\{2pz,2(p+1)z+x\}\}\] if \(r>0\), \(D=\emptyset\) otherwise. The set \(A\cup B\cup C\cup D\) contains \(b\) edges, each of length \(x\). Notice that vertices which are not in \(V(A\cup B\cup C\cup D)\) make three disjoint intervals: \(H=[b,x]\), \(I=[x+b-h+1,2(p+1)z+x-h]\) and \(J=[2(p+1)z+x+1,2n-1]\), where \(h=0\) if \(r=0\) and \(h=1\) otherwise. Each of these intervals contains an even number of consecutive integers, \(H\) and \(I\) are nonempty while \(J\) is empty only when \(a=b=x-1=2\). Thus, a matching \(E\) containing \(\frac{|H|+|I|+|J|}{2}=a\) edges of length \(1\) can be easily constructed. In this case, take \(F=A\cup B\cup C\cup D\cup E\) **Example 3.5**.: Consider the list \(L=\{1^{5},7^{4}\}\). Following the notation of the proof of the previous lemma we get: \(x=7\), \(n=9\), \(a=5\), \(b=4\), hence \(s=0\) and \(t=4\). This means that to construct \(F\) Case III is applied. Then \(z=2\) and \(k=1\). 
By Subcase III.B, \(p=1\), \(r=0\) and \(A=\{\{0,11\}\}\), \(B=\{\{1,8\},\{2,9\},\{3,10\}\}\), \(C=D=\emptyset\), \(H=[4,7]\), \(I=[12,15]\), \(J=[16,17]\). Thus \(F=A\cup B\cup E\), where \(E=\{\{4,5\},\{6,7\},\{12,13\},\{14,15\},\{16,17\}\}\). **Proposition 3.6**.: _Let \(n,x,a\) be three integers such that \(1<x<n\), \(\gcd(x,2n)=1\) and \(1\leq a<n\). There exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{1^{a},x^{n-a}\}\)._ Proof.: By Lemma 3.4, it remains to consider the case when \(1\leq a<n-a\). Since \(\gcd(x,2n)=1\), there exists an integer \(y\) such that \(1<y<2n\) and \(xy\equiv 1\pmod{2n}\). By Lemma 3.4, \(K_{2n}\) contains a perfect matching \(F^{\prime}\) such that \(\ell(F^{\prime})=\{1^{n-a},y^{a}\}\) if \(y\leq n\), or \(\ell(F^{\prime})=\{1^{n-a},(2n-y)^{a}\}\) if \(n<y<2n\). In both cases, we apply the relabeling \(i\mapsto xi\) to all vertices of \(F^{\prime}\) to get a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{1^{a},x^{n-a}\}\). **Lemma 3.7**.: _Let \(n\) and \(a\) be two integers such that \(n\) is odd, \(a\) is even and \(2\leq a<n\). There exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{1^{a},n^{n-a}\}\)._ Proof.: Define the matching \(A=\{\{j,n+j\}:j\in[0,n-a-1]\}.\) Clearly, \(A\) contains \(n-a\) edges of length \(n\). Vertices which are not in \(V(A)\) make two disjoint intervals: \(H=[n-a,n-1]\) and \(I=[2n-a,2n-1]\). Both \(H\) and \(I\) contain \(a\) consecutive integers, so it is immediate to construct a matching \(B\) consisting of \(a\) edges of length \(1\). Then, \(F=A\cup B\) is a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{1^{a},n^{n-a}\}\). The following result extends [4, Theorem 2.2] by replacing the hypothesis \(\gcd(x_{i},2n)=1\) with the weaker hypothesis \(\gcd(x_{i},2)=1\). **Lemma 3.8**.: _Let \(L\) be a list of length \(n\) such that each its element \(x_{i}\) is an odd positive integer and does not exceed \(n\). If \(K_{2n}\) contains a perfect matching \(F\) such that \(\ell(F)=L\) then there exist \(\varepsilon_{1},\varepsilon_{2},\ldots,\varepsilon_{n}\) such that \(\varepsilon_{i}\in\{-1,1\}\) and \(\sum\limits_{i=1}^{n}\varepsilon_{i}x_{i}\equiv n\pmod{2n}\)._ Proof.: Each edge of \(F\) has odd length \(x_{i}\) so one its end-vertex, \(u_{i}\), has odd label and the other, \(w_{i}\), even. Clearly, \(u_{i}-w_{i}\equiv\varepsilon_{i}x_{i}\pmod{2n}\) for some \(\varepsilon_{i}=\pm 1\). Set \(s=\sum\limits_{i=1}^{n}\varepsilon_{i}x_{i}\). Since the sum of all odd labels of vertices in \(K_{2n}\) is equal to \(n^{2}\) and the sum of all vertices with even labels is \(n^{2}-n\), we obtain \(s\equiv n\pmod{2n}\). **Theorem 3.9**.: _Let \(n,x,y,a\) be four integers such that \(x\) and \(y\) are odd, \(x\neq y\), \(1\leq x,y\leq n\) and \(1\leq a<n\). Let \(d_{x}=\gcd(x,2n)\) and \(d_{y}=\gcd(y,2n)\), and suppose that \(\gcd(d_{x},d_{y})=1\). There exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{x^{n-a},y^{a}\}\) if and only if both conditions are satisfied:_ 1. \(a\) _is even or_ \(a\geq d_{x}\)_;_ 2. \(n-a\) _is even or_ \(n-a\geq d_{y}\)_._ Proof.: To prove the necessity, suppose to the contrary that (1) or (2) is not satisfied. First consider the case when \(a\) is odd and \(1\leq a<d_{x}\). 
By Lemma 3.8, if \(K_{2n}\) contains a perfect matching \(F\) such that \(\ell(F)=\{x^{n-a},y^{a}\}\), then there exist \(\varepsilon_{1},\varepsilon_{2},\ldots,\varepsilon_{n}\) such that \(\varepsilon_{i}\in\{-1,1\}\) and \(y\sum\limits_{i=1}^{a}\varepsilon_{i}+x\sum\limits_{i=a+1}^{n}\varepsilon_{i} \equiv n\pmod{2n}\). Since \(n\) is divisible by \(d_{x}\) and \(x\sum\limits_{i=a+1}^{n}\varepsilon_{i}\equiv 0\pmod{d_{x}}\) the integer \(sy\), where \(s=\sum\limits_{i=1}^{a}\varepsilon_{i}\), has to be also divisible by \(d_{x}\). Notice that \(s\) is odd and \(-a\leq s\leq a\). Moreover, \(\gcd(y,d_{x})=1\), which immediately leads to a contradiction. The proof in the case when (2) is not satisfied follows in exactly the same way. To prove the sufficiency, w.l.o.g. we may assume that \(a\leq n-a\) (otherwise it is enough to replace \(y\) with \(x\)). Set \(d=d_{x}\). First, consider the case \(d=1\). Then, there exists an integer \(p\) such that \(1\leq p<2n\) and \(xp\equiv 1\pmod{2n}\), and there exists an integer \(q\) such that \(1<q<2n\) and \(yp\equiv q\pmod{2n}\). Set \(r=\min(q,2n-q)\); so, \(r\) is odd and such that \(1<r\leq n\). Since \(\frac{n}{2}\leq n-a<n\), we can apply Lemma 3.4 (when \(r<n\)) or Lemma 3.7 (when \(r=n\), whence \(d_{y}=n\)). Hence, there exists a perfect matching \(\tilde{F}\) of \(K_{2n}\) such that \(\ell(\tilde{F})=\{1^{n-a},r^{a}\}\): to get a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{x^{n-a},y^{a}\}\), apply the relabeling \(i\mapsto xi\) to all vertices of \(\tilde{F}\). It remains to consider the case \(d>1\). Then \(d\) is odd, and \(n=dm\), \(x=dz\) for some \(1\leq z<m\), where \(\gcd(z,2m)=1\). Let \(\mu\) be an integer such that \(1\leq\mu<2m\) and \(z\mu\equiv 1\pmod{2m}\). Let \(\xi\) be a positive integer such that \(1\leq\xi<2m\) and \(\xi\equiv y\pmod{2m}\), and set \(\bar{y}=\min(\xi,2m-\xi)\). Let \(\vartheta\) be an integer such that \(0\leq\vartheta<2m\) and \(\vartheta\equiv\bar{y}\mu\pmod{2m}\). Let \(F^{\prime}\) be a perfect matching of \(K_{2m}\) such that \(\ell(F^{\prime})=\{z^{m}\}\), whose existence follows from Proposition 2.8. We need to construct another perfect matching, \(F^{\prime\prime}\), of \(K_{2m}\) such that \(\ell(F^{\prime\prime})=\{\bar{y},z^{m-1}\}\). So, working modulo \(2m\), take \[F^{\prime\prime}=\{\{0,\bar{y}\}\}\cup\left\{\{(2i-1)z,2iz\}:i\in\left[1, \frac{\vartheta-1}{2}\right]\right\}\cup\left\{\{2iz,(2i+1)z\}:i\in\left[ \frac{\vartheta+1}{2},m-1\right]\right\}.\] Let \(\bar{F}=F^{\prime}\) if \(a\) is even and \(\bar{F}=F^{\prime\prime}\) otherwise. For each edge \(\{u,w\}\) of length \(z\) in \(\bar{F}\) and for each \(k\) such that \(0\leq k\leq\frac{d-1}{2}\), we construct a matching \(A_{2k}\) of cardinality \(d\) in \(K_{2n}\) with \(2k\) edges of length \(y\) and \(d-2k\) edges of length \(x\). So, working modulo \(2n\), take \[\begin{array}{lll}A_{2k}&=&\{\{du+2iy,du+(2i+1)y\},\{dw+2iy,dw+(2i+1)y\}:i \in[0,k-1]\}\cup\\ &&\{\{du+iy,dw+iy\}:i\in[2k,d-1]\}.\end{array}\] Notice that \(V(A_{2k})=\{du+iy,dw+iy:i\in[0,d-1]\}\). Similarly, the edge \(\{0,\bar{y}\}\) of length \(\bar{y}\) in \(\bar{F}\) corresponds to a matching \(B\) of cardinality \(d\) in \(K_{2n}\) such that its edges have length \(y\): \[B=\{\{2iy,(2i+1)y\}:\ i\in[0,d-1]\}\] (also here, we work modulo \(2n\)). Notice that \(V(B)=\{du+iy:\ i\in[0,2d-1]\}\). Let \(a=(d-1)b+c\) for nonnegative integers \(b\) and \(c\) such that \(c<d-1\). 
Since \(a\leq\frac{n}{2}\), it follows that \((d-1)b\leq\frac{dm}{2}\), whence \(b\leq\frac{3m}{4}\). To construct a perfect matching \(F\) of \(K_{2n}\), we proceed separately depending on the parity of \(a\). Case I: \(a\) is even. Then \(c\) is even. For \(b\) edges in \(\bar{F}\) we take the corresponding matching \(A_{d-1}\), and, if \(c>0\), another edge is used to get \(A_{c}\). For each remaining edge of \(\bar{F}\) we take a matching \(A_{0}\). So, we obtain \(b(d-1)+c=a\) edges of length \(y\) and \(b+(d-c)+(m-b-1)d=n-a\) edges of length \(x\). Case II: \(a\) is odd. Then \(a\geq d\), \(b\geq 1\) and \(c\) is odd. A single edge of length \(\bar{y}\) in \(\bar{F}\) (the only one if \(z\neq\bar{y}\)) is used to construct \(B\). For \(b-1\) edges of length \(z\) in \(\bar{F}\) corresponding \(A_{d-1}\) are taken, and possibly \(A_{c-1}\) (if \(c>1\)) for one more edge of length \(z\). Each remaining edge of length \(z\) in \(\bar{F}\) is used to construct \(A_{0}\). If \(c=1\) we obtain \(d+(b-1)(d-1)=a\) edges of length \(y\) and \((b-1)+d(m-b)=n-a\) edges of length \(x\). If \(c>1\) we obtain \(d+(b-1)(d-1)+(c-1)=a\) edges of length \(y\) and \((b-1)+(d-c+1)+d(m-b-1)=n-a\) edges of length \(x\). In both cases, the edges give a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{y^{a},x^{n-a}\}\) **Example 3.10**.: Consider the list \(L=\{15^{25},35^{17}\}\). Following the notation of the proof of the previous theorem we get: \(x=15\), \(y=35\), \(n=42\) and \(a=17\), whence \(d=3\), \(d_{y}=7\), \(m=14\), \(z=5\), \(\bar{y}=7\), \(\mu=17\) and \(\vartheta=7\). First, we need to construct a perfect matching \(\bar{F}\) of \(K_{28}\) such that \(\ell(\bar{F})=\{7,5^{13}\}\): \[\bar{F} = \{\{0,7\}\}\cup\{\{5,10\},\{15,20\},\{25,2\}\}\cup\{\{12,17\},\{22,27\},\{4,9\},\] \[\{14,19\},\{24,1\},\{6,11\},\{16,21\},\{26,3\},\{8,13\},\{18,23\}\}.\] We now follow Case II: \(b=8\) and \(c=1\). For the edge \(\{0,7\}\) of \(\bar{F}\), we construct the matching \(B=\{\{0,35\},\{70,21\},\{56,7\}\}\). Then for the seven edges \(\{5,10\},\{15,20\}\), \(\{25,2\}\), \(\{12,17\}\), \(\{22,27\}\), \(\{4,9\}\), \(\{14,19\}\) of \(\bar{F}\) we make the matchings \[\begin{array}{rclrcl}A_{2}^{1}&=&\{\{15,50\},\{30,65\},\{1,16\}\},&A_{2}^{2}& =&\{\{45,80\},\{60,11\},\{31,46\}\},\\ A_{2}^{3}&=&\{\{75,26\},\{6,41\},\{61,76\}\},&A_{2}^{4}&=&\{\{36,71\},\{51,2\}, \{22,37\}\},\\ A_{2}^{5}&=&\{\{66,17\},\{81,32\},\{52,67\}\},&A_{2}^{6}&=&\{\{12,47\},\{27,62 \},\{82,13\}\},\\ A_{2}^{7}&=&\{\{42,77\},\{57,8\},\{28,43\}\},\end{array}\] respectively. For each remaining edge of \(\bar{F}\) we apply substitution \(A_{0}\) to obtain: \[\begin{array}{rclrcl}A_{0}^{1}&=&\{\{72,3\},\{23,38\},\{58,73\}\},&A_{0}^{2} &=&\{\{18,33\},\{53,68\},\{4,19\}\}\\ A_{0}^{3}&=&\{\{48,63\},\{83,14\},\{34,49\}\},&A_{0}^{4}&=&\{\{78,9\},\{29,44\}, \{64,79\}\},\\ A_{0}^{5}&=&\{\{24,39\},\{59,74\},\{10,25\}\},&A_{0}^{6}&=&\{\{54,69\},\{5,20 \},\{40,55\}\},\end{array}\] respectively. So, \(F=B\cup A_{2}^{1}\cup A_{2}^{2}\cup A_{2}^{3}\cup A_{2}^{4}\cup A_{2}^{5}\cup A _{2}^{6}\cup A_{2}^{7}\cup A_{0}^{1}\cup A_{0}^{2}\cup A_{0}^{3}\cup A_{0}^{4} \cup A_{0}^{5}\cup A_{0}^{6}\). We now prove our main result. Proof of Theorem 1.4.: If \(d\) does not divide \(n\), then there is no perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{x^{n-a},y^{a}\}\) by Proposition 2.3. If \(d=1\), the result follows from Theorems 3.1 and 3.9. If \(d=n\), then \(x=y=n\), which is excluded by hypothesis. 
So, we may assume that \(d\) is a divisor of \(n\) such that \(1<d<n\). Let \(\bar{x}\), \(\bar{y}\) and \(\bar{n}\) be integers such that \(x=d\bar{x}\), \(y=d\bar{y}\) and \(n=d\bar{n}\). Also, let \(e_{x}=\frac{d_{x}}{d}\) and \(e_{y}=\frac{d_{y}}{d}\). Then, \(e_{x}=\gcd(\bar{x},2\bar{n})\) and \(e_{y}=\gcd(\bar{y},2\bar{n})\). Note that \(d_{x}\) divides \(n\) if and only if \(e_{x}\) divides \(\bar{n}\). Since \(\gcd(\bar{x},\bar{y},2\bar{n})=1\), we have that \(\bar{x}\) and \(\bar{y}\) cannot be both even. So, we can assume that \(\bar{y}\) is odd, which implies that \(d_{y}\) divides \(n\). It follows, by Proposition 2.8, that we can construct a perfect matching \(\bar{F}\) of \(K_{2\bar{n}}\) such that \(\ell(\bar{F})=\{\bar{y}^{\bar{n}}\}\). By Propositions 2.5 and 2.6, there exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{x^{n-a},y^{a}\}\) if and only if \[\begin{array}{l}\mbox{there exist $d$ (not necessarily distinct) perfect matchings $F_{0},F_{1},\ldots,F_{d-1}$ of $K_{2\bar{n}}$}\\ \mbox{such that $\ell(F_{i})=L_{i}$, where $L_{i}=\{\bar{x}^{\bar{n}-a_{i}},\bar{y}^{a_{i}}\}$ for suitable nonnegative integers}\\ \mbox{$a_{0},a_{1},\ldots,a_{d-1}$ such that $\sum\limits_{i=0}^{d-1}a_{i}=a$}.\end{array} \tag{3.1}\] Suppose that \(\bar{x}\) is even. Then, a necessary condition for the existence of the matchings \(F_{i}\) satisfying (3.1) is that \(\bar{n}-a_{i}\) is even for all \(i\). This implies that \(n-a\) must be even. So, assume that this happens. If \(d_{x}\) divides \(n\), then \(\bar{n}\) is even and the integers \(a_{i}\) can be chosen in the interval \([0,\bar{n}]\). Take two integers \(q\) and \(r\) such that \(n-a=2qd+2r\) with \(0\leq r<d\), and define \(a_{i}=\bar{n}-(2q+2)\) if \(0\leq i<r\) and \(a_{i}=\bar{n}-2q\) if \(r\leq i<d\). Notice that \(0\leq 2q<\bar{n}\). Since we can apply Proposition 2.8 and Theorem 3.1, condition (3.1) holds, giving case (a) of (1). If \(d_{x}\) does not divide \(n\), then the integers \(a_{i}\) must be chosen in \([1,\bar{n}]\). If \(2a<d_{x}\), then one of the integers \(a_{i}\) must be less than \(\frac{e_{x}}{2}\). By Theorem 3.1, there is no perfect matching \(F_{i}\) of \(K_{2\bar{n}}\) such that \(\ell(F_{i})=\{\bar{x}^{\bar{n}-a_{i}},\bar{y}^{a_{i}}\}\). Hence, we can assume \(2a\geq d_{x}\), and take \(q,r,a_{i}\) as before. Note that \(2a_{i}\geq e_{x}\) for every \(i\), so the existence of the matchings \(F_{i}\) satisfying (3.1) follows from Theorem 3.1, giving case (b) of (1). Now, assume that \(\bar{x}\) is odd. Notice that both \(e_{x}\) and \(e_{y}\) are odd. Arguing as before, the integers \(a_{i}\) can be chosen in \([0,\bar{n}]\). If \(a\) is an odd integer such that \(a<e_{x}\), then one of the integers \(a_{i}\) must be odd. By Theorem 3.9, there is no perfect matching \(F_{i}\) of \(K_{2\bar{n}}\) such that \(\ell(F_{i})=\{\bar{x}^{\bar{n}-a_{i}},\bar{y}^{a_{i}}\}\). So, there is no perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{x^{n-a},y^{a}\}\). The same conclusion holds, in the same way, if \(n-a\) is an odd integer such that \(n-a<e_{y}\). So, assume condition (3) holds. Let \(q\) and \(r\) be two nonnegative integers such that \(a=q\bar{n}+r\), where \(r<\bar{n}\). Note that \(q<d\). Moreover, if \(r\) is odd and \(r<e_{x}\), let \(s=e_{x}-r\), otherwise set \(s=0\). Similarly, if \(\bar{n}-r\) is odd and \(\bar{n}-r<e_{y}\), let \(t=e_{y}-(\bar{n}-r)\), otherwise set \(t=0\). Notice that both \(s\) and \(t\) are even integers and at least one of them is equal to zero. 
Moreover, \(s<e_{x}\leq\frac{\bar{n}}{2}\) and \(t<e_{y}\leq\frac{\bar{n}}{2}\). Thus, \(\bar{n}-s>e_{x}\) and \(\bar{n}-t>e_{y}\). If \(s>0\), then clearly \(q\geq 1\) and \(\bar{n}-(r+s)\geq e_{y}\); similarly, if \(t>0\), then \(q\leq d-2\) and \(r-t\geq e_{x}\). According to (3.1), to obtain a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\{x^{n-a},y^{a}\}\) it suffices to construct the perfect matchings \(F_{0},F_{1},\ldots,F_{d-1}\) of \(K_{2\bar{n}}\) such that \(\ell(F_{i})=L_{i}\), distinguishing three cases: Case I: \(s=t=0\). Take \(L_{i}=\{\bar{y}^{\bar{n}}\}\) for \(i\in[0,q-1]\), \(L_{q}=\{\bar{x}^{\bar{n}-r},\bar{y}^{r}\}\), and \(L_{i}=\{\bar{x}^{\bar{n}}\}\) for \(i\in[q+1,d-1]\). Case II: \(s>0\). Take \(L_{i}=\{\bar{y}^{\bar{n}}\}\) for \(i\in[0,q-2]\), \(L_{q-1}=\{\bar{x}^{s},\bar{y}^{\bar{n}-s}\}\), \(L_{q}=\{\bar{x}^{\bar{n}-(r+s)},\bar{y}^{r+s}\}\), and \(L_{i}=\{\bar{x}^{\bar{n}}\}\) for \(i\in[q+1,d-1]\). Case III: \(t>0\). Take \(L_{i}=\{\bar{y}^{\bar{n}}\}\) for \(i\in[0,q-1]\), \(L_{q}=\{\bar{x}^{\bar{n}-(r-t)},\bar{y}^{r-t}\}\), \(L_{q+1}=\{\bar{x}^{\bar{n}-t},\bar{y}^{t}\}\), and \(L_{i}=\{\bar{x}^{\bar{n}}\}\) for \(i\in[q+2,d-1]\). In each case, the existence of the corresponding perfect matchings \(F_{i}\) of \(K_{2\bar{n}}\) follows from Proposition 2.8 and Theorem 3.9. **Example 3.11**.: Let \(L=\{10^{24},15^{6}\}\). Then, \(x=10\), \(y=15\), \(n=30\), \(a=6\), \(d=5\), \(d_{x}=10\) and \(d_{y}=15\). Hence, \(\bar{x}=2\), \(\bar{y}=3\) and \(\bar{n}=6\); so, \(\bar{x}\) is even. To construct a perfect matching \(F\) of \(K_{60}\) such that \(\ell(F)=L\) we have to construct five perfect matchings \(F_{0},\ldots,F_{4}\) of \(K_{12}\) such that \(\ell(F_{i})=\{2^{6-a_{i}},3^{a_{i}}\}\) and \(a_{0}+\ldots+a_{4}=6\). We note that \(n-a=24\) is even, so we write \(24=2q\cdot 5+2r\), where \(q=2\) and \(r=2\). In this case \(d_{x}\) divides \(n\): so, we take \(a_{0}=a_{1}=0\) and \(a_{2}=a_{3}=a_{4}=2\). Hence, we construct the perfect matchings \(F_{i}\) such that \(\ell(F_{0})=\ell(F_{1})=\{2^{6}\}\) and \(\ell(F_{2})=\ell(F_{3})=\ell(F_{4})=\{2^{4},3^{2}\}\). **Example 3.12**.: Let \(L=\{10^{20},15^{5}\}\). Then, \(x=10\), \(y=15\), \(n=25\), \(a=5\), \(d=5\), \(d_{x}=10\) and \(d_{y}=5\). Hence, \(\bar{x}=2\), \(\bar{y}=3\) and \(\bar{n}=5\); so, \(\bar{x}\) is even. We note that \(n-a=20\) is even, so we write \(20=2q\cdot 5+2r\), where \(q=2\) and \(r=0\). In this case, \(d_{x}\) does not divide \(n\) and \(2a\geq d_{x}\): so, we construct the perfect matchings \(F_{i}\) of \(K_{10}\) such that \(\ell(F_{0})=\ldots=\ell(F_{4})=\{2^{4},3^{1}\}\). Note that this can be done, since \(2\cdot 1\geq\gcd(2,10)=2\). **Example 3.13**.: Let \(L=\{75^{5},9^{85}\}\). Then, \(x=75\), \(y=9\), \(n=90\), \(a=85\), \(d=3\), \(d_{x}=15\) and \(d_{y}=9\). Hence, \(\bar{x}=25\), \(\bar{y}=3\), \(\bar{n}=30\), \(e_{x}=5\) and \(e_{y}=3\). Write \(85=q\cdot 30+r\), where \(q=2\) and \(r=25\). Since \(s=t=0\), we construct three perfect matchings \(F_{0},F_{1},F_{2}\) of \(K_{60}\) such that \(\ell(F_{0})=\ell(F_{1})=\{3^{30}\}\) and \(\ell(F_{2})=\{25^{5},3^{25}\}\). **Example 3.14**.: Let \(L=\{70^{42},42^{48}\}\). Then, \(x=70\), \(y=42\), \(n=90\), \(a=48\), \(d=2\), \(d_{x}=10\) and \(d_{y}=6\). Hence, \(\bar{x}=35\), \(\bar{y}=21\), \(\bar{n}=45\), \(e_{x}=5\) and \(e_{y}=3\). Write \(48=q\cdot 45+r\), where \(q=1\) and \(r=3\). 
Since \(s>0\), we construct two perfect matchings \(F_{0},F_{1}\) of \(K_{90}\) such that \(\ell(F_{0})=\{35^{2},21^{43}\}\) and \(\ell(F_{1})=\{35^{40},21^{5}\}\). **Example 3.15**.: Let \(L=\{45^{16},25^{59}\}\). Then, \(x=45\), \(y=25\), \(n=75\), \(a=59\), \(d=5\), \(d_{x}=15\) and \(d_{y}=25\). Hence, \(\bar{x}=9\), \(\bar{y}=5\), \(\bar{n}=15\), \(e_{x}=3\) and \(e_{y}=5\). Write \(59=q\cdot 15+r\), where \(q=3\) and \(r=14\). Since \(t>0\), we construct five perfect matchings \(F_{0},\ldots,F_{4}\) of \(K_{30}\) such that \(\ell(F_{0})=\ell(F_{1})=\ell(F_{2})=\{5^{15}\}\), \(\ell(F_{3})=\{9^{5},5^{10}\}\) and \(\ell(F_{4})=\{9^{11},5^{4}\}\). ## 4. Lists with all the elements with the same multiplicity In this section we focus on lists where each element appears the same number of times, say \(t\), with underlying set \(\left\{1,2,\ldots,\frac{n}{t}\right\}\), \(\left\{2,4,\ldots,\frac{2n}{t}\right\}\) or \(\left\{1,3,\ldots,\frac{2n}{t}-1\right\}\). We start by showing a connection between the problem investigated in this paper and Skolem sequences; for details on the topic see [10]. We recall that a _Skolem sequence_ of order \(n\) is a sequence \(S(n)=(s_{0},s_{1},\ldots,s_{2n-1})\) of \(2n\) integers satisfying the conditions: 1. for every \(k\in[1,n]\) there exist two elements \(s_{i},s_{j}\in S(n)\) such that \(s_{i}=s_{j}=k\), 2. if \(s_{i}=s_{j}=k\) with \(i<j\), then \(j-i=k\). Skolem sequences are also written as collections of ordered pairs \(\left\{(a_{i},b_{i}):1\leq i\leq n,b_{i}-a_{i}=i\right\}\) with \(\cup_{i=1}^{n}\{a_{i},b_{i}\}=[0,2n-1]\), which can be seen as the edges of a perfect matching \(F\) of \(K_{2n}\). It is easy to see that the list of edge-lengths of \(F\) is nothing but the set \([1,n]\). For instance, the Skolem sequence \(S(5)=(1,1,3,4,5,3,2,4,2,5)\) of order \(5\) can be seen as the perfect matching \(F=\left\{\{0,1\},\{6,8\},\{2,5\},\{3,7\},\{4,9\}\right\}\) of \(K_{10}\) such that \(\ell(F)=\ell^{\prime}(F)=[1,5]\). **Proposition 4.1**.: _Let \(L=\{1,2,\ldots,n\}\). There exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=L\) if and only if \(n\equiv 0,1\pmod{4}\). Moreover, \(\ell(F)=\ell^{\prime}(F)\) holds._ Proof.: For every positive integer \(n\equiv 0,1\pmod{4}\), there exists a Skolem sequence of order \(n\), see [10], hence the result follows by the previous considerations. If \(n\equiv 2,3\pmod{4}\), \(L\) contains an odd number of even numbers, hence the non-existence of \(F\) follows by Proposition 2.4. **Corollary 4.2**.: _Given \(t\leq n\), let \(L=\{i^{a_{i}}:i\in[1,t]\}\) be such that \(|L|=n\), \(a_{i}\geq a_{i+1}\geq 1\) and \(a_{4k+2}=a_{4k+3}=a_{4k+4}\) for any \(k\). Then there exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\ell^{\prime}(F)=L\)._ Proof.: Note that \(a_{4k+2}=a_{4k+3}=a_{4k+4}\) implies \(t\equiv 0,1\pmod{4}\). The result follows by Proposition 4.1 and Remark 2.1. **Example 4.3**.: Let \(L=\{1^{5},2^{4},3^{4},4^{4},5^{4},6^{2},7^{2},8^{2},9\}\), so \(|L|=28\). Let \(S_{i}\) be a Skolem sequence of order \(i\), with \(i\equiv 0,1\pmod{4}\), and consider the corresponding perfect matching \(F_{i}\). Then \[F=F_{9}\cup(F_{8}+18)\cup(F_{5}+34)\cup(F_{5}+44)\cup(F_{1}+54)\] is a perfect matching of \(K_{56}\) such that \(\ell(F)=\ell^{\prime}(F)=L\). To get a generalization of Proposition 4.1 we consider the case in which all the elements in the list are odd (even, respectively) integers. **Proposition 4.4**.: _Let \(t\geq 2\)._ 
There exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\left\{1^{t},3^{t},\ldots,(\frac{2n}{t}-1)^{t}\right\}\) if and only if \(n\equiv 0\pmod{t}\)._ Proof.: It is trivial that \(n\equiv 0\pmod{t}\) is a necessary condition. On the other hand, one can easily check that \[F=\left\{\left\{\frac{2n}{t}i+j,\frac{2n}{t}(i+1)-j-1\right\}:i\in[0,t-1],j\in \left[0,\frac{n}{t}-1\right]\right\}\] satisfies the required conditions. Note that also in this case \(\ell(F)=\ell^{\prime}(F)\) **Example 4.5**.: A perfect matching \(F\) of \(K_{32}\) such that \(\ell(F)=\ell^{\prime}(F)=\{1^{4},3^{4},5^{4},7^{4}\}\) is \[F = \{\{0,7\},\{1,6\},\{2,5\},\{3,4\},\{8,15\},\{9,14\},\{10,13\},\{11,1 2\},\{16,23\},\{17,22\},\] \[\{18,21\},\{19,20\},\{24,31\},\{25,30\},\{26,29\},\{27,28\}\}.\] **Proposition 4.6**.: _Let \(t\geq 2\). There exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\left\{2^{t},4^{t},\ldots,\left(\frac{2n}{t}\right)^{t}\right\}\) if and only if \(n\equiv 0\pmod{s}\), where \(s=t\) if \(t\) is even and \(s=4t\) otherwise._ Proof.: To prove necessity, first of all we need to realize that \(n\equiv 0\pmod{t}\) is a trivial condition. Moreover, since \(\frac{2n}{t}\leq n\) then \(t\geq 2\). By Proposition 2.4, \(n\) is even. Suppose to the contrary that \(t\) is odd and \(n\equiv 2t\pmod{4t}\). The existence of \(F\) would imply, by Proposition 2.6, the existence of two perfect matchings \(F_{0},F_{1}\) of \(K_{n}\), one of which has the list of edge-lengths containing an odd number of even integers. Hence, by Proposition 2.4, we get a contradiction. Let \(t\) be even. Then it is easily seen that \[F_{n,t} = \left\{\left\{\frac{(2i+1)n}{t}-j,\frac{(2i+1)n}{t}+j\right\}:\; i\in[0,t-1],j\in\left[1,\frac{n}{t}-1\right]\right\}\cup\] \[\left\{\left\{\frac{4in}{t},\frac{(4i+2)n}{t}\right\},\left\{ \frac{(4i+1)n}{t},\frac{(4i+3)n}{t}\right\}:\;i\in\left[0,\frac{t}{2}-1\right]\right\}\] is a perfect matching of \(K_{2n}\) with the required properties. Notice that \(\ell(e)=\ell^{\prime}(e)\) for each \(e\in F_{n,t}\). Let \(t\) be odd. Since \(t\geq 3\) we have \(n\geq 12\). Firstly, we construct a perfect matching \(F^{\prime}\) of \(K_{24}\) such that \(\ell(F^{\prime})=\ell^{\prime}(F^{\prime})=\{2^{3},4^{3},6^{3},8^{3}\}\). Namely, \(F^{\prime}=\{\{0,8\},\{1,7\},\{2,6\},\{3,5\},\)\(\{4,12\},\{9,15\},\{10,16\},\{11,13\},\{14,22\},\{17,21\},\{18,20\},\{19,23\}\}\). Now, set \(F^{\prime\prime}=F^{\prime}\) if \(t=3\), \(F^{\prime\prime}=F^{\prime}\cup(F_{4(t-3),t-3}+24)\) if \(t\geq 5\). In both cases \(F^{\prime\prime}\) is a matching of \(K_{2n}\) such that \(V(F^{\prime\prime})=[0,8t-1]\) and \(\ell(F^{\prime\prime})=\{2^{t},4^{t},6^{t},8^{t}\}\). If \(n=4t\) we have done. Otherwise, on the set \(V(F^{\prime\prime})\) apply the relabeling \(i\mapsto\frac{ni}{4t}\). In this way \(F^{\prime\prime}\) is converted into a matching \(F^{2}\) of \(K_{2n}\) such that \(\ell(F^{2})=\left\{(\frac{n}{2t})^{t},(\frac{n}{t})^{t},(\frac{3n}{2t})^{t},(\frac{2n}{t})^{t}\right\}\) and \(V(F^{2})=U\), where \(U=\left\{0,\frac{n}{4t},\frac{n}{2t},\ldots,2n-\frac{n}{4t}\right\}\). Note that \(|U|=8t\). 
Set \(A=\left\{\frac{n}{4t},\frac{n}{2t},\frac{3n}{4t},\frac{n}{t}\right\}\) and \[F^{1}=\left\{\left\{\frac{(2i+1)n}{t}-j,\frac{(2i+1)n}{t}+j\right\}:\;i\in[0,t -1],\,j\in\left[1,\frac{n}{t}\right]\setminus A\right\}.\] It is easy to see that \(F^{1}\) is a matching of \(K_{2n}\) such that \(|F^{1}|=n-4t\), \(V(F^{1})=V(K_{2n})\setminus U\) and \(\ell(F^{1})=\{(2j)^{t}:j\in\left[1,\frac{n}{t}\right]\setminus A\}\). Thus \(F^{1}\cup F^{2}\) is a perfect matching of \(K_{2n}\) with the required properties. Moreover, \(\ell(e)=\ell^{\prime}(e)\) for each \(e\in F^{1}\cup F^{2}\). **Example 4.7**.: A perfect matching \(F\) of \(K_{24}\) such that \(\ell(F)=\ell^{\prime}(F)=\{2^{4},4^{4},6^{4}\}\) is \[F=F_{12,4} = \{\{2,4\},\{1,5\},\{8,10\},\{7,11\},\{14,16\},\{13,17\},\{20,22\},\{19,23\}\}\cup\] \[\{\{0,6\},\{3,9\},\{12,18\},\{15,21\}\}.\] **Example 4.8**.: Here, we consider the list \(\{2^{7},4^{7},6^{7},8^{7},10^{7},12^{7},14^{7},16^{7}\}\), so we are working in \(K_{112}\). Firstly, we construct \(F^{\prime\prime}=F^{\prime}\cup(F_{16,4}+24)\): \[F^{\prime\prime} = \{\{0,8\},\{1,7\},\{2,6\},\{3,5\},\{4,12\},\{9,15\},\{10,16\},\{1 1,13\},\{14,22\},\] \[\{17,21\},\{18,20\},\{19,23\},\{27,29\},\{26,30\},\{25,31\},\{35,37 \},\{34,38\},\] \[\{33,39\},\{43,45\},\{42,46\},\{41,47\},\{51,53\},\{50,54\},\{49,5 5\},\{24,32\},\] \[\{40,48\},\{28,36\},\{44,52\}\}.\] Note that \(\ell(F^{\prime\prime})=\{2^{7},4^{7},6^{7},8^{7}\}\). Now, we construct \(F^{2}\) by applying the relabeling \(i\mapsto 2i\): \[F^{2} = \{\{0,16\},\{2,14\},\{4,12\},\{6,10\},\{8,24\},\{18,30\},\{20,32\}, \{22,26\},\] \[\{28,44\},\{34,42\},\{36,40\},\{38,46\},\{54,58\},\{52,60\},\{50,62 \},\{70,74\},\] \[\{68,76\},\{66,78\},\{86,90\},\{84,92\},\{82,94\},\{102,106\},\{10 0,108\},\] \[\{98,110\},\{48,64\},\{80,96\},\{56,72\},\{88,104\}\}.\] Clearly, \(\ell(F^{2})=\{4^{7},8^{7},12^{7},16^{7}\}\). Finally we construct \(F^{1}\): \[F^{1} = \{\{7,9\},\{5,11\},\{3,13\},\{1,15\},\{23,25\},\{21,27\},\{19,29 \},\{17,31\},\] \[\{39,41\},\{37,43\},\{35,45\},\{33,47\},\{55,57\},\{53,59\},\{51,6 1\},\{49,63\},\] \[\{71,73\},\{69,75\},\{67,77\},\{65,79\},\{87,89\},\{85,91\},\{83,9 3\},\{81,95\},\] \[\{103,105\},\{101,107\},\{99,109\},\{97,111\}\}.\] It results \(\ell(F^{1})=\{2^{7},6^{7},10^{7},14^{7}\}\). Take \(F=F^{1}\cup F^{2}\). **Corollary 4.9**.: _Let \(n\), \(t\) be two integers such that \(t\) divides \(n\). There exists a perfect matching \(F\) of \(K_{2n}\) such that \(\ell(F)=\left\{1^{t},2^{t},\ldots,\left(\frac{n}{t}\right)^{t}\right\}\) if and only if either \(t\) is even or \(\frac{n}{t}\equiv 0,1\pmod{4}\)._ Proof.: If \(t\) is odd and \(\frac{n}{t}\equiv 2,3\pmod{4}\) the non-existence follows by Proposition 2.4. If \(\frac{n}{t}\equiv 0,1\pmod{4}\), we apply Remark 2.1 and Proposition 4.1. If \(\frac{n}{t}\equiv 2,3\pmod{4}\) and \(t\) is even we apply Remark 2.1 and Propositions 4.4 and 4.6. **Example 4.10**.: Let \(F\) and \(F^{\prime}\) be the matchings constructed in Examples 4.5 and 4.7, respectively. 
Then, \(F^{\prime\prime}=F\cup(F^{\prime}+32)\), namely \[F^{\prime\prime} = \{\{0,7\},\{1,6\},\{2,5\},\{3,4\},\{8,15\},\{9,14\},\{10,13\},\{ 11,12\},\{16,23\},\{17,22\},\] \[\{18,21\},\{19,20\},\{24,31\},\{25,30\},\{26,29\},\{27,28\}\}\cup\] \[\{\{34,36\},\{33,37\},\{40,42\},\{39,43\},\{46,48\},\{45,49\},\{ 52,54\},\{51,55\},\] \[\{32,38\},\{35,41\},\{44,50\},\{47,53\}\}\] is a perfect matching of \(K_{56}\) such that \(\ell(F^{\prime\prime})=\ell^{\prime}(F^{\prime\prime})=\{1^{4},2^{4},3^{4},4^{ 4},5^{4},6^{4},7^{4}\}\). We conclude this section considering some similar lists. **Proposition 4.11**.: _Let \(n\geq 3\) be an odd integer. There exists a perfect matching \(F\) of \(K_{2n}\) such that:_ 1. \(\ell(F)=\{1^{2},3^{2},\ldots,(n-2)^{2},n\}\)_;_ 2. \(\ell(F)=\{2^{2},4^{2},\ldots,(n-1)^{2},n\}\)_._ Proof.: In the case (1) take \(F=\{\{i,2n-1-i\}:i\in[0,n-1]\}\), while in the case (2) take \[F=\left\{\{i,n-1-i\}:i\in\left[0,\frac{n-3}{2}\right]\cup\left[n,\frac{3n-3}{ 2}\right]\right\}\cup\left\{\left\{\frac{n-1}{2},\frac{3n-1}{2}\right\} \right\}.\] One can check that, in both cases, \(F\) satisfies the required properties. ## 5. Conclusions and open problems The conditions presented in this paper lead us to believe that it is not possible to find a "nice" statement for the seating couple problem in the even case, as done for the odd case, namely to find a condition such as (1.1) of Conjecture 2. In fact, as we have seen, the necessary conditions given in Section 2 are rarely sufficient. Hence, it would be interesting to classify the lists for which it happens. This is the case when the underlying set has length \(1\) (Proposition 2.8) or consists of the consecutive integers \(1,2,\ldots,x\), each appearing in the list with the same multiplicity (Corollary 4.9). It is also the case described in Theorem 3.1. On the other hand, one could start considering the case where \(n\) is an odd prime, as done in [5]. With this assumption, when the list does not contain \(n\), the necessary conditions of Theorem 1.4 simply become those of Propositions 2.3 and 2.4. So, also in view of some computational results, we propose the following conjecture which is clearly related to Theorem 1.3, where the elements of the list are all coprime with \(2n\). Here, we assume the stronger assumption that \(n=p\) is an odd prime, but the list is allowed to contain also even integers. **Conjecture 3**.: _Let \(p\) be an odd prime and let \(L\) be a list of \(p\) positive integers less than \(p\). There exists a perfect matching \(F\) of \(K_{2p}\) such that \(\ell(F)=L\) if and only if the number of even integers in \(L\) is even._ ## Acknowledgements The second and the third author are partially supported by INdAM-GNSAGA.
2309.11115
The Navier-Stokes equation and a fully developed turbulence
In fairly general conditions we give explicit (smooth) solutions for the potential flow. We show that, rigorously speaking, the equations of the fluid mechanics have not rotational solutions. However, within the usual approximations of an incompressible fluid and an isentropic flow, the remaining Navier-Stokes equation has approximate vorticial (rotational) solutions, generated by viscosity. In general, the vortices are unstable, and a discrete distribution of vorticial solutions is not in mechanical equilibrium; it forms an unstable vorticial liquid. On the other hand, these solutions may exhibit turbulent, fluctuating instabilities for large variations of the velocity over short distances. We represent a fully developed turbulence as a homogeneous, isotropic and highly-fluctuating distribution of singular centres of turbulence. A regular mean flow can be included. In these circumstances the Navier-Stokes equation exhibits three time scales. The equations of the mean flow can be disentangled from the equations of the fluctuating part, which is reduced to a vanishing inertial term. This latter equation is not satisfied after averaging out the temporal fluctuations. However, for a homogeneous and isotropic distribution of non-singular turbulence centres the equation for the inertial term is satisfied trivially, i.e. both the average fluctuating velocity and the average fluctuating inertial term are zero. If the velocity is singular at the turbulence centres, we are left with a quasi-ideal classical gas of singularities, or a solution of singularities in quasi thermal equilibrium in the background fluid. This is an example of an emergent dynamics. We give three examples of vorticial liquids.
Marian Apostol
2023-09-20T07:45:30Z
http://arxiv.org/abs/2309.11115v1
# The Navier-Stokes equation and a fully developed turbulence ###### Abstract In fairly general conditions we give explicit (smooth) solutions for the potential flow. We show that, rigorously speaking, the equations of the fluid mechanics have not rotational solutions. However, within the usual approximations of an incompressible fluid and an isentropic flow, the remaining Navier-Stokes equation has approximate vorticial (rotational) solutions, generated by viscosity. In general, the vortices are unstable, and a discrete distribution of vorticial solutions is not in mechanical equilibrium; it forms an unstable vorticial liquid. On the other hand, these solutions may exhibit turbulent, fluctuating instabilities for large variations of the velocity over short distances. We represent a fully developed turbulence as a homogeneous, isotropic and highly-fluctuating distribution of singular centres of turbulence. A regular mean flow can be included. In these circumstances the Navier-Stokes equation exhibits three time scales. The equations of the mean flow can be disentangled from the equations of the fluctuating part, which is reduced to a vanishing inertial term. This latter equation is not satisfied after averaging out the temporal fluctuations. However, for a homogeneous and isotropic distribution of non-singular turbulence centres the equation for the inertial term is satisfied trivially, _i.e._ both the average fluctuating velocity and the average fluctuating inertial term are zero. If the velocity is singular at the turbulence centres, we are left with a quasi-ideal classical gas of singularities, or a solution of singularities in quasi thermal equilibrium in the background fluid. This is an example of an emergent dynamics. We give three examples of vorticial liquids. Key words: potential flow; vorticity; instabilities; turbulence; gas of singularities; singular vortices ## 1 Introduction In fairly general conditions we give explicit (smooth) solutions for the potential flow. As it is well known, the fluids may develop turbulence. In its extreme manifestation the turbulent flow displays very irregular, disordered velocities, fluctuating in time at each point in space. This is known as a fully developed turbulence. By using such fluctuating velocities, besides a steady mean velocity, the Navier-Stokes equation becomes an infinite hierarchy of equations for velocity mean correlation functions, known as Reynolds's equations,[1] which need closure assumptions. According to the experimental observations, it was realized that such irregular movements of the fluid exhibit distributions of swirls (eddies, vortices), of various magnitude and vorticities; it is likely that the large eddies transfer energy to the small eddies, which dissipate it.[2]-[4] Statistical concepts like correlations, homogeneity and isotropy have been introduced in the theory of turbulence,[5, 6] and dimensional analysis and similarity arguments allowed the derivation of the energy spectrum of the turbulent eddies.[7]-[9] Meanwhile, the relation of this statistical turbulence with the Navier-Stokes equation remained unclear.[10]-[13] Could the Navier-Stokes equation describe a turbulent motion? To what extent and in what sense? Has the Navier-Stokes equation smooth and stable solutions? What is the appropriate representation of a turbulent field of velocities?[14] The dynamics of the vorticity has enjoyed much interest (see Refs. [15, 16] and References therein). The results depend on model assumptions. 
Dynamical-system concepts and statistical models have been invoked in studies of turbulence, with chaotic behaviour, intermittency and coherent structures (see, for instance, Refs. [17]-[21]). In particular, by analogy with the quantum turbulence, "Turbulent flows may be regarded as an intricate collection of mutually-interacting vortices", and "Vortex filaments may thus be seen as the fundamental structure of turbulence,... ".[17] The difficulties exhibited by the Navier-Stokes equation are related to the viscosity, which governs the vorticity, and the inertial term, which is quadratic in velocity. We show in this paper that the viscosity term in the Navier-Stokes equation may produce vorticity, provided the fluid is incompressible and the flow is isentropic. Although such an approximate treatment may look reasonable, we can see that, rigorously speaking, the fluids cannot exhibit vorticity. Moreover, we give arguments that the vortices are unstable. Further, we show in this paper that large variations of the velocity over short distances lead to highly fluctuating, swirling instabilities, controlled by viscosity. This is characteristic for the phenomenon of a fully developed turbulence. In this case, the inertial term acquires a major role in describing the flow. We represent a fully developed turbulence as a superposition of fluctuating velocities, associated to a discrete set of turbulence centres. A mean flow may be included. In general, the Navier-Stokes equation, averaged over fluctuations, is not satisfied. On the other hand, a homogeneous and isotropic distribution of (non-singular) turbulence centres leads to vanishing averages of velocity and inertial term, such that the Navier-Stokes equation is satisfied trivially. If the turbulence centres are singular, we are left with a gas of singularities (or a solution of singularities in the background fluid), which is in quasi thermal equilibrium. The corresponding Navier-Stokes equation for the fluid of singularities is reduced to Newton's equation of motion, with a small friction. We illustrate the above descripton with three examples of vorticial liquids (filamentary liquid, coulombian and dipolar liquid). ## 2 Potential flow. Incompressible fluid Let us consider a potential flow of an incompressible fluid. The velocity \(\mathbf{v}=grad\Phi\) is given by the gradient of a potential \(\Phi\), which satisfies the Laplace equation \[\Delta\Phi=0 \tag{1}\] (incompressiblity condition \(div\mathbf{v}=0\)). The viscosity term \(\sim\Delta\mathbf{v}\) is zero, such that we are left with Euler's equation \[\frac{\partial\mathbf{v}}{\partial t}+(\mathbf{v}grad)\mathbf{v}=-\frac{1}{\rho}gradp\enspace, \tag{2}\] where \(\rho\) is the density and \(p\) denotes the pressure. By using the well-known identity \[(\mathbf{v}grad)\mathbf{v}=-\mathbf{v}\times curl \mathbf{v}+grad(v^{2}/2)\;\;, \tag{3}\] equation (2) becomes \[\frac{\partial\mathbf{v}}{\partial t}+grad(v^{2}/2+p/\rho)=0\;\;, \tag{4}\] where \(curl\mathbf{v}=0\). As it is well known, by using equation (1), this equation leads to \[\frac{\partial\Phi}{\partial t}+\frac{1}{2}(grad\Phi)^{2}+\frac{p}{\rho}=0\;. \tag{5}\] In this equation \(p\) should be viewed as the variation of the pressure with respect to equilibrium. We assume that \(p\) does not depend on \(\Phi\) and the time. In equations (1) and (5) the variables may be separated. 
Let \(g(\mathbf{r})\) be a solution of equation (1) (satisfying the boundary conditions); the potential can be written as \(\Phi=f(t)g(\mathbf{r})\), where the function \(f(t)\) satisfies equation (5), \[\frac{df}{dt}+\frac{1}{2g}(gradg)^{2}+\frac{p}{\rho g}=0\;. \tag{6}\] The acceptable solution of this equation (for \(f(0)=0\)) is \[f(t)=\frac{\sqrt{2\mid p\mid/\rho}}{\mid gradg\mid}\tanh\frac{\sqrt{\mid p \mid/2\rho}\mid gradg\mid}{g}t \tag{7}\] for \(p<0\). It may happen that the boundary conditions for the equation \(\Delta\Phi=0\) depend on time, thus providing the time derivative \(\dot{\Phi}\); in that case equation (5) gives the pressure. Potential flow. Compressible fluid Let us write down the equations of the fluid mechanics \[\begin{array}{c}\frac{\partial\rho}{\partial t}+\rho div\mathbf{v}+ \mathbf{v}grad\rho=0\,\\ \\ \rho\frac{\partial\mathbf{v}}{\partial t}+\rho(\mathbf{v}grad) \mathbf{v}=-gradp-\rho grad\varphi+\\ \\ +\eta\Delta\mathbf{v}+\left(\frac{1}{3}\eta+\zeta\right)grad\,div \mathbf{v}\,\\ \\ \rho T\left(\frac{\partial s}{\partial t}+\mathbf{v}grads\right)= \kappa\Delta T+\sigma^{{}^{\prime}}_{ij}\partial_{j}v_{i}\ \,\end{array} \tag{8}\] where \(p\) is the internal pressure, \(\varphi\) is an external potential, \(\eta\) and \(\zeta\) are the viscosity coefficients, \(T\) is the temperature, \(s\) is the entropy per unit masss, \(\kappa\) is the thermoconductivity and \[\sigma^{{}^{\prime}}_{ij}=\eta\left(\partial_{i}v_{j}+\partial_{j}v_{i}-\frac{ 2}{3}\delta_{ij}div\mathbf{v}\right)+\zeta\delta_{ij}div\mathbf{v}. \tag{9}\] is the viscosity tensor. In the Navier-Stokes equation (the second equation (8)) the forces which determine the velocity are \(gradp\) and \(\rho grad\varphi\), where \(p=p(\rho,T)\) is a function of density and temperature. For all the usual flows the relative variations \(\delta\rho/\rho_{0}\) of the density, \(\delta T/T_{0}\) of the temperature, \(\delta p/p_{0}\) of the pressure, \(\delta s/s_{0}\) of the entropy as well as the variation \(\delta\varphi/\varphi_{0}\) of the external potential are small, in comparison with their equilibrium values, labelled by the suffix \(0\), when the fluid is at rest. Consequently, we may view the velocity \(v\) as a first-order quantity, and linearize the above equations as \[\begin{array}{c}\frac{\partial\rho}{\partial t}+\rho_{0}div\mathbf{ v}=0\,\\ \\ \rho_{0}\frac{\partial\mathbf{v}}{\partial t}=-gradp-\rho_{0}grad \varphi+\\ \\ +\eta\Delta\mathbf{v}+\left(\frac{1}{3}\eta+\zeta\right)grad\,div \mathbf{v}\,\\ \\ \rho_{0}T_{0}\frac{\partial s}{\partial t}=\kappa\Delta T\.\end{array} \tag{10}\] We note that within this approximation there is no heat source, and the first-order equation of energy conservation is reduced to an identity. The density and entropy variations can be written as \[\begin{array}{c}\delta\rho=\frac{\rho_{0}}{K}\delta p-\beta\rho_{0}\delta T\,\\ \\ \delta s=-\frac{\beta}{\rho_{0}}\delta p+\frac{c_{p}}{T_{0}}\delta T\ \,\end{array} \tag{11}\] where \(K\) is the isothermal modulus of compressibility (\(1/K=-\frac{1}{V}(\partial V/\partial p)_{T}\), \(V=1/\rho\)), \(\beta=\frac{1}{V}(\partial V/\partial T)_{p}\) is the dilatation coefficient and \(c_{p}\) is the specific heat per unit mass at constant pressure, all at equilibrium. In deriving equations (11) the Gibbs free energy \(d\Phi=Vdp-sdT\) is used. Part of the temperature variation in equation (11) is compensated by pressure variation, as in an adiabatic process; we denote this contribution by \(\delta T_{1}\). 
The remaining part, denoted by \(\delta T\), corresponds to the conducted heat. Therefore, we write \[\begin{array}{c}\delta\rho=\frac{\rho_{0}}{K}\delta p-\beta\rho_{0}\delta T_ {1}-\beta\rho_{0}\delta T\,\\ \\ \delta s=-\frac{\beta}{\rho_{0}}\delta p+\frac{c_{p}}{T_{0}}\delta T_{1}+ \frac{c_{p}}{T_{0}}\delta T=\frac{c_{p}}{T_{0}}\delta T\ \,\end{array} \tag{12}\] whence \[\frac{\beta}{\rho_{0}}\delta p=\frac{c_{p}}{T_{0}}\delta T_{1} \tag{13}\] and \[\delta\rho=\frac{\rho_{0}}{K}\left(1-\frac{\beta^{2}T_{0}K}{\rho_{0}c_{p}} \right)\delta p-\beta\rho_{0}\delta T. \tag{14}\] In this equation we use the thermodynamic relation \[\frac{\beta^{2}T_{0}K}{\rho_{0}}=c_{p}-c_{v}\ \, \tag{15}\] where \(c_{v}\) is the specific heat per unit mass at constant volume.[22] Therefore, equation (14) becomes \[\delta\rho=\frac{\rho_{0}c_{v}}{Kc_{p}}\delta p-\beta\rho_{0}\delta T\ \, \tag{16}\] or \[\delta p=\frac{Kc_{p}}{\rho_{0}c_{v}}\delta\rho+\frac{\beta Kc_{p}}{c_{v}} \delta T. \tag{17}\] Now we use two other thermodynamic relations \[\frac{c_{p}}{c_{v}}K=K_{ad}\,\ \beta K=\alpha\ \, \tag{18}\] where \(K_{ad}\) is the adiabatic modulus of compressibility and \(\alpha=(\partial p/\partial T)_{V}\) is the thermal pressure coefficient.[22] Finally, we get \[\delta p=\frac{K_{ad}}{\rho_{0}}\delta\rho+\frac{c_{p}}{c_{v}}\alpha\delta T\ \, \tag{19}\] which is used in the Navier-Stokes equation. Equations (10) become \[\begin{array}{c}\frac{\partial\rho}{\partial t}+\rho_{0}div\mathbf{ v}=0\,\\ \\ \rho_{0}\frac{\partial\mathbf{v}}{\partial t}=-\frac{K_{ad}}{\rho_{0} }grad\rho-\rho_{0}grad\varphi-\frac{c_{p}}{c_{v}}\alpha gradT+\\ \\ +\eta\Delta\mathbf{v}+\left(\frac{1}{3}\eta+\zeta\right)grad\,div \mathbf{v}\,\\ \\ \rho_{0}c_{p}\frac{\partial T}{\partial t}=\kappa\Delta T\ \,\end{array} \tag{20}\] where the second equation (12) is used. We can see that the temperature equation (20) is independent; it describes the transport of an external temperature, which may provide a source for the velocity in the Navier-Stokes equation. We may leave aside this external temperature. Let us seek a potential-flow solution of the above equations, where the velocity is derived from a potential \(\Phi\), by \[\mathbf{v}=grad\Phi. \tag{21}\] We notice that \(curl\mathbf{v}=0\) and \(curl\,curl\mathbf{v}=0\), _i.e._\(\Delta\mathbf{v}=grad\,div\mathbf{v}\). Therefore, the Navier-Stokes equation can be written as \[\begin{array}{c}\rho_{0}\frac{\partial\mathbf{v}}{\partial t}=- \frac{K_{ad}}{\rho_{0}}grad\rho-\rho_{0}grad\varphi+\\ \\ +\left(\frac{4}{3}\eta+\zeta\right)grad\,div\mathbf{v}\.\end{array} \tag{22}\] By using equation (21), we obtain \[\begin{array}{c}\frac{\partial\rho}{\partial t}+\rho_{0}\Delta\Phi=0\,\\ \\ \frac{\partial\Phi}{\partial t}+\frac{K_{ad}}{\rho_{0}^{2}}\rho+\varphi-\frac{ 1}{\rho_{0}}\left(\frac{4}{3}\eta+\zeta\right)\Delta\Phi=0\ \,,\end{array} \tag{23}\] up to a function of time, where \(\rho\) and \(\varphi\) should be viewed as their corresponding variations. By an additional time differentiation we obtain \[\frac{\partial^{2}\Phi}{\partial t^{2}}-\frac{K_{ad}}{\rho_{0}}\Delta\Phi+\dot{ \varphi}-\frac{1}{\rho_{0}}\left(\frac{4}{3}\eta+\zeta\right)\Delta\dot{\Phi}=0. \tag{24}\] This equation provides the potential \(\Phi\), therefore the velocity \(v\) through equation (21) and the density \(\rho\) through the first equation (23). Equation (24) is the wave equation with friction (the term \(\sim\Delta\dot{\Phi}\)) and sources (\(-\dot{\varphi}\)). 
The ratio \(K_{ad}/\rho_{0}\) is the square of the sound velocity \(c=\sqrt{K_{ad}/\rho_{0}}\). The elementary solutions \(e^{-i\omega t}e^{i{\mathbf{k}}{\mathbf{r}}}\) of this (homogeneous) equation are damped plane waves \[e^{\mp ickt}e^{i{\mathbf{k}}{\mathbf{r}}}e^{-\sigma k^{2}t}\, \tag{25}\] for \(\sigma k\ll c\), where \(\sigma=(4\eta/3+\zeta)/2\rho_{0}\). The relaxation time is much longer than the wave period. A wave propagating along the \(x\)-direction is proportional to \(\sim e^{-\gamma x}\), with the attenuation coefficient \(\gamma=\sigma k^{2}/c=\sigma\omega^{2}/c^{3}\). This is the well-known absorption coefficient for sound (without the \(\kappa\)-contribution). ## 4 Vorticity Euler's equation for an ideal fluid can be written as \[\frac{d{\mathbf{v}}}{dt}=-gradw\ \, \tag{26}\] where \(w\) is the enthalpy (\(dw=\frac{1}{\rho}dp\)); the pressure \(p\) is a function of density \(\rho\). By taking the \(curl\), we get \[curl\frac{d{\mathbf{v}}}{dt}=0. \tag{27}\] On the other hand, \[\frac{d{\mathbf{v}}}{dt}=\frac{\partial{\mathbf{v}}}{\partial t}+({\mathbf{v}}grad){\mathbf{v}}=\frac{\partial{\mathbf{v}}}{\partial t}+\left(\frac{\partial{\mathbf{v}}}{\partial t}\right)_{f}\, \tag{28}\] where the suffix \(f\) indicates that the derivative is taken along the flow. Equation (27) becomes \[\frac{\partial}{\partial t}curl\mathbf{v}+\left(\frac{\partial}{\partial t}curl\mathbf{v}\right)_{f}=0\ ; \tag{29}\] since the two variations of the \(curl\mathbf{v}\) are independent, we get \[curl\mathbf{v}=0\ \, \tag{30}\] _i.e._ the vorticity \(curl\mathbf{v}\) is conserved along the flow. Therefore, we cannot create, or destroy, vorticity \(curl\mathbf{v}\) in the flow of an ideal fluid. Equation (30) is valid in the absence of special external forces, which do not derive from a gradient. This is Helmholtz's circulation law. As it is well known, an ideal fluid supports only an irrotational (potential) flow, where the velocity is derived from a scalar potential (\(\mathbf{v}=grad\Phi\)). By knowing the equation of state of the fluid, the Euler equation and the continuity equation are fully determined. For a real, viscid, fluid the Navier-Stokes equation is \[\rho\frac{d\mathbf{v}}{dt}=-gradp+\eta\Delta\mathbf{v}+\left(\frac{1}{3}\eta+\zeta\right)grad\,div\mathbf{v}\ \, \tag{31}\] where \(\eta\), \(\zeta\) are the viscosity coefficients. By taking the \(curl\), we get \[\begin{array}{c}curl\left(\rho\frac{d\mathbf{v}}{dt}\right)=grad\rho\times\frac{d\mathbf{v}}{dt}+\rho curl\frac{d\mathbf{v}}{dt}=\\ \\ =\eta\Delta curl\mathbf{v}\ ;\end{array} \tag{32}\] we can see that the viscosity \(\eta\) can generate vorticity (\(curl\mathbf{v}\neq 0\)). In general, the velocity \(v\) rotates about the vorticity \(curl\mathbf{v}\). This is a vortex. Equation (32) is in conflict with the continuity equation \[\frac{d\rho}{dt}+\rho div\mathbf{v}=0. \tag{33}\] Indeed, if \(curl\mathbf{v}\neq 0\), the velocity should be derived from a \(curl\) (not from a \(grad!\)), _i.e._ we should have \(\mathbf{v}=curl\mathbf{A}\), where \(A\) is a vector potential. Consequently, the fluid should be incompressible (\(div\mathbf{v}=div\,curl\mathbf{A}=0\)). The density should be constant, both in time and space (along the flow). This indicates that in a compressible fluid we cannot have vortices. Usually, the variations of the density are small, such that they may be neglected for the present purpose.
Therefore, we may limit to an incompressible fluid (\(div\mathbf{v}=0\)), for which equation (32) becomes \[curl\frac{d\mathbf{v}}{dt}=\nu\Delta curl\mathbf{v}\ \, \tag{34}\] or \[\frac{\partial}{\partial t}curl\mathbf{v}-curl\left(\mathbf{v}\times curl\mathbf{v}\right)=\nu\Delta curl\mathbf{v}\ \, \tag{35}\] where \(\nu=\eta/\rho\) is the kinematical viscosity and we have used the identity \((\mathbf{v}grad)\mathbf{v}=-\mathbf{v}\times curl\mathbf{v}+grad(v^{2}/2)\). This is the equation of vorticity; it can also be written as \[\frac{\partial}{\partial t}curl\mathbf{v}+curl\left[(\mathbf{v}grad)\mathbf{v}\right]=\nu\Delta curl\mathbf{v}. \tag{36}\] This equation gives the velocity. The pressure is obtained from the Navier-Stokes equation. If we write the Navier-Stokes equation as \[\frac{\partial}{\partial t}curl\mathbf{A}+(\mathbf{v}grad)\mathbf{v}=-\frac{1}{\rho}gradp+\nu\Delta curl\mathbf{A}\ \, \tag{37}\] we get \[div\left[(\mathbf{v}grad)\mathbf{v}+\frac{1}{\rho}gradp\right]=0 \tag{38}\] and \[div\left(\frac{\partial\mathbf{v}}{\partial t}-\nu\Delta\mathbf{v}\right)=0\ \, \tag{39}\] which is an identity (\(div\mathbf{v}=0\), \(\mathbf{v}=curl\mathbf{A}\)). In some cases a particular solution of these equations is provided by \[\begin{array}{c}(\mathbf{v}grad)\mathbf{v}=-grad(p/\rho)\,\\ \frac{\partial\mathbf{v}}{\partial t}=\nu\Delta\mathbf{v}\.\end{array} \tag{40}\] We can see that the Navier-Stokes equation is split into (the derivatives of) a diffusion (heat) equation and an equilibrium equation; the first equation (40) indicates an equilibrium between the pressure force \(-grad(p/\rho)\) and Euler's force \((\mathbf{v}grad)\mathbf{v}\). The diffusion equation (40) holds also for the vorticity, because the above equations are valid for a non-vanishing vorticity. It is easy to see that these equations generalize the equations for the Couette flow. However, we have also the heat-transfer equation. For an incompressible fluid it reads \[\rho c_{p}\frac{dT}{dt}=\kappa\Delta T+\frac{1}{2}\eta\left(\partial_{i}v_{j}+\partial_{j}v_{i}\right)^{2}\ \, \tag{41}\] or \[\frac{dT}{dt}=\chi\Delta T+\frac{1}{2}\frac{\nu}{c_{p}}\left(\partial_{i}v_{j}+\partial_{j}v_{i}\right)^{2}\ \, \tag{42}\] where \(c_{p}\) is the specific heat per unit mass at constant pressure, \(\kappa\) is the thermoconductivity, and \(\chi=\kappa/\rho c_{p}\) is the thermometric conductivity. For an incompressible fluid this equation can be transformed into an equation for the derivatives of the pressure, \[\frac{dp}{dt}=\chi\Delta p+\frac{1}{2}\frac{\alpha\nu}{c_{p}}\left(\partial_{i}v_{j}+\partial_{j}v_{i}\right)^{2}\ \, \tag{43}\] where \(\alpha=(\partial p/\partial T)_{v}\) is the thermal pressure coefficient. In general, equation (43) is not compatible with the Navier-Stokes equation. Therefore, rigorously speaking, we cannot have vorticity in an incompressible fluid either. Usually, the coefficient \(\nu/c_{p}\) is very small (of the order \(10^{-24}-10^{-25}g\cdot cm^{2}/s\)), such that, for small gradients of velocity, we have a low rate of entropy production (though, a factor of the order \(10^{23}K/erg\) should be taken into account). Under these conditions, we may assume that the flow is isentropic and the heat-transfer equation may be neglected. In order to get an idea of how large the variations of the density and the temperature can be, we may estimate a change \(\delta p\) in pressure from \(\delta p\simeq\rho v^{2}\).
A velocity \(v=100km/h\), which is fairly large, produces a change \(\delta p\simeq 10^{4}dyn/cm^{2}\) in air (\(\rho=10^{-3}g/cm^{3}\)), whose normal pressure is \(10^{6}dyn/cm^{2}\); therefore, \(\delta p/p\simeq 10^{-2}\). Such a velocity (\(\simeq 3\times 10^{3}cm/s\)) is close to the mean thermal velocity \(\simeq 10^{4}cm/s\) (for normal air), and close to the sound velocity in normal air \(c\simeq 3.5\times 10^{4}cm/s\). For this velocity we still expect local thermal equilibrium. The change in density is given by \(\delta p=K(\delta\rho/\rho)\), where \(K=-V(\partial p/\partial V)\) is the (say, isothermal) modulus of compression. For air \(K\simeq 10^{6}dyn/cm^{2}\), for water \(K\simeq 10^{10}dyn/cm^{2}\), such that we get \(\delta\rho/\rho\simeq 10^{-2},\,10^{-6}\). The change in temperature is obtained from \(\delta p=\alpha T(\delta T/T)\), where \(\alpha=(\partial p/\partial T)_{V}\) is the thermal pressure coefficient (at constant volume \(V\)). For water \(\alpha\simeq 10^{22}/cm^{3}\), for gases it is much higher; for normal temperature \(T=300K\) we get \(\delta T/T\simeq 10^{-4}\), or much lower. Consequently, we may expect an almost ideal, incompressible flow. We note that, although we neglect the viscosity in the heat-transfer equation, we keep it in the Navier-Stokes equation. Therefore, within these approximations (incompressibility and constant entropy), we are left with equation (36) and the Navier-Stokes equation for a vorticial flow. In general, an external pressure which satisfies the Navier-Stokes equation (or equation (38)) is very special, such that, if the vortices exist, they might be, in fact, unstable. They develop an Euler's force, which is difficult to compensate by an external force. We note that, under the conditions stated above, the viscosity may generate (unstable) vortices. In the next section we show that the viscosity may generate another type of instability. For small variations of the velocity we may neglect the inertial term in the vorticity equation (36), which becomes \[\frac{\partial}{\partial t}curl\mathbf{v}=\nu\Delta curl\mathbf{v}. \tag{44}\] By making use of \(\mathbf{v}=curl\mathbf{A}\), we get \[\left(\Delta-grad\,div\right)\left(\frac{\partial\mathbf{A}}{\partial t}-\nu\Delta\mathbf{A}\right)=0. \tag{45}\] A solution of this equation is provided by \[\frac{\partial\mathbf{A}}{\partial t}-\nu\Delta\mathbf{A}=0\ \, \tag{46}\] which leads to \[\mathbf{A}=\mathbf{A}_{0}e^{-\lambda\nu t}e^{\pm i\sqrt{\lambda}r}/r\ \, \tag{47}\] where \(\mathbf{A}_{0}\) and \(\lambda\) are two constants. The velocity acquires the form \[\mathbf{v}=-\mathbf{A}_{0}\times grad\left(e^{-\lambda\nu t}e^{\pm i\sqrt{\lambda}r}/r\right)\ \, \tag{48}\] and the pressure is uniform within this approximation. We note that, although the spatial dependence of the solution does not depend on viscosity, it is generated by the viscosity term \(\nu\Delta\mathbf{v}\). ## 5 Instabilities The equation of energy conservation for an incompressible fluid is \[\begin{array}{c}\frac{\partial}{\partial t}\left(\frac{1}{2}\rho v^{2}\right)+div\left[\mathbf{v}\left(\frac{1}{2}\rho v^{2}+p\right)-\frac{1}{2}\eta grad(v^{2})\right]+\\ \\ +\eta\left(\partial_{j}v_{i}\right)^{2}=0\ ;\end{array} \tag{49}\] it is obtained by multiplying by \(v\) the Navier-Stokes equation (31) for an incompressible fluid (\(div\mathbf{v}=0\)).
The \(div\)-term represents a transport of energy and mechanical work of the pressure, and an energy flux associated with collisions (viscosity); the term \(\eta\left(\partial_{j}v_{i}\right)^{2}\) represents the heat produced by viscosity. We integrate this equation over a volume \(V\) enclosed by a surface \(S\), \[\begin{array}{c}\frac{\partial}{\partial t}\int dV\left(\frac{1}{2}\rho v^{2}\right)+\oint dS\left[v_{n}\left(\frac{1}{2}\rho v^{2}+p\right)-\frac{1}{2}\eta\partial_{n}(v^{2})\right]+\\ \\ +\eta\int dV\left(\partial_{j}v_{i}\right)^{2}=0\ \,\end{array} \tag{50}\] where \(v_{n}\) is the velocity component normal to the surface and \(\partial_{n}\) is the derivative along the normal to the surface. We compare the orders of magnitude of the surface terms and the \(\eta\)-volume term, and get ratios of the form \(\frac{Sl}{V}R\), \(\frac{Sl}{V}(p/\rho v^{2})R\), \(\frac{Sl}{V}\), where \(l\) is the distance over which the velocity varies and \(R=vl/\nu\) is the Reynolds number. For moderate Reynolds numbers and \(Sl/V\ll 1\) we can neglect the surface contributions in comparison with the heat term. By writing \[\mathbf{v}=f(t)\mathbf{u}(\mathbf{r})\, \tag{51}\] the above equation becomes \[\frac{\partial}{\partial t}f^{2}\cdot\int dV\left(\frac{1}{2}\rho u^{2}\right)+\eta f^{2}\int dV\left(\partial_{j}u_{i}\right)^{2}=0. \tag{52}\] We can see that the time dependence of the velocity is a damped exponential. The flow is stable, as a consequence of the dissipated heat. The \(\eta\)-term in equation (52) gives, in fact, the increase of entropy. Let us assume that the integration domains are sufficiently small, such that \(Sl/V\) is of the order of unity; the velocity varies over a distance \(l\) inside the domains, but we assume that it suffers a large discontinuity across the surface, over a small distance \(\delta\ll l\). Then, it is easy to see that the dominant term in equation (50) is the collision term, such that equation (50) becomes \[\frac{\partial}{\partial t}f^{2}\cdot\int dV\left(\frac{1}{2}\rho u^{2}\right)-\frac{1}{2}\eta f^{2}\oint dS\partial_{n}(u^{2})=0. \tag{53}\] We can see that for a positive normal derivative the flow is unstable. The viscosity is insufficient to dissipate the energy as heat, and the energy is transferred by molecular collisions (viscosity) through surfaces of discontinuities. The process occurs in small domains, with large discontinuities of velocity across their surface, and the instabilities imply returning, swirling and fluctuating velocities. This is the turbulence phenomenon. We note that the instabilities are governed by viscosity, which gives also vorticity (when the entropy production is neglected). Moreover, we note that the inertial term does not appear in instabilities, though it plays an important role in turbulence. The above arguments can be extended to compressible fluids, including the variations of the temperature. Indeed, the energy conservation in this case reads \[\frac{\partial}{\partial t}\left(\frac{1}{2}\rho v^{2}+\rho\varepsilon\right)=-\partial_{j}\left[\rho v_{j}\left(\frac{1}{2}v^{2}+w\right)-v_{i}\sigma_{ij}^{{}^{\prime}}-\kappa\partial_{j}T\right]\ \, \tag{54}\] where \(\varepsilon\) is the internal energy per unit mass, \(w\) is the enthalpy per unit mass and \[\sigma_{ij}^{{}^{\prime}}=\eta\left(\partial_{i}v_{j}+\partial_{j}v_{i}-\frac{2}{3}\delta_{ij}div\mathbf{v}\right)+\zeta\delta_{ij}div\mathbf{v} \tag{55}\] is the viscosity tensor.
For a smooth flow and a sufficiently large volume the surface term in equation (54) can be neglected, and the energy is conserved, as it is well known. However, in the surface integral we have terms of the form \[\oint dS\left[\eta v_{i}\left(\partial_{i}v_{n}+\partial_{n}v_{i}\right)-\left(\frac{2}{3}\eta-\zeta\right)v_{n}div\mathbf{v}+\kappa\partial_{n}T\right]\ \, \tag{56}\] which imply normal derivatives to the surface, both of velocity and temperature. By collecting these contributions, we get \[\oint dS\left[\eta\partial_{n}(v^{2}/2)+\left(\frac{1}{3}\eta+\zeta\right)\partial_{n}(v_{n}^{2}/2)+\kappa\partial_{n}T\right]. \tag{57}\] We can see that for large normal derivatives across the surface, both for velocity and temperature, these terms may lead to instabilities. ## 6 Turbulence As it is well known, for a moderate turbulence, _i.e._ for slowly varying fluctuations, we may decompose the velocity field into a mean velocity and a fluctuating part, and limit ourselves to the time averaged Navier-Stokes equation. This way we get the Reynolds equations, for which the mean energy is coupled to the fluctuating energy, via model assumptions. A fully developed turbulence exhibits highly-varying fluctuations, such that we need to consider the time-dependent Navier-Stokes equation. The turbulent instabilities occurring in a fully developed turbulence exhibit large variations of the velocity over small distances. In this case we may assume that the velocity is split into a mean-flow velocity \(\mathbf{v}_{0}\) and a fluctuating part \(v\), where the mean-flow velocity \(\mathbf{v}_{0}\) may have a slight time variation, while the fluctuating velocity \(v\) is a rapidly varying velocity. By using this decomposition for an incompressible fluid, the Navier-Stokes equation reads \[\begin{array}{l}\frac{\partial\mathbf{v}_{0}}{\partial t}+\frac{\partial\mathbf{v}}{\partial t}+\left(\mathbf{v}_{0}grad\right)\mathbf{v}_{0}+\left(\mathbf{v}_{0}grad\right)\mathbf{v}+\left(\mathbf{v}grad\right)\mathbf{v}_{0}+\\ \\ +\left(\mathbf{v}grad\right)\mathbf{v}=-\frac{1}{\rho}gradp_{0}-\frac{1}{\rho}gradp+\nu\Delta\mathbf{v}_{0}+\nu\Delta\mathbf{v}\ \,\end{array} \tag{58}\] where \(p_{0}\) is the pressure corresponding to the main flow and \(p\) is the fluctuating part of the pressure. In this equation we have three distinct types of time variations, such that it should be viewed as three equations \[\begin{array}{l}\frac{\partial\mathbf{v}_{0}}{\partial t}+\left(\mathbf{v}_{0}grad\right)\mathbf{v}_{0}=-\frac{1}{\rho}gradp_{0}+\nu\Delta\mathbf{v}_{0}\,\\ \\ \frac{\partial\mathbf{v}}{\partial t}+\left(\mathbf{v}_{0}grad\right)\mathbf{v}+\left(\mathbf{v}grad\right)\mathbf{v}_{0}=-\frac{1}{\rho}gradp+\nu\Delta\mathbf{v}\,\\ \\ \left(\mathbf{v}grad\right)\mathbf{v}=0\ ;\end{array} \tag{59}\] similarly, the continuity equation should be split into \[div\mathbf{v}_{0}=0\,\ div\mathbf{v}=0. \tag{60}\] The first equation (59) is an independent equation, which gives the main flow velocity \(\mathbf{v}_{0}\). Since the main part of the velocity is taken by the fluctuating velocity, we may neglect the quadratic term in this equation.
The energy conservation and the heat transfer for this equation are given by \[\begin{array}{c}\frac{\partial}{\partial t}(v_{0}^{2}/2)=-\partial_{i}\left[v_{0i}\left(p_{0}/\rho+v_{0}^{2}/2\right)-\nu\partial_{i}(v_{0}^{2}/2)\right]-\\ \\ -\nu(\partial_{i}v_{0j})^{2}\,\\ \\ T_{0}\frac{ds_{0}}{dt}=\chi\Delta T_{0}+\nu(\partial_{i}v_{0j})^{2}\ \,,\end{array} \tag{61}\] where \(T_{0}\) is the temperature of the main flow, \(s_{0}\) is the entropy per unit mass of the main flow and \(\chi\) is the thermometric conductivity. Having solved the mean-flow equation we can pass to solve the second equation (59) for the fluctuating velocity \(\boldsymbol{v}\), with \(\boldsymbol{v}_{0}\) as a parameter. This equation has its own energy-conservation and heat-transfer equations. We note that the temperature and the entropy of the mean flow are different from the temperature and the entropy of the fluctuating part of the flow, which means that the two components of the flow (the mean flow and the fluctuating flow) are not in thermal equilibrium. Indeed, if we multiply the first equation (59) by \(\boldsymbol{v}\) and the second equation (59) by \(\boldsymbol{v}_{0}\), we get cross-terms of the form \(T_{0}\frac{ds}{dt}+T\frac{ds_{0}}{dt}\) in the heat-transfer equation, where \(T\) and \(s\) are the temperature and the entropy of the fluctuating flow. This indicates a heat exchange between the two components of the flow. We are left with the third equation (59), which, in general, is not satisfied. We conclude that the fully developed turbulence does not satisfy the Navier-Stokes equation. The quadratic term of the third equation (59) is equivalent to a rapidly varying internal force (Euler's force), which cannot be compensated by any physical external force. The fully developed turbulence is unstable. Under these conditions it is reasonable to be interested in time averaged quantities. Then, the fluctuating part of the flow is reduced to \[\overline{\left(\boldsymbol{v}grad\right)\boldsymbol{v}}=0. \tag{62}\] Since \(div\boldsymbol{v}=0\), the components of the velocity \(\boldsymbol{v}\) are not independent. In general, equation (62) is not satisfied, which means that the turbulent motion is unstable even on average. We note that a similar decomposition is valid for a compressible fluid, as long as the velocity, density and entropy fluctuations are independent of one another. Since a fully developed turbulence originates in large variations of the velocity across small distances, it is reasonable to associate these variations with a discrete distribution of positions \(\mathbf{r}_{i}\), which we call centres of turbulence. Further on, we assume that this is a homogeneous and isotropic distribution, such that we may write the velocity field as \[\mathbf{v}=\sum_{i}\mathbf{v}_{i}(t,R_{i})\ \,, \tag{63}\] where \(\mathbf{R}_{i}=\mathbf{r}-\mathbf{r}_{i}\). If \(curl\mathbf{v}_{i}\neq 0\), this velocity field represents a vorticial liquid, which is unstable. We assume that the fluctuating velocities are independent at distinct positions, _i.e._\(\overline{\mathbf{v}_{i}}=0\) and \(\overline{\mathbf{v}_{i}\mathbf{v}_{j}}\sim\delta_{ij}\). Equation (62) becomes \[\overline{\left(\mathbf{v}grad\right)\mathbf{v}}=\sum_{i}\overline{\left(\mathbf{v}_{i}\mathbf{R}_{i}/R_{i}\right)\left(d\mathbf{v}_{i}/dR_{i}\right)}. \tag{64}\] The conditions of homogeneity and isotropy imply that \(\mathbf{v}_{i}\) in the above equation may be replaced by the same velocity \(u\).
For a sufficiently dense set of positions \(\mathbf{r}_{i}\) we can define a density \(\rho_{v}\) of such points, which is a constant. Then, equation (64) can be transformed into the integral \[\overline{\left(\mathbf{v}grad\right)\mathbf{v}}=\rho_{v}\int dR\cdot R^{2}\int do\overline{\left(\mathbf{u}\mathbf{R}/R\right)\left(d\mathbf{u}/dR\right)}. \tag{65}\] If the radial integral is finite, the result of integration in the above equation is zero, due to the integration over the solid angle \(o\), such that the term \(\overline{\left(\mathbf{v}grad\right)\mathbf{v}}\) is zero. In this case we can say that the Navier-Stokes equation is trivially satisfied on average, being reduced to the mean-flow equation (first equation (59)). If the radial integral in equation (65) is singular for \(\mathbf{R}=0\), as it may often happen for vortices, we are left with a discrete set of singularities, extending over a small characteristic distance \(a\), where the singularity is \[\mathbf{u}\sim\left(a/R\right)^{n}\,,\ n>1. \tag{66}\] In each of these regions there exists a mass \(M\) of fluid, which can be carried by the background fluid and, at the same time, they may have their own motion. The averaged energy-conservation equation derived from the second equation (59), \[\frac{1}{2}\mathbf{v}_{0}grad\overline{v^{2}}+\partial_{j}\left(\overline{v_{i}v_{j}}v_{0i}\right)-\frac{1}{2}\nu\Delta\overline{v^{2}}=-\nu\overline{\left(\partial_{j}v_{i}\right)^{2}}\, \tag{67}\] shows that the fluctuating motion produces heat which is partly transported by the mean flow (the \(div\)-terms integrated over a volume are irrelevant). This dissipated heat should be compensated from the outside. Therefore, we are left with a quasi ideal classical gas of singular vortices, or a solution of vortices in the background fluid, in thermal quasi equilibrium. This is an example of emergent dynamics.[23] ## 7 Gas of singularities Let us assume a homogeneous, isotropic, fluctuating distribution of singular centres of turbulence localized at \(\mathbf{r}_{i}\) with mass \(M\), as described above. Their density is \[\rho=M\sum_{i}\delta(\mathbf{r}-\mathbf{r}_{i}) \tag{68}\] and \[\frac{\partial\rho}{\partial t}=-M\sum_{i}\mathbf{u}_{i}grad\delta(\mathbf{r}-\mathbf{r}_{i})\;\;, \tag{69}\] where \(\mathbf{u}_{i}=d\mathbf{r}_{i}/dt\) is their velocity. The velocity field of the singularities is \[\mathbf{u}=v\sum_{i}\mathbf{u}_{i}\delta(\mathbf{r}-\mathbf{r}_{i})\;\;, \tag{70}\] where \(v\) is the small volume over which the \(\delta\)-function is localized, such that \(v=a^{3}\) and \(M=\rho v\). Let us compute \[div(\rho\mathbf{u})=vMdiv\sum_{ij}\delta(\mathbf{r}-\mathbf{r}_{i})\mathbf{u}_{j}\delta(\mathbf{r}-\mathbf{r}_{j})=\] \[=Mdiv\sum_{i}\mathbf{u}_{i}\delta(\mathbf{r}-\mathbf{r}_{i})= \tag{71}\] \[=M\sum_{i}\mathbf{u}_{i}grad\delta(\mathbf{r}-\mathbf{r}_{i})=-\frac{\partial\rho}{\partial t}\;;\] we can see that the continuity equation is satisfied. Now, let us focus on the Navier-Stokes equation \[\rho\frac{\partial\mathbf{u}}{\partial t}+\rho(\mathbf{u}grad)\mathbf{u}=-gradp+\eta\Delta\mathbf{u}\;, \tag{72}\] and let us compute each term in this equation for our fluid of singularities. We have \[\frac{\partial\mathbf{u}}{\partial t}=v\sum_{i}\dot{\mathbf{u}}_{i}\delta(\mathbf{r}-\mathbf{r}_{i})-v\sum_{i}\mathbf{u}_{i}(\mathbf{u}_{i}grad)\delta(\mathbf{r}-\mathbf{r}_{i})\;.
\tag{73}\] The inertial term is \[\begin{array}{c}(\mathbf{u}grad)\mathbf{u}=v^{2}\sum_{ij}\mathbf{u}_{j}\delta(\mathbf{r}-\mathbf{r}_{i})(\mathbf{u}_{i}grad)\delta(\mathbf{r}-\mathbf{r}_{j})=\\ \\ =v\sum_{i}\mathbf{u}_{i}(\mathbf{u}_{i}grad)\delta(\mathbf{r}-\mathbf{r}_{i})\.\end{array} \tag{74}\] On comparing equations (73) and (74), we can see that the inertial (Euler's) term disappears from the equation. This is expected, since the \(\delta\)-function equates the variable \(\mathbf{r}\) to the function \(\mathbf{r}_{i}(t)\), which amounts to Lagrange's approach. The term on the left in equation (72) becomes \[\rho\frac{\partial\mathbf{u}}{\partial t}+\rho(\mathbf{u}grad)\mathbf{u}=M\sum_{i}\dot{\mathbf{u}}_{i}\delta(\mathbf{r}-\mathbf{r}_{i})\ \, \tag{75}\] and equation (72) reads now \[\begin{array}{c}M\sum_{i}\dot{\mathbf{u}}_{i}\delta(\mathbf{r}-\mathbf{r}_{i})=-v\sum_{i}p_{i}grad\delta(\mathbf{r}-\mathbf{r}_{i})+\\ \\ +\eta v\sum_{i}\mathbf{u}_{i}\Delta\delta(\mathbf{r}-\mathbf{r}_{i})\ \,\end{array} \tag{76}\] where \(p_{i}\) is the pressure at the position \(\mathbf{r}_{i}\). The pressure term in equation (76) is a force per unit volume acting upon the vortex placed at \(\mathbf{r}_{i}\); it may arise from the pressure exerted by the background fluid particles. This term may be written as \[\sum_{i}\mathbf{f}_{i}\delta(\mathbf{r}-\mathbf{r}_{i})\, \tag{77}\] where \(\mathbf{f}_{i}\) is the force acting at \(\mathbf{r}_{i}\). The factor \(\Delta\delta(\mathbf{r}-\mathbf{r}_{i})\) is of the order \(-\frac{1}{a^{2}}\delta(\mathbf{r}-\mathbf{r}_{i})\), where \(a\) is of the order of the dimension of the vortex (\(v=a^{3}\)); consequently, we may replace the viscosity terms in equation (76) by \[-\frac{\eta v}{a^{2}}\sum_{i}\mathbf{u}_{i}\delta(\mathbf{r}-\mathbf{r}_{i}). \tag{78}\] A similar contribution brings the \(\zeta\)-term. The equation of motion (76) describes a set of independent particles with mass \(M\), subjected to an external force \(\mathbf{f}_{i}\) and a friction force; the equation of motion of each such particle can be written as \[M\dot{\mathbf{u}}_{i}=\mathbf{f}_{i}-\eta a\mathbf{u}_{i}\ \, \tag{79}\] which is Newton's law of motion. The damping coefficient caused by the friction force is very small, such that we can consider the ensemble of singularities as a (quasi) ideal classical gas of independent, identical, pointlike particles. Therefore, a (singular) fully developed turbulence may be viewed as the (quasi) thermodynamic equilibrium of such a gas (or a solution of singularities in the background fluid). We can define a temperature of turbulence, which is approximately the mean kinetic energy of the translational motion of a singularity. Also, we can estimate a chemical potential by evaluating \(\overline{v^{2}}/2\). We note that the density \(\rho_{v}\) and the dimension \(a\) of the singularities remain undetermined; these parameters can be estimated from experiment. Also, it is worth noting that we may consider the equations of the fluid mechanics for this new gas of singularities, viewed as a continuous medium, at a higher scale. ## 8 Vorticial liquids ### Vortex For an incompressible fluid with an isentropic flow we consider a velocity field given by \[\mathbf{v}=\mathbf{\omega}\times gradf(r)\,\,\,, \tag{80}\] where \(\omega\) is a vector which may depend only on the time and the function \(f(r)\) is smooth everywhere, except, possibly, at the origin \(\mathbf{r}=0\), and vanishing rapidly at infinity.
The velocity \(v\) rotates about \(\omega\); such a velocity field defines a vortex. We can check that \(div\mathbf{v}=0\) and \[\mathbf{v}=-curl\left[\mathbf{\omega}f(r)\right]\,\,\,, \tag{81}\] such that we can define a vector potential \(\mathbf{A}=-\mathbf{\omega}f(r)\) (\(\mathbf{v}=curl\mathbf{A}\)). We note that \(div\mathbf{A}\neq 0\) and the vorticity differs from \(\Delta\mathbf{A}\), in general, \[curl\mathbf{v}=\Delta\left[\mathbf{\omega}f(r)\right]-grad\,div\left[\mathbf{\omega}f(r)\right]\neq-\Delta\mathbf{A}\,\,. \tag{82}\] If the velocity given by equation (81) satisfies the second equation (40), we should have \(\mathbf{\omega}\sim e^{-\nu\lambda t}\) and \(\Delta f+\lambda f=0\), _i.e._\(f\sim e^{\pm i\sqrt{\lambda}r}/r\), where \(\lambda\) is, in general, complex. In two dimensions \(f\) is a Bessel function. The Navier-Stokes equation gives the pressure. The most common example of a velocity field given by equation (80) is the filamentary vortex (the "cyclon"), with \(\mathbf{\omega}=const\), (\(\lambda=0\)), \(f(r)=-\ln r\) and the two-dimensional position vector \(r\) perpendicular to \(\omega\). In this case \(\mathbf{v}=\mathbf{r}\times\mathbf{\omega}/r^{2}\), \(\mathbf{A}=\mathbf{\omega}\ln r\) and \(curl\mathbf{v}=-2\pi\mathbf{\omega}\delta(\mathbf{r})\). The velocity is singular at \(\mathbf{r}=0\). For the filamentary vortex the Navier-Stokes equation can be satisfied for \(p=-\rho\omega^{2}/2r^{2}\), _i.e._ for an external potential \(\varphi=p/\rho=-\omega^{2}/2r^{2}\). This can be provided by the gravitational field for a fluid with a free surface. For convenience, we give the expression of the various terms in the Navier-Stokes equation for the vortex given by equation (80), \[\frac{\partial\mathbf{v}}{\partial t}=\left(\dot{\mathbf{\omega}}\times\mathbf{r}\right)\frac{f^{{}^{\prime}}}{r}\,\] \[(\mathbf{v}grad)\mathbf{v}=\left[\mathbf{\omega}(\mathbf{\omega}\mathbf{r})-\omega^{2}\mathbf{r}\right]\frac{f^{{}^{\prime}2}}{r^{2}}\, \tag{83}\] \[\Delta\mathbf{v}=(\mathbf{\omega}\times\mathbf{r})\frac{(f^{{}^{\prime\prime}}+2f^{{}^{\prime}}/r)^{{}^{\prime}}}{r}\ \,\] where the primes denote the derivatives of the function \(f\); the second equation (83) is derived by using the identity \[(\mathbf{v}grad)\mathbf{v}=-\mathbf{v}\times curl\mathbf{v}+grad(v^{2}/2). \tag{84}\] For a filamentary vortex the above expressions become \[\begin{array}{c}(\mathbf{v}grad)\mathbf{v}=-\omega^{2}\mathbf{r}\frac{f^{{}^{\prime}2}}{r^{2}}\,\\ \\ \Delta\mathbf{v}=(\mathbf{\omega}\times\mathbf{r})\frac{(f^{{}^{\prime\prime}}+f^{{}^{\prime}}/r)^{{}^{\prime}}}{r}\.\end{array} \tag{85}\] In general, the vortex given by equation (80) does not satisfy the Navier-Stokes equation; it develops internal (Euler's) forces which cannot be compensated; the vortex is unstable. Such vortices are examples of singular velocities. ### Vorticial liquid A set of vectors \(\mathbf{\omega}_{i}\) placed at \(\mathbf{r}_{i}\) forms a vorticial liquid. The velocity field is \[\mathbf{v}=\sum_{i}\mathbf{\omega}_{i}\times grad\!f_{i}(R_{i})\ \, \tag{86}\] where \(\mathbf{R}_{i}=\mathbf{r}-\mathbf{r}_{i}\). This velocity field is a superposition of independent vortices. Each \(i\)-th vortex develops an internal force, while the inertial term generates an interaction which brings an additional force; this additional force depends on all the other \(j\)-th vortices, \(j\neq i\).
Therefore, the Navier-Stokes equation is not satisfied by the velocity field given by equation (86), in the sense that there is no physical external pressure to compensate the Euler force. We assume randomly fluctuating vectors \(\mathbf{\omega}_{i}\), as a distinctive feature of a fully developed vorticial turbulence. It is likely that the fluid develops fluctuations as a reaction to its uncompensated internal forces. Specifically, we assume \(\overline{\mathbf{\omega}}_{i}=0\) and \(\overline{\omega_{i}^{\alpha}\omega_{j}^{\beta}}=\frac{1}{3}\overline{\omega_ {i}^{2}}\delta_{ij}\delta^{\alpha\beta}\), where \(\alpha,\,\beta=1,2,3\) are the cartesian labels of the components of the vectors \(\mathbf{\omega}_{i}\), and the overbar indicates the average over time. Then, the average velocity is zero (\(\overline{\mathbf{v}}=0\)) and we are left with the inertial term \[\overline{(\mathbf{v}grad)\mathbf{v}}=\overline{v_{j} \partial_{j}v_{i}}. \tag{87}\] The calculation of this term is straightforward; we get \[\begin{array}{c}\overline{(\mathbf{v}grad)\mathbf{v}}= \frac{1}{3}\sum_{i}\omega_{i}^{2}[\frac{1}{2}grad\left(gradf_{i}(R_{i})\right)^ {2}-\\ \\ -gradf_{i}(R_{i})\cdot\Delta f_{i}(R_{i})]\.\end{array} \tag{88}\] The averaged Euler forces are \[\begin{array}{c}-\overline{\mathbf{v}\times curl\mbox{\boldmath$v$ }}=-\frac{1}{3}\sum_{i}\omega_{i}^{2}[\frac{1}{2}grad\left(gradf_{i}(R_{i}) \right)^{2}+\\ \\ +gradf_{i}(R_{i})\cdot\Delta f_{i}(R_{i})]\,\\ \\ \overline{grad(v^{2}/2)}=\frac{1}{3}\sum_{i}\omega_{i}^{2}grad\left( gradf_{i}(R_{i})\right)^{2}\.\end{array} \tag{89}\] By using the spherical symmetry of the function \(f_{i}(R_{i})\), these expressions can be cast in the form \[\begin{array}{c}\overline{(\mathbf{v}grad)\mathbf{v}}=- \frac{2}{3}\sum_{i}\omega_{i}^{2}f_{i}^{{}^{\prime}2}\frac{\mathbf{R} _{i}}{R_{i}^{2}}\,\\ \\ -\overline{\mathbf{v}\times curl\mathbf{v}}=-\frac{2}{3} \sum_{i}\omega_{i}^{2}f_{i}^{{}^{\prime}}\left(f_{i}^{{}^{\prime\prime}}+ \frac{1}{R_{i}}f_{i}^{{}^{\prime}}\right)\frac{\mathbf{R}_{i}}{R_{i}}\,\\ \\ \overline{grad(v^{2}/2)}=\frac{2}{3}\sum_{i}\omega_{i}^{2}f_{i}^{{}^{\prime}}f _{i}^{{}^{\prime\prime}}\frac{\mathbf{R}_{i}}{R_{i}}\.\end{array} \tag{90}\] We can see that even on average the Navier-Stokes equation is not satisfied, in the sense discussed above. Even on average the vorticial liquid develops internal forces which are not equilibrated; it is unstable. We shall give specific examples of such an instability below. Now, let us assume that the vorticial liquid is sufficiently dense, _i.e._ if we can define a density \(\rho_{v}\) of points \(\mathbf{r}_{i}\); further, we assume that the liquid is homogeneous and isotropic, _i.e._ this density is constant and the \(\omega_{i}\) and the functions \(f_{i}\) can be replaced in the above equations by uniform functions \(\omega_{i}=\omega\) and \(f_{i}=f\). We assume that this is another distinctive feature of a fully developed turbulence. Then, we may transform the summation over \(i\) in equations (90) into an integral, like in equation (65). By choosing the origin at \(\mathbf{r}=\mathbf{r}_{i}\) for a fixed \(\mathbf{r}_{i}\), and using the notation \(\mathbf{R}=\mathbf{r}-\mathbf{r}_{i}\), we get \[\overline{(\mathbf{v}grad)\mathbf{v}}=-\frac{2}{3}\rho_{v} \omega^{2}\int_{0}^{\infty}dR\cdot Rf^{{}^{\prime}2}(R)\int do(\mathbf{R }/R)\ ; \tag{91}\] the result of integration in this equation is zero, due to the integration over the solid angle \(o\), providing the radial integration is finite. 
In this case the Navier-Stokes equation is satisfied trivially. Let us assume that the integral over \(R\) is singular at \(R=0\), as another distinctive feature of a fully developed turbulence (the function \(f\) is assumed to decrease sufficiently rapid at infinity to have a finite integral in this limit). The integration outside a small region around the \(i\)-th point is zero, while the integration over such a small region is indefinite. This singularity implies \[f(R)\sim(a/R)^{n}\,\ n>0 \tag{92}\] for \(R\ll a\), where \(a\) is a small characteristic distance (compare with equation (66)). Therefore, we are left with a discrete set of points \(\mathbf{r}_{i}\), where the function \(f(R_{i})\) and the velocity (\(v\sim 1/R_{i}^{n+1}\)) are singular. We have now a small region of dimension \(a\), around each point \(\mathbf{r}_{i}\), which includes a mass of fluid, say, \(M\), where the inertial term given by equation (91) is not defined. Since the positions \(\mathbf{r}_{i}\) may change in time, we are left with a classical gas of particle-like vortices (or a solution of vortices in the background fluid), as discussed above. Of course, we may have also a mixture of vorticial gases, each characterized by a dimension \(a\) and a mass \(M\). The equation of energy conservation (equation (67)) \[\frac{\partial}{\partial t}\left(\frac{1}{2}v^{2}\right)+\mathbf{v} \cdot(\mathbf{v}grad)\mathbf{v}=\nu\mathbf{v} \Delta\mathbf{v}\, \tag{93}\] averaged over the fluctuating vectors \(\mathbf{\omega}_{i}\), is reduced to the viscosity term \[\nu\overline{\mathbf{v}\Delta\mathbf{v}}=\frac{2\nu}{3}\sum_{i} \omega_{i}^{2}f_{i}^{{}^{\prime}}\left(f_{i}^{{}^{\prime\prime}}+2f_{i}^{{}^{ \prime}}/R_{i}\right)^{{}^{\prime}}\;. \tag{94}\] This term should be computed outside the regions with dimension \(a\), where the motion is defined. The non-vanishing value of this term indicates an energy loss, which should be compensated from the outside. The vorticial gas is in quasi equilibrium. The above considerations are valid for spherical-symmetric functions \(f_{i}(R_{i})\); if the vortices have a lower (internal) symmetry, the unit vector \(\mathbf{R}/R\) in equation (91) is replaced by functions which do not have a spherical symmetry, and the inertial term is not vanishing, in general. The vortices are unstable, and, likely, they could tend to acquire a spherical symmetry, which ensures a (quasi)-equilibrium. ### Filamentary liquid Let us consider a set of rectilinear, parallel filaments, directed along the \(z\)-axis, placed at positions \(\mathbf{r}_{i}\) in the \((x,y)\)-plane, with vorticities \(\mathbf{\omega}_{i}\). This is a two-dimensional vorticial liquid of "cyclons". The velocity field is given by \[\mathbf{v}=-\sum_{i}\mathbf{\omega}_{i}\times grad\ln R_{i} =\sum_{i}\frac{\mathbf{R}_{i}\times\mathbf{\omega}_{i}}{R_{i} ^{2}}\;\;, \tag{95}\] where \(\mathbf{R}_{i}=\mathbf{r}-\mathbf{r}_{i}\). Equation (95) shows that the velocity is derived from a vector potential \(A\), through \(\mathbf{v}=curl\mathbf{A}\). By taking the \(curl\) in this equation, we get \[\Delta\mathbf{A}=-2\mathbf{\omega} \tag{96}\] (providing \(div\mathbf{A}=0\) and \(div\mathbf{\omega}=0\)), where we introduce the notation \(curl\mathbf{v}=2\mathbf{\omega}\); \(\omega\) is called vorticity. 
The vorticity distribution corresponding to equation (95) \[\mathbf{\omega}(\mathbf{r})=\frac{1}{2}curl\mathbf{v}=-\pi\sum_{i}\mathbf{\omega}_{i}\delta(\mathbf{R}_{i})\;\;, \tag{97}\] gives the vector potential \[\mathbf{A}=\sum_{i}\mathbf{\omega}_{i}\ln R_{i}. \tag{98}\] According to equation (84), the inertial term has the components \[\mathbf{f}=-\mathbf{v}\times curl\mathbf{v}=-2\pi\sum_{i\neq j}\omega_{i}\omega_{j}grad_{i}\ln R_{ij}\cdot\delta(\mathbf{R}_{i})\ \, \tag{99}\] where \(\mathbf{R}_{ij}=\mathbf{r}_{i}-\mathbf{r}_{j}\), and \(grade\), where \[e=\frac{1}{2}v^{2}=\sum_{i\neq j}\omega_{i}\omega_{j}\frac{\mathbf{R}_{i}\mathbf{R}_{j}}{2R_{i}^{2}R_{j}^{2}} \tag{100}\] is a density of kinetic energy (per unit mass). Apart from the force \(f\), which acts at the positions of the vortices, there exist internal forces given by \(grade\), which make the liquid unstable. The motion and the statistics of parallel, rectilinear filaments have been extensively investigated,[24]-[31] the instability being associated with a negative temperature in an attempt of a statistical theory.[25, 27] The total force given by equation (99) \[\mathbf{F}=\int d\mathbf{r}\mathbf{f}=-2\pi\sum_{i\neq j}\omega_{i}\omega_{j}grad_{i}\ln R_{ij} \tag{101}\] is zero. We can see that a force \[\mathbf{f}_{ij}=-\mathbf{f}_{ji}=-2\pi\omega_{i}\omega_{j}grad_{i}\ln R_{ij} \tag{102}\] acts between any pair \((ij)\) of vortices. This force derives from a potential \[U_{ij}=2\pi\omega_{i}\omega_{j}\ln R_{ij}\ \, \tag{103}\] such that \[\mathbf{F}=-\sum_{i\neq j}grad_{i}U_{ij} \tag{104}\] The density of kinetic energy \(e\) can be written as \[e=\frac{1}{2}v^{2}=\frac{1}{2}\mathbf{v}curl\mathbf{A}=-\frac{1}{2}div(\mathbf{v}\times\mathbf{A})+\mathbf{A}\mathbf{\omega}. \tag{105}\] The first term in equation (105) is singular at \(\mathbf{r}=\mathbf{r}_{i}\); we integrate this term over the whole space, transform it into surface integrals, both at infinity and over small circles around each filament, and neglect their contributions. The result of such integrations is a self-energy (or a self-force), which may be left aside. We call this procedure a "renormalization".[32] By doing so, we are left with a total kinetic energy \[\begin{array}{c}E=\int d\mathbf{r}e=\int d\mathbf{r}\mathbf{A}\mathbf{\omega}=-\pi\sum_{i\neq j}\omega_{i}\omega_{j}\ln R_{ij}=\\ \\ =-\frac{1}{2}\sum_{i\neq j}U_{ij}=-U\;\;,\end{array} \tag{106}\] where \(U\) is the total potential energy. We can see that the total energy is conserved, _i.e._\(E+U=const\). Also, the total force \(\mathbf{F}=0\), such that the total momentum is conserved. The total angular momentum is \(-2\int d\mathbf{r}\mathbf{A}\); it is proportional to \(\sum_{i}\mathbf{\omega}_{i}\). The total torque \[2\pi\sum_{i\neq j}\omega_{i}\omega_{j}\frac{\mathbf{r}_{i}\times\mathbf{r}_{j}}{R_{ij}^{2}} \tag{107}\] is zero. By this "renormalization" procedure the points \(\mathbf{r}_{i}\) are completely decoupled from the fluid, and they may have their own motion. The average over fluctuating vorticities can be computed straightforwardly, by using \(\overline{\mathbf{\omega}_{i}^{2}}=\frac{1}{2}\omega_{i}^{2}\); it is given by \[\overline{(\mathbf{v}grad)\mathbf{v}}=-\sum_{i}\omega_{i}^{2}\frac{\mathbf{R}_{i}}{2R_{i}^{4}}\;. \tag{108}\] We can see that for a sufficiently dense, homogeneous and isotropic liquid we get a gas of (singular) vortices, as discussed above.
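The pairwise relations above are easy to probe numerically. The minimal sketch below is illustrative only (the filament positions and vorticities are arbitrary, made-up values); it evaluates the forces of equation (102) and the energies of equations (103) and (106), and checks that the total force vanishes and that \(E=-U\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
pos = rng.uniform(-1.0, 1.0, size=(n, 2))   # filament positions r_i in the (x, y) plane
w = rng.uniform(-1.0, 1.0, size=n)          # vorticities omega_i (directed along z)

F = np.zeros(2)          # total force, eq. (101)
U = 0.0                  # total potential energy, U = (1/2) sum_{i != j} U_ij
E = 0.0                  # total kinetic energy, eq. (106)
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        Rij = pos[i] - pos[j]
        R = np.linalg.norm(Rij)
        # f_ij = -2 pi w_i w_j grad_i ln R_ij, eq. (102)
        F += -2 * np.pi * w[i] * w[j] * Rij / R ** 2
        U += 0.5 * 2 * np.pi * w[i] * w[j] * np.log(R)   # (1/2) U_ij, eq. (103)
        E += -np.pi * w[i] * w[j] * np.log(R)            # eq. (106)

print("total force :", F)        # ~ (0, 0), by the antisymmetry f_ij = -f_ji
print("E + U       :", E + U)    # ~ 0, i.e. E = -U
```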
### Coulombian liquid For \(f_{i}(R_{i})=-1/R_{i}\) in equation (86) we get a coulombian vorticial liquid with the velocity field \[\mathbf{v}=-\sum_{i}\mathbf{\omega}_{i}\times grad(1/R_{i})=\sum_{i}\frac{\mathbf{\omega}_{i}\times\mathbf{R}_{i}}{R_{i}^{3}} \tag{109}\] and the vector potential \[\mathbf{A}=\sum_{i}\frac{\mathbf{\omega}_{i}}{R_{i}}\;. \tag{110}\] The equation \(\Delta\mathbf{A}=-2\mathbf{\omega}\) is satisfied for \[\mathbf{\omega}(\mathbf{r})=2\pi\sum_{i}\mathbf{\omega}_{i}\delta(\mathbf{R}_{i})\;\;, \tag{111}\] but this vorticity differs from \[\frac{1}{2}curl\mathbf{v}=2\pi\sum_{i}\mathbf{\omega}_{i}\delta(\mathbf{R}_{i})-\frac{1}{2}grad\,div\sum_{i}(\mathbf{\omega}_{i}/R_{i}). \tag{112}\] By applying the "renormalization" procedure we get the force \[\mathbf{F}=\int d\mathbf{r}\mathbf{f}=-\int dr\mathbf{v}\times curl\mathbf{v}=4\pi\sum_{i\neq j}grad_{i}\frac{\mathbf{\omega}_{i}\mathbf{\omega}_{j}}{R_{ij}} \tag{113}\] and the energy \[E=\int d\mathbf{r}e=\frac{1}{2}\int drv^{2}=2\pi\sum_{i\neq j}\frac{\mathbf{\omega}_{i}\mathbf{\omega}_{j}}{R_{ij}}\;\;, \tag{114}\] where, in both cases \(\mathbf{v}=curl\mathbf{A}\) is used. The potential from equation (103) is now \(U_{ij}=-4\pi\frac{\mathbf{\omega}_{i}\mathbf{\omega}_{j}}{R_{ij}}\). The total energy is conserved, the total force is zero, the total angular momentum is proportional to \(\sum_{i}\mathbf{\omega}_{i}\) and the total torque, which is zero, is \[4\pi\sum_{i\neq j}\mathbf{\omega}_{i}\mathbf{\omega}_{j}\frac{\mathbf{r}_{i}\times\mathbf{r}_{j}}{R_{ij}^{3}}. \tag{115}\] The average of the inertial term over fluctuating vortices is obtained from equation (90) \[\overline{(\mathbf{v}grad)\mathbf{v}}=-\frac{2}{3}\sum_{i}\omega_{i}^{2}\frac{\mathbf{R}_{i}}{R_{i}^{6}}\;\;, \tag{116}\] such that we may get a gas of singular vortices. ### Dipolar liquid A dipolar liquid is defined by the vorticity \[\mathbf{\omega}=-2\pi\sum_{i}\mathbf{m}_{i}\times grad\delta(\mathbf{r}-\mathbf{r}_{i})\;\;, \tag{117}\] where the vectors \(\mathbf{m}_{i}\) may depend on the time, at most. We get the vector potential \[\mathbf{A}=\sum_{i}\frac{\mathbf{m}_{i}\times\mathbf{R}_{i}}{R_{i}^{3}}=-\sum_{i}\mathbf{m}_{i}\times grad(1/R_{i}) \tag{118}\] and the velocity field \[\mathbf{v}(\mathbf{r})=\sum_{i}\left[-\mathbf{m}_{i}/R_{i}^{3}+3\mathbf{R}_{i}(\mathbf{m}_{i}\mathbf{R}_{i})/R_{i}^{5}\right]= \tag{119}\] \[=\sum_{i}grad\left[\mathbf{m}_{i}grad(1/R_{i})\right]\.\] We recognize in these equations magnetic (dipole) moments \(\mathbf{m}_{i}\), a dipolar vector potential \(A\) and a magnetic field \(v\). The inertial term has the components \[\mathbf{f}=-\mathbf{v}\times curl\mathbf{v}=2\mathbf{\omega}\times\mathbf{v}=\] \[=4\pi\sum_{i\neq j}grad\left[\mathbf{m}_{j}grad(1/R_{j})\right]\times\left[\mathbf{m}_{i}\times grad\delta(\mathbf{R}_{i})\right] \tag{120}\] and \(grade\), where \[e=\frac{1}{2}v^{2}=\frac{1}{2}\sum_{i\neq j}[\frac{\mathbf{m}_{i}\mathbf{m}_{j}}{R_{i}^{3}R_{j}^{3}}-\frac{3(\mathbf{m}_{i}\mathbf{R}_{j})(\mathbf{m}_{j}\mathbf{R}_{j})}{R_{i}^{3}R_{j}^{5}}-\] \[-\frac{3(\mathbf{m}_{j}\mathbf{R}_{i})(\mathbf{m}_{i}\mathbf{R}_{i})}{R_{i}^{5}R_{j}^{3}}+ \tag{121}\] \[+\frac{9(\mathbf{R}_{i}\mathbf{R}_{j})(\mathbf{m}_{i}\mathbf{R}_{i})(\mathbf{m}_{j}\mathbf{R}_{j})}{R_{i}^{5}R_{j}^{5}}]\.\] We can see that the dipolar liquid is unstable.
The total force and the total kinetic energy are \[\mathbf{F}=\int d\mathbf{r}\mathbf{f}=2\int d\mathbf{r}\mathbf{\omega}\times\mathbf{v}=-\sum_{i\neq j}grad_{i}U_{ij}\ \, \tag{122}\] \[E=\int d\mathbf{r}e=\int d\mathbf{r}\mathbf{\omega}\mathbf{A}=-\frac{1}{2}\sum_{i\neq j}U_{ij}\ \,\] where \[U_{ij}=-4\pi\mathbf{m}_{i}grad_{i}\left[\mathbf{m}_{j}grad_{i}(1/R_{ij})\right]. \tag{123}\] By this "renormalization" procedure, the liquid is reduced to a set of interacting particle-like vortices. We can see that the energy is conserved, the total force is zero, the total angular momentum and the total torque are zero. The time average over vorticities in equation (121) leads to \[\overline{e}=\frac{1}{2}\sum_{i}\left[\frac{m_{i}^{2}}{R_{i}^{6}}+\frac{3\overline{(\mathbf{m}_{i}\mathbf{R}_{i})^{2}}}{R_{i}^{8}}\right]=\sum_{i}\frac{m_{i}^{2}}{R_{i}^{6}}\, \tag{124}\] which gives a force \[grad\overline{e}=-6\sum_{i}\frac{m_{i}^{2}\mathbf{R}_{i}}{R_{i}^{8}}. \tag{125}\] For a dense, homogeneous and isotropic liquid the force given by equation (125) is zero (for \(\mathbf{R}_{i}\neq 0\)); also, the force \(f\) is zero for \(\mathbf{R}_{i}\neq 0\), and the Navier-Stokes equation is satisfied trivially (on average). We note that \(\Delta\mathbf{v}\) is zero for \(\mathbf{R}_{i}\neq 0\) (equation (119)), such that the viscosity contribution is zero. We are left with a set of positions \(\mathbf{r}_{i}\), each surrounded by a small region, where the motion is not defined. According to the above discussion, such a structure may be viewed as a (quasi) ideal classical gas of vortices (or a solution of vortices in the background fluid). ## 9 Concluding remarks In fairly general conditions we have given in this paper an explicit (smooth) solution for the potential flow. We have shown that, rigorously speaking, the equations of the fluid mechanics do not have rotational solutions. However, usually we may neglect the variations of the density and the temperature, such that, in these conditions, the Navier-Stokes equation may exhibit (approximate) vorticial solutions, governed by the viscosity. We give arguments that the vortices are unstable. On the other hand, for large variations of the velocity over small distances, the fluid velocity exhibits turbulent, highly fluctuating instabilities, controlled by viscosity. Such a fully developed turbulence occurs as a consequence of the insufficiency of the viscosity to dissipate heat. We represent the fully developed turbulence as a superposition of highly fluctuating velocities, associated with a discrete distribution of turbulence centres, and are interested in the temporal average of this velocity field. A regular mean flow may be added. It is shown that the Navier-Stokes equation is not satisfied on average. However, for a homogeneous and isotropic distribution of (non-singular) turbulence centres (as another distinctive feature of a fully developed turbulence), the temporal average of both the fluctuating velocity and the inertial term is zero, such that the Navier-Stokes equation is satisfied trivially. If the velocity is singular at the turbulence centres we are left with a quasi ideal classical gas of singularities (or a solution of singularities in the background fluid), in thermal quasi equilibrium, as an example of emergent dynamics. The Navier-Stokes equation for this fluid of singularities is reduced to Newton's law of motion (with a small friction).
At a higher scale, equations of fluid mechanics can be considered for this gas, as a continuous medium. We have illustrated all the above considerations with three examples of (singular) vorticial liquids. **Acknowledgements** The author is indebted to the members of the Department of Theoretical Physics, the Institute of Physics and Nuclear Engineering, the Institute of Atomic Physics, Magurele, for many enlightening discussions. A helpful analysis by Dr. F. Buzatu is particularly acknowledged. This work was carried out within the Program Nuclei, funded by the Romanian Ministry of Research, Innovation and Digitization, project no. PN23210101/2023. **Conflict of interests:** The author declares no conflict of interest.
2309.11673
Error mitigation via error detection using Generalized Superfast Encodings
We provide a new approach to error mitigation for quantum chemistry simulation that uses a Bravyi-Kitaev Superfast encoding to implement a quantum error detecting code within the fermionic encoding. Our construction has low-weight parity checks as well. We show that for the spinless Hubbard model with nearest-neighbor repulsion terms, one-qubit errors are detectable, and more complicated errors are detectable with high probability. While our error-detection requires additional quantum circuitry, we argue that there is a regime in which the beneficial effect of error-mitigation outweighs the deleterious effects of additional errors due to additional circuitry. We show that our scheme can be implemented under realistic qubit connectivity requirements.
Tobias Hagge, Nathan Wiebe
2023-09-20T22:47:23Z
http://arxiv.org/abs/2309.11673v1
# Error mitigation via error-detection using Generalized Superfast Encodings ###### Abstract We provide a new approach to error mitigation for quantum chemistry simulation that uses a Bravyi-Kitaev Superfast encoding to implement a quantum error detecting code within the fermionic encoding. Our construction has low-weight parity checks as well. We show that for the spinless Hubbard model with nearest-neighbor repulsion terms, one-qubit errors are detectable, and more complicated errors are detectable with high probability. While our error-detection requires additional quantum circuitry, we argue that there is a regime in which the beneficial effect of error-mitigation outweighs the deleterious effects of additional errors due to additional circuitry. We show that our scheme can be implemented under realistic qubit connectivity requirements. ## 1 Introduction In the current noisy-intermediate scale quantum (NISQ) era, quantum resources for a computation are severely constrained. Quantum error mitigation [1] techniques attempt to improve the accuracy of results of quantum programs while using no or minimal additional quantum resources. One approach to error mitigation is to leverage physical symmetries inherent in the problem description or its representation [2, 3, 4] to detect and correct errors. For electronic structure problems, one source for these symmetries is the encoding of fermions in qubit space; the Bravyi-Kitaev Superfast (BKSF) encoding [5], and variants such as the Generalized Superfast Encoding (GSE) [4] and Majorana Loop Stabilizer Encoding (MLSE) [6] have been shown to possess error-detecting properties, the latter two being capable of full one-qubit error-correction in some applications. Unfortunately, such encodings only protect the encoded fermions, not the quantum circuits which manipulate these fermions. It turns out that the set of operations which are protected by the encoding does not even include evolutions in the code space. In the VQE experiment we will consider in this paper, the region of the quantum circuit to which the proven error-mitigating properties apply, without special effort or circumstances, has circuit depth one. Ensuring fault-tolerance properties for the rest of the circuit requires careful management of encoding choices and implementation, with ancilla qubits and extra circuitry in some cases. The limitations are reminiscent of gate-set limitations imposed by the Eastin-Knill theorem [7], though that theorem does not apply here because the fermion-to-qubit encodings in question are not transverse. Error-correction capabilities are further limited because, unlike qubit-to-qubit encodings, fermion-to-qubit encodings cannot be applied recursively to increase code distance. On the other hand, the error-detection properties of such codes are fairly resilient; multiple errors are highly likely to trigger an error detection even when code distance bounds are exceeded. These limitations beg the question, is there a regime in which the error-mitigation properties of fermion-to-qubit encodings provide an advantage in practice? In this paper, we develop the use of stabilizer code properties of fermion to qubit mappings as an error mitigation technique. To this end, we describe error-detecting quantum circuits suitable for use in a VQE algorithm, and realize them under explicit and plausible qubit connectivity assumptions. 
We show that there is a plausible regime in which error-detection improves the accuracy of computed expectation values, at the cost of additional sampling complexity. The content of the rest of this paper is as follows. In Section 2 we recall the definition of an edge and vertex algebra. In Section 3 we review the fermion-to-qubit encoding in which we will work, the Bravyi-Kitaev generalized superfast encoding (GSE), motivate the choice of this particular fermion-to-qubit encoding in our context, and describe a procedure for zero-state initialization. In Section 4 we recall the definition of the Fermi-Hubbard model and describe an error-detecting variant for spinless two-dimensional lattices. In Section 5 we develop circuitry sufficient to implement a VQE algorithm using the GSE encoding, and analyze its requirements. In Section 6 we develop a version of the results in Section 5 under reduced connectivity requirements. In Section 7 we analyze the performance of fault-detecting VQE circuits under reduced connectivity requirements. ## 2 Edge and vertex algebras To evolve fermionic quantum Hamiltonians with a quantum computer, it is necessary to represent them in qubit space. For second-quantized Hamiltonians relevant to quantum computing applications, the Hamiltonian \(H\) is commonly represented using the fermionic operator algebra generated by creation and annihilation operators \(a_{j}^{\dagger}\) and \(a_{j}\), respectively, indexed over \(m\) fermionic occupancy sites \(j\in 1\ldots m\). The creation and annihilation operators have the following relations: 1. \(a_{j}a_{k}^{\dagger}+a_{k}^{\dagger}a_{j}=\delta_{j,k}\), 2. \(a_{j}a_{k}+a_{k}a_{j}=0\), 3. \(a_{j}^{\dagger}a_{k}^{\dagger}+a_{k}^{\dagger}a_{j}^{\dagger}=0\). The presence of commutation and anti-commutation relations makes it natural to represent \(a_{j}\) and \(a_{j}^{\dagger}\) in qubit space with Pauli operators. The most straightforward approach is the Jordan-Wigner embedding. Low-weight Pauli operators, however, have few anti-commutation relations relative to the number possessed by raising and lowering operators. This dichotomy constrains the compactness of the encoding; \(O(m)\) qubits are required to represent each \(a_{j}\) and \(a_{j}^{\dagger}\) in the Jordan-Wigner formalism. To solve this problem, a family of fermion-to-qubit mappings attempt to lower the weight of fermionic operators which appear in a given Hamiltonian. Some of these mappings [8, 9, 10] attempt to modify the Jordan-Wigner representation directly. Others [5, 4, 6, 11, 12], including the original Bravyi-Kitaev superfast method on which the generalized superfast encoding method is based, reduce the weight of qubit operators by working in the even fermionic operator subalgebra and encoding a different basis of operators in qubit space. All of these methods can be interpreted as variants of exact bosonization [12]. For any fermionic Hamiltonian \(H\), expressed as a creation and annihilation operator polynomial, there is an interaction graph \(G\) with one vertex for each site index \(1\ldots m\) in \(H\) and one edge for each pair of site indices which co-occur in an \(H\) summand. The summands are contained in a fermionic operator subalgebra \(A_{G}\), the _local fermionic operator algebra_, which is generated by edge and vertex operators \(A_{j,k}\) and \(B_{j}\), defined below, one for each edge \((j,k)\) and each vertex \(j\) respectively of \(G\)[5].
**Definition 2.1**.: _The edge and vertex operators are usually defined in terms of the Majorana operators \(c_{2j}\) and \(c_{2j+1}\), for \(j\in 1\ldots m\), which are as follows:_ \[c_{2j}:=a_{j}+a_{j}^{\dagger}\qquad c_{2j+1}:=\frac{a_{j}-a_{j}^{\dagger}}{i},\] _The edge and vertex operators are then:_ \[B_{j}=-ic_{2j}c_{2j+1}=I-2a_{j}^{\dagger}a_{j},\qquad A_{j,k}=-ic_{2j}c_{2k}=- i(a_{j}+a_{j}^{\dagger})(a_{k}+a_{k}^{\dagger}).\] **Proposition 2.2**.: _The following algebraic properties hold._ 1. _for all_ \(j,k\in 1\ldots 2m\)_, the Majorana operators satisfy the following:_ \[c_{j}c_{k}+c_{k}c_{j}=2\delta_{j,k}.\] (1) 2. _For all edges_ \((j,k)\) _in the interaction graph_ \(G\)_, the edge and vertex operators_ \(A_{jk}\)_,_ \(B_{j}\)_, and_ \(B_{k}\) _satisfy the following:_ \[A_{j,k}^{\dagger} =A_{j,k},\] (2) \[B_{j}^{\dagger} =B_{j},\] (3) \[A_{j,k}^{2} =B_{j}^{2}=1,\] (4) \[B_{j}B_{k} =B_{k}B_{j},\] (5) \[A_{j,k} =-A_{k,j},\] (6) \[A_{j,k}A_{k,l} =-A_{k,l}A_{j,k},\text{if }j\neq l,\] (7) \[A_{jk}A_{l,m} =A_{l,m}A_{jk}\text{ if }j,k,l,m\text{ are distinct},\] (8) \[A_{j,k}B_{j} =-B_{j}A_{j,k},\] (9) \[A_{j,k}B_{l} =B_{l}A_{j,k}\text{ if }j,k,l\text{ are distinct},\] (10) \[i^{n}\prod_{j=0}^{n-1}A_{k_{j},k_{(j+1\mod n)}} =1,\text{for each cycle with ordered vertices }(k_{0},\ldots k_{n-1})\text{ in }G.\] (11) 3. _The following equalities relate raising and lowering operators to edge and vertex operators:_ \[\frac{I-B_{j}}{2} =a_{j}^{\dagger}a_{j},\] (12) \[B_{j}A_{j,k} =c_{2j+1}c_{2k}=-i(a_{j}-a_{j}^{\dagger})(a_{k}+a_{k}^{\dagger}),\] (13) \[A_{j,k}B_{k} =-c_{2j}c_{2k+1}=i(a_{j}+a_{j}^{\dagger})(a_{k}-a_{k}^{\dagger}),\] (14) \[B_{j}A_{j,k}+A_{j,k}B_{k} =2i(a_{j}^{\dagger}a_{k}+a_{k}^{\dagger}a_{j})\] (15) Proof.: All of these properties follow algebraically from the definitions. The above relations show that each edge and vertex operator anticommutes with those edge and vertex operators with which it shares a single vertex. The edge and vertex qubit operators we will consider have weight \(O(d)\), where \(d\) is the degree of the graph \(G\). This is an asymptotic improvement over the \(O(n)\) weight required for Jordan-Wigner operators for families of graphs with bounded \(d\). ## 3 The Bravyi-Kitaev superfast encoding, generalized superfast encodings (GSE), and stabilizer codes ### Generalized superfast encodings The superfast family of encodings map edge and vertex operators to Pauli operators, in a way that satisfies all of the edge and vertex operator relations except Equation 11. The left hand side of Equation 11 is known as a _loop operator_ and plays a special role in the construction. To make Equation 11 hold, fermionic states must be constrained during initial state preparation to lie in the mutual \(+1\) eigenspace of all loop operators. The generalized superfast encoding (GSE) construction of [4] improves on the original [5] construction by reducing operator weight and providing error-correction properties. The authors observe that because fermionic states lie within the mutual \(+1\) eigenspaces of commuting Pauli loop operators, the loop operators are stabilizers in a quantum stabilizer code [13]. Stabilizer codes allow correction of errors in quantum computations [14], and when a threshold of gate accuracy is present and sufficient quantum resources are available, enable fault-tolerant quantum computing [14, 15, 16]. 
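The relations of Definition 2.1 and Proposition 2.2 can be verified numerically on a few modes. The sketch below is illustrative only: it uses a dense Jordan-Wigner matrix representation of \(a_{j}\) (an assumption made here purely for verification, not the superfast encoding constructed in this paper) and confirms, for example, \(B_{j}=I-2a_{j}^{\dagger}a_{j}\), \(A_{j,k}^{2}=B_{j}^{2}=I\), and equations (9)-(10).

```python
import numpy as np

I2 = np.eye(2)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
lower = np.array([[0, 1], [0, 0]], dtype=complex)   # |0><1|

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

m = 3  # number of fermionic modes in this toy check

def a(j):
    # Jordan-Wigner annihilation operator on m modes (illustrative representation)
    return kron_all([Z] * j + [lower] + [I2] * (m - j - 1))

def c(mu):
    # Majorana operators c_{2j}, c_{2j+1} of Definition 2.1
    j, r = divmod(mu, 2)
    return a(j) + a(j).conj().T if r == 0 else (a(j) - a(j).conj().T) / 1j

def B(j):
    return -1j * c(2 * j) @ c(2 * j + 1)

def A(j, k):
    return -1j * c(2 * j) @ c(2 * k)

Id = np.eye(2 ** m)
print(np.allclose(B(0), Id - 2 * a(0).conj().T @ a(0)))   # B_j = I - 2 a_j^dag a_j
print(np.allclose(A(0, 1) @ A(0, 1), Id), np.allclose(B(0) @ B(0), Id))
print(np.allclose(A(0, 1) @ B(0), -B(0) @ A(0, 1)))       # eq. (9)
print(np.allclose(A(0, 1) @ B(2), B(2) @ A(0, 1)))        # eq. (10)
```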
Currently, the most promising stabilizer codes for this purpose are surface codes [17], due to their low-weight operators, modest qubit connectivity requirements and generous error thresholds [18]. Conceptually, surface-code qubits are connected in two-dimensional planar lattice configurations; prominent current quantum hardware schemes have qubit connectivity which efficiently supports computations on lattices [19, 20]. A stabilizer code can correct arbitrary one-qubit errors if and only if every logical operator has weight at least three. The authors of [4] show that for the generalized superfast encoding, under mild connectivity conditions and degree at least six for every vertex, the edge and vertex operators may be chosen so that all logical operators have weight three or greater, and thus the encoding corrects arbitrary one-qubit errors. This choice results in \(O(d)\) edge and vertex operator weight. The GSE encoding requires that the Hamiltonian interaction graph \(G=(V,E)\) be of even degree \(d(v)\) at each \(v\in V\). If need be, this assumption can be satisfied by augmenting \(G\) with additional edges representing zero-amplitude Hamiltonian interaction terms. The construction of the GSE encoding of [4] is as follows. To each \(v\in V\), assign \(\frac{d(v)}{2}\) qubits. Choose \(d(v)\) mutually anticommuting Pauli operators \(\gamma_{v,1}\ldots\gamma_{v,d(v)}\) with support on those qubits, assigning one to each half-edge incident to \(v\). For example, Figure 1 shows the mapping of \(\gamma_{v,j}\) to half-edges which will later be used for the square lattice, and Figure 4 shows the Pauli operators to which these \(\gamma_{v,j}\) will map. To define the edge and vertex operators, for each edge \(\{j,k\}\) choose an orientation \(\epsilon_{j,k}\in\pm 1\), \(\epsilon_{j,k}=-\epsilon_{k,j}\). Then define \[A_{j,k} =\epsilon_{j,k}\gamma_{j,m_{j,k}}\gamma_{k,n_{j,k}}, \tag{16}\] \[B_{j} =(-i)^{\frac{d(j)}{2}}\prod_{m=1}^{d(j)}\gamma_{j,m}, \tag{17}\] where \(\gamma_{j,m_{j,k}}\) and \(\gamma_{k,n_{j,k}}\) are the half-edge operators corresponding to edge \(\{j,k\}\). The operators \(\gamma_{v,i}\) are called _generalized Majorana operators_. The anticommutativity relations of generalized Majorana operators are leveraged to construct representations of \(A_{jk}\) and \(B_{j}\) with correct commutativity properties. The operators are "generalized" in the sense of having similar formal properties; there is no mapping or correspondence between individual Majorana operators used to construct \(A_{jk}\) and \(B_{j}\) and the generalized operators \(\gamma_{v,i}\) introduced in the next step. ### Generalized superfast encodings as stabilizer codes Under these definitions the loop operators defined in 11 correspond to Pauli operators. Following the stabilizer code formalism, these Pauli operators generate the loop operator subgroup \(\mathcal{L}\) which is a subgroup of the Pauli group \(\mathcal{G}\). Taken as group elements, loop operators are not independent; elements of \(\mathcal{L}\) are generated up to sign by a smaller set of basis loop operators. In the case of an embedded planar graph Figure 1: To construct local fermionic algebra operators, generalized Majorana operators \(\gamma_{v,j}\) for vertex \(v\) must be assigned to the outgoing edges of \(v\). One assignment on a square lattice (degree four) is shown here. Pairs of \(\gamma_{i,j}\) anticommute if they lie on half-edges of the same color, and commute otherwise. 
the plaquette loop operators form a basis; more precisely they form a basis for the interaction graph's first homology group \(H_{1}\) with \(\mathbb{Z}_{2}\) coefficients. The code space for the resulting stabilizer code is defined as the quotient group \(C_{\mathcal{G}}(\mathcal{L})/\mathcal{L}\), that is the group of elements \(C_{\mathcal{G}}(\mathcal{L})\) that commute with all loop operators, modulo the loop operator subgroup \(\mathcal{L}\). The logical operators in the code are defined to be the nontrivial elements of \(C_{\mathcal{G}}(\mathcal{L})/\mathcal{L}\), or equivalently the elements of \(C_{\mathcal{G}}(\mathcal{L})-\mathcal{L}\), taken as equivalence class representatives. By construction, the \(A_{j,k}\) and \(B_{j}\) operators lie in \(C_{\mathcal{G}}(\mathcal{L})\). By a dimension-counting argument (see [4]), they generate \(C_{\mathcal{G}}(\mathcal{L})\), but in general some may be trivial, and they may not all lie in distinct cosets (and represent logical operators). In practice, however, neither of these issues presents difficulties. The only trivial edge or vertex operators are the self-loops, which do not appear in the Hamiltonian interaction graph construction. Furthermore, operators can share a coset only when they form a doubled-edge loop. In this case, one of the operators has weight zero in the construction. Exploiting the error-correcting properties of the GSE presents difficulties in practice. Without modification of the encoding, degree six is the lowest degree for which single-qubit error-correction is possible, as the weight of a \(B_{j}\) operator is at most twice the degree of the vertex, and single-qubit error correction requires that all logical operators have weight at least three. Vertices of degree six lead to significant qubit-connectivity requirements, which, if not satisfied by hardware, will be compensated for with qubit swap operators, increasing the edge and vertex operator weights. Finally, the code distance is determined by the degree of the Hamiltonian graph, the structure of its loop operators, and the choice of \(\gamma_{v,j}\). Being a fermion-to-qubit mapping, recursive error-correction scaling methods for qubit-to-qubit mappings cannot be directly applied to increase the effective code distance. ### Definite-occupancy state preparation by syndrome measurement To perform computations with a stabilizer code it is necessary to produce an encoded initial state. Here, we demonstrate an efficient method for preparing an initial definite-occupancy state, applicable to GSE and other edge and vertex algebra encodings, based on the quiescent state method of [21]. The desired initial state is an encoded state which 1. has the correct orbital occupancies as measured by the \(B_{j}\) operators, 2. lies in the \(+1\) eigenspace of all loop operators. The process is as follows. First, we prepare a state with the correct \(B_{j}\) eigenvalues. Since each \(B_{j}\) is a Pauli operator acting on the qubits assigned to the \(j\)-th site orbital, and the assignments are all disjoint, we can prepare a mutual \(\pm 1\) eigenstate for all \(B_{j}\) operators with a depth-one circuit. Next, we measure the plaquette loop operators. These measurements do not alter subsequent \(B_{j}\) operator measurements because loop operators commute with \(B_{j}\) operators. 
The plaquette measurements collapse the state to a mutual loop operator eigenstate, since the plaquette operators generate the remaining loop operators, but usually not the mutual \(+1\) eigenstate that is needed. Undesired \(-1\) measurement values can be corrected without additional quantum circuitry by a change of operator basis, as shown in the following proposition: **Proposition 3.1**.: _In the local edge-and-vertex algebra for the graph \(G\) representing a square or toroidal lattice, suppose the plaquette loop operators \(P\) are measured, and the set \(D\) produce some defective (eigenvalue \(-1\)) syndrome measurements. The measured state is the \(+1\) mutual-eigenstate for an edge-and-vertex algebra given by replacing some of the edge operators \(A_{jk}\) with \(-A_{jk}\)._ Figure 2: Code distances and operator weights for BKSF, MLSE, and GSE. First two rows taken from [6]. Proof of Proposition 3.1.: Let \(\psi\) be the state that results from measurement, and let \(D\) be the set of plaquette or lattice boundary operators with defective measurements. Then \(|D|\) is even, since the boundary operator measurement is the product of the plaquette operator measurements. Choose a pair of loop operators \(d_{1},d_{2}\) in \(D\) and a sequence \(d_{1}=p_{1},p_{2},\ldots,p_{k}=d_{k}\) of pairwise edge-adjacent loop operators, such that \(p_{i}\) and \(p_{i+1}\) share edge operator \(e_{i}\). Replace \(e_{1},\ldots,e_{k-1}\) in the loop operator algebra with \(-e_{1},\ldots,-e_{k-1}\) respectively. In the new algebra, the defective loop operator set \(D^{\prime}=D-\{d_{1},d_{2}\}\). Repeat until \(D=\emptyset\). In the language of topology, we have reversed the signs on a \(1\)-cochain of edges, the co-boundary of which is the set of plaquettes with defective loop operator measurements. ## 4 Realizing the Hubbard model Here we describe the Hubbard model on the two-dimensional planar and toroidal lattices, along with its spinless variant. We express their Hamiltonians in the edge-and-vertex algebra, and construct a GSE encoding for the spinless case. Let \(G=(V,E)\) be the \(M\times N\) grid graph, which is the direct graph product of path graphs \(P_{M}\) and \(P_{N}\) of size \(M\) and \(N\) respectively. Then \[V=\{(i,j)|0\leq i\leq M-1,0\leq j\leq N-1\},\] \[E=\{((i,j),(i+1,j))|0\leq i\leq M-2,0\leq j\leq N-1\}\bigcup\{((i,j),(i,j+1))| 0\leq i\leq M-1,0\leq j\leq N-2\}.\] The \(M=N=4\) case is shown in the first diagram in Figure 3. The Hubbard Hamiltonian is as follows: \[H=\sum_{(j,k)\in E,\sigma\in\{\uparrow,\downarrow\}}-t(a^{\dagger}_{j,\sigma} a_{k,\sigma}+a^{\dagger}_{k,\sigma}a_{j,\sigma})+U\sum_{j\in V}a^{\dagger}_{j, \uparrow}a_{j,\uparrow}a^{\dagger}_{j,\downarrow}a_{j,\downarrow}.\] Using the equalities in Proposition 2.2, the Hubbard Hamiltonian can be rewritten in terms of the edge and vertex operators as \[H_{H}=\frac{it}{2}\sum_{(j,k)\in E,\sigma\in\{\uparrow,\downarrow\}}(B_{(j, \sigma)}A_{(j,\sigma),(k,\sigma)}+A_{(j,\sigma),(k,\sigma)}B_{(k,\sigma)})+U \sum_{j\in V}\frac{(1-B_{(j,\uparrow)})(1-B_{(j,\downarrow)})}{4}\] For the spinless version, all fermions are constrained to have the same spin. A major difference between the two models is that for the Hubbard model with spin the Pauli exclusion principle permits two electrons Figure 3: Variants of the \(4\times 4\) Hubbard lattice. Each vertex represents a pair of spin orbitals, or a single spin orbital in the spinless case. Edges represent nontrivial Hamiltonian interactions among the orbitals. 
Adjacent-site, same spin interactions, along with same-site different-spin interactions, may be performed locally in the fermionic operator algebra using \(A_{j,k}\) and \(B_{j}\) operators not involving any other spin orbital. with opposite spin to interact when at the same site. For the spinless case, the Pauli exclusion principle forbids such interactions, and the system becomes a system of non-interacting fermions which can be exactly solved using classical resources. To avoid this situation, the spinless model is assumed to include off-site nearest-neighbor repulsion terms in the Hamiltonian: \[H_{\uparrow}=\sum_{(j,k)\in E}-t(a^{\dagger}_{j,\uparrow}a_{k,\uparrow}+a^{ \dagger}_{k,\uparrow}a_{j,\uparrow})+U\sum_{j,k\in E}a^{\dagger}_{j,\uparrow} a_{j,\uparrow}a^{\dagger}_{k,\uparrow}a_{k,\uparrow},\] This Hamiltonian is expressed in the edge-and-vertex algebra as follows: \[H_{SLH}=\frac{it}{2}\sum_{(j,k)\in E}(B_{(j,\uparrow)}A_{(j,\uparrow),(k, \uparrow)}+A_{(j,\uparrow),(k,\uparrow)}B_{(k,\uparrow)})+U\sum_{(j,k)\in E }\frac{(1-B_{(j,\uparrow)})(1-B_{(k,\uparrow)})}{4}\] Replacing the graph \(G\) with another graph gives a Hamiltonian with the same summation, but summed over different edges. Figure 3 shows the toroidal lattice, as well as a planar lattice curved arcs added to bring the degree of each vertex to four. In the second case, the curved edges are assumed to have interaction weight zero, and \(M\) and \(N\) to be even. Let the Hamiltonians obtained be denoted \(H_{SLH,T}\) and \(H_{SLH,4}\) respectively. We work with the following GSE encoding of \(H_{SLH,T}\) and \(H_{SLH,4}\) **Definition 4.1**.: _Let \(G\) be a toroidal Hubbard lattice of size at least \(2\times 2\), or a planar Hubbard lattice with even numbers of rows and columns, and doubled edges as shown in Figure 3._ _For each vertex \(j\) of \(G\), assign to \(\gamma_{j,1},\ldots,\gamma_{j,4}\) the values \((XY,YY,IX,IZ)\) respectively, using the mapping of \(\gamma_{j,k}\) to half-edges shown in Figure 1, resulting in the assignment of qubit operators to half-edges shown in Figure 4.1 The resulting interior loop operator \(IYXZYXZI\) has weight six, with weight three boundary loop operators \(-YXZI\), \(-IYYX\), \(-IYXZ\), and \(-XZZI\)._ Footnote 1: Generalized Majorana operators for degree four graphs are not given in [4], but there is an obvious way to extend conventions given for larger-degree graphs to the degree four case, which does give the conventions used in this paper. In the conventions of [4], some errors which can propagate from single-qubit errors during swap gates are not detectable; our choices eliminate this possibility. Since there is no canonical ordering of sites, in contrast to the Jordan-Wigner encoding, ordering conventions for qubit-space representations of edge and vertex operators are needed. The ordering conventions of Figure 4 will be used in the remainder of this paper: horizontal edges vertex-ordered from left to right, vertical edges up to down,loop operators as shown in Figure 4. We assume arbitrary qubit connectivity until Section 6, where a reduced connectivity implementation will be described. **Proposition 4.2**.: _The choices given in Definition 4.1 produce a single-qubit error-detecting code._ Proof.: Single-qubit Pauli errors are, by definition, weight-one Pauli operators. Detectable errors are Pauli operators which are not logical operators; these fail to commute with at least one syndrome measurement and thus produce an error detection. 
For single-qubit error-detection we must therefore show that every logical operator has weight at least two. In our code, every logical operator with support on two or more vertices has weight at least two. Suppose \(R\) is a logical operator supported on a single vertex \(j\). Then \(R\) lies in the two-qubit Pauli group generated by the generalized Majorana operators \(\gamma_{j,k}\). Figure 1 shows the assignment of \(\gamma_{j,k}\) to portions of plaquette loop operators at a vertex. For each such plaquette loop operator \(L\) incident to vertex \(j\), \(R\) must contain either both of the \(\gamma_{j,k}\) in \(L\) incident to vertex \(j\), or neither of them. Since this must hold for all four incident plaquette loops (or all three incident plaquette loops on a planar-lattice boundary vertex), either \(R=I\), or \(R=\pm B_{j}\). Since \(B_{j}\) has weight two in our code, all logical operators have weight at least two. Thus single-qubit errors are detectable. ## 5 Fault-detecting circuitry for Hamiltonian simulation As the BKSF encoding allows for error detection, an important remaining question is whether these error detection properties can be extended into scenarios where we apply gates (i.e. simulations) on the data within the code. We show in the following proposition that single errors within the syndrome measurements will never yield an undetectable error. **Proposition 5.1**.: _With the choices in Definition 4.1, a single-qubit error during syndrome measurement cannot produce an undetectable fault._ Proof.: We first consider syndrome measurement for the non-boundary loop operator \(IYXZYXZI\). A syndrome measurement circuit is shown in Figure 5. It uses Pauli-controlled-Pauli gates; the rule for propagating Pauli errors through such gates is shown in Figure 6. The error detection behavior of this measurement is as follows. A single-qubit error on any of the vertex qubits will trigger one of the following consequences: 1. The error commutes with all remaining gates, producing a one-qubit error on one of the vertex qubits, which is detectable, though this syndrome measurement does not detect it. 2. The error fails to commute with one Pauli-controlled-Pauli gate, propagating an \(X\) error to the ancilla qubit, which commutes with all remaining gates and is detected by the \(Z\) measurement. A single-qubit error on the ancilla propagates differently. Here, an \(X\) error is detected by ancilla measurement, but a \(Z\) error is not. A \(Z\) error propagates to one vertex qubit error at each two-qubit gate which follows the error on the ancilla. In order to retain detectability, this propagated product of vertex-qubit errors must be detectable or trivial. For example, a \(Z\)-error occurring immediately after the ancilla qubit is initialized propagates an \(IYXZYXZI\) operator on the vertex qubits. This is acceptable since \(IYXZYXZI\) is a stabilizer and does not affect the encoded state. An error preceding, say, the second \(Y\)-controlled Pauli, on the other hand, propagates a nontrivial \(IIIIYXZI\) vertex-qubit error which must be detected later. For a weight-six stabilizer there are five ancilla \(Z\)-error propagations to consider. All of them produce nontrivial syndrome measurements, as is shown in Figure 9, values for which are computed using the single-qubit syndromes shown in Figure 8. Therefore none of these errors are logical operations or stabilizers. Figure 4: GSE qubit operator assignments for half-edges on spinless Hubbard Hamiltonian interaction graphs. 
These choices produce a weight-six loop operator for spinless Hubbard without boundary, and weight-three loop operators for boundary loops. The qubit operators representing the generalized Majorana operators from Figure 1 are illustrated in the first image, with doubled edges for the boundary cases of the planar lattice indicated in red. Each operator acts on the pair of qubits assigned to its adjacent vertex. The orientation for each \(A_{jk}\) operator has positive \(\epsilon_{jk}\) when the orientation aligns with the arrow in the second image. From this image, one may compute that the horizontal \(A_{jk}\) operators are of the form \(\pm IZXY\), and the vertical, \(\pm IXYY\). The \(A_{jk}\) corresponding to the red doubled edges are of the forms \(\pm XYXY\), \(\pm YYYY\), \(\pm IZIZ\), and \(\pm IXIX\). The loop operator for the central plaquette, with qubits ordered by vertex as in Figure 1, is \(i^{4}A_{12}A_{24}A_{43}A_{31}=IYXZYXZI\). The loop operators for boundary bigons are \(i^{2}A_{12}A^{\prime}_{21}=-YXZI\), \(i^{2}A_{24}A^{\prime}_{42}=-IYYX\), \(i^{2}A_{43}A^{\prime}_{34}=-IYXZ\), and \(i^{2}A_{31}A^{\prime}_{13}=-XZZI\). Figure 5: Syndrome measurement for the weight six loop operator \(IYXZYXZI\). Figure 6: Commutation rules for general Pauli-controlled Pauli operators. Here, \(P\), \(Q\), \(R\), and \(S\) are Pauli gates. It remains to consider the case of weight three loop operators on boundary edges. Here again, errors on the vertex qubits propagate to single-vertex-qubit errors, and errors on the ancilla qubit are suffixes of the measured loop operator. For a weight three operator, every such suffix is either a loop operator, a weight one operator, or an operator which differs from a weight one operator by a loop operator, and thus logically equivalent to a weight one error. The first of these is trivial, the other two are detectable. The lack of undetectable errors is to some extent the result of fortunate choices for the \(\gamma_{i,j}\) and their edge mappings. However, if undetectable errors do occur, it may be possible to eliminate them by reordering the Pauli-controlled-Pauli gates, since these commute with each other, but not with all errors. **Proposition 5.2**.: _With the choices made in Definition 4.1, a single-qubit error made during measurement of the \(B_{j}\) operators cannot produce an undetectable fault._ Note that this statement is only meaningful when operations are performed after the \(B_{j}\)-operator measurements. Also, in most circumstances fermion number parity considerations allow detection of a single incorrect \(B_{j}\) operator measurement. Proof of Proposition 5.2.: See Figure 7. The only one-qubit error which can propagate to an error on two vertex qubits is a \(Z\)-error on the ancilla qubit prior to the first gate. Since at this point the ancilla is in state \(|0\rangle\), \(Z\) acts trivially. ### Fault-detecting evolution Fault-detection for evolution operators is more complicated. Since the edge and vertex operators are logical operators, evolutions of edge-and-vertex algebra elements are logical operators as well. Detectable errors which occur prior to a logical evolution produce the same syndrome measurements as if they had occurred after evolution. However, errors may also occur during evolution, and must be detected. 
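The detectability arguments in Propositions 4.2-5.2 ultimately reduce to commutation checks between Pauli strings, which are straightforward to automate. The following sketch (our illustration, not part of the original analysis) shows the basic check, using only the central plaquette stabilizer \(IYXZYXZI\) quoted above; appending the neighbouring plaquette and boundary-bigon loop operators to the stabilizer list would produce syndrome vectors analogous to those tabulated in Figure 8.

```python
# Pauli strings over the alphabet {I, X, Y, Z}.  Two Pauli strings commute iff they
# differ on an even number of positions where both are non-identity.
def commutes(p, q):
    return sum(a != "I" and b != "I" and a != b for a, b in zip(p, q)) % 2 == 0

def syndrome(error, stabilizers):
    """One bit per stabilizer: 1 where the error flips that syndrome measurement."""
    return tuple(int(not commutes(error, s)) for s in stabilizers)

# Only the central plaquette loop operator is listed here; in the full code the
# remaining loop operators acting on these eight qubits would be appended.
stabilizers = ["IYXZYXZI"]

for qubit in range(8):
    for p in "XYZ":
        err = "I" * qubit + p + "I" * (7 - qubit)
        print(err, "->", syndrome(err, stabilizers))
```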
The most straightforward method to evolve a Pauli operator \(P\) is to find a Clifford operator \(U\) such that \(U\circ P\circ U^{\dagger}=ZII\ldots\), then implement \(e^{-iPt}\) as \(U^{\dagger}\circ(e^{-iZt}I\ldots I)\circ U\). Conjugation by \(U\) propagates the Figure 8: Single-qubit error syndromes on the vertex qubits as encoded in Figure 4 Figure 7: Measurement circuit for a \(B_{j}\) operator \(ZI\ldots I\) evolution to a \(P\) evolution. Such a circuit cannot avoid propagating a \(ZII\) error that occurs during \(ZII\) evolution to a logical operator. This is an issue which is encountered, and solved, in the context of error-correction; one distills high-fidelity magic states [22] and uses them to construct a circuit approximating \(e^{-iZt}\). However, in a NISQ context, the required resource overhead for magic states is undesirable. The cheapest solutions are to use a native hardware gate that evolves two qubits simultaneously (e.g. \(e^{-iZZt}\)), if one is available, or, just fail to detect this particular error. If undetectable one-qubit errors can be limited to a small number of cases it may make sense to mitigate them using probabilistic methods [23]. **Proposition 5.3**.: _With the choices in Definition 4.1, if sufficient two-qubit native hardware gates are available, evolution of local fermionic algebra operators can be performed so that one-qubit errors during evolution \(e^{-iPt}\) do not propagate to undetectable errors. If only \(CNOT\) and arbitrary one-qubit gates are available, a propagated undetectable one-qubit error takes the form of the operator \(p\) being evolved, possibly along with time-reversal of the evolution._ Proof.: The choices in Definition 4.1 give the two-vertex logical operators shown in Figure 10. For each given pair of interacting vertices, the edge orientation and whether the edge lies on the boundary together determine the logical operators supported on those vertices. As the figure enumerates, among the \(4^{4}=256\) possible two-vertex Pauli operators, considered up to scalar, a pair of vertices which bound a single edge admit seven logical operators (plus the trivial operator); a pair bounding a double edge admits fifteen. For example, an \(XIXI\) error is a logical error on a (doubled) top edge, but nowhere else. Figure 11a shows an evolution circuit for a four-qubit operator. The figure assumes that the fourth qubit, the bottom one, is acted upon by a nontrivial Pauli gate. If this is not the case, a smaller circuit can be constructed similarly evolving the second or third qubit instead, using a controlled \(Q_{2}\) or \(Q_{3}\) gate on the second or third strand respectively instead of controlled \(Q_{4}\) on the fourth strand. In each case \(P_{i}\) and \(Q_{i}\) anticommute, but \(Q_{i}\) is otherwise arbitrary. The possibilities for nontrivial propagations of single-qubit errors are summarized in Figure 12. There, an error is a Pauli gate \(Q\) which anticommutes with some gate \(P_{i}\) in the evolved Pauli operator. We will show that the following modifications to 11a allow us to construct evolution gates which do not propagate undetectable errors, aside from the Pauli operator that is being evolved: 1. Make one of two choices of Pauli gate for the controlled Pauli operators, 2. Add an ancilla and flag one of the qubits, 3. Reverse the order of the qubits (equivalent to reflecting Figure 11 about a horizontal axis). Figure 9: Ancilla error syndromes for the syndrome measurement in Figure 5. 
Figure 10: Logical operators supported on a pair of adjacent vertices. (a) Every pair of vertices which bound an edge supports seven logical operators, the values of which depend on the edge orientation: \begin{tabular}{|l|l|l|} \hline & Horizontal edge & Vertical edge \\ \hline \(A_{jk}\) & \(IZXY\) & \(IXYY\) \\ \(A_{jk}B_{k}\) & \(IZYI\) & \(IXXI\) \\ \(B_{j}A_{jk}\) & \(ZXXY\) & \(ZZYY\) \\ \(B_{j}A_{jk}B_{k}\) & \(ZXYI\) & \(ZZXI\) \\ \(B_{j}\) & \(ZYII\) & \(ZYII\) \\ \(B_{k}\) & \(IIZY\) & \(IIZY\) \\ \(B_{j}B_{k}\) & \(ZYZY\) & \(ZYZY\) \\ \hline \end{tabular} (b) A pair of vertices joined by a doubled (boundary) edge supports fifteen logical operators; the original figure tabulates these separately for the top, bottom, left, and right boundary edge types (the corresponding doubled-edge operators \(A^{\prime}_{jk}\) are of the forms \(\pm YYYY\), \(\pm XYXY\), \(\pm IZIZ\), and \(\pm IXIX\)). 
Figure 11: Quantum circuits for evolution of \(e^{-iPt}\), where \(P=P_{1}P_{2}P_{3}P_{4}\). \(Q_{4}\) is any Pauli gate not commuting with \(P_{4}\). Figure 12: Propagation possibilities for a Pauli gate error of type \(Q\) during evolution of a weight two or greater four-qubit operator of the type shown in Figure 10(a). Errors which occur prior to or after evolution, possibly after application of far commutativity relations, are not listed, since evolution of a logical operator is a logical operator and does not change syndrome measurements. For ease of discussion, we call operators with reversed numbering _reflected operators_ and say that the circuits that evaluate them produce reflected errors when evaluated. For example, the operator \(ZXZX\) corresponds to reflected operator \(XZXZ\), and the reflected \(IQIQ_{4}\)-type error \(IYIY\) that can occur when \(XZXZ\) is evolved is a \(YIYI\) error in the unreflected code space. To organize the discussion, we consider it an \(IQIQ_{4}\) error on a reflected operator, rather than a \(Q_{4}IQI\) error. We proceed by considering the error-types shown in Figure 12. Some can be easily seen to be detectable, with no ancilla, whether they occur in an unreflected or reflected evolution. 1. Errors of type \(QIIQ_{4}\) or \(P_{1}IIQ\), as no logical operator or its reflection is of this form, 2. One-qubit propagated errors, as well as those which differ from the evolved logical operator or evolved reflected operator by a single qubit, namely \(P_{1}QII\), \(P_{1}P_{2}QI\), or \(P_{1}P_{2}P_{3}Q\), 3. Errors of type \(QQ_{2}II\). These errors can only occur when evolving operators with support on the first two qubits. The only such operators are \(ZYII\) and reflected operator \(YZII\), but \(Q_{2}\neq Y\) (and for the reflection, \(Q_{2}\neq Z\)) by construction, so \(QQ_{2}II\) is not a logical operator. This leaves errors of types \(QIQ_{3}I\), \(IQQ_{3}I\), \(P_{1}IQI\), \(IQIQ_{4}\), \(IIQQ_{4}\), and \(P_{1}P_{2}IQ\) to be dealt with, as well as the errors \(P_{1}P_{2}II\), \(P_{1}P_{2}P_{3}I\) and \(P_{1}P_{2}P_{3}P_{4}\); these last three are logical errors corresponding to evolved operators and reflected operators, which the proposition does not require us to correct. Errors of the first three types (\(QIQ_{3}I\), \(IQQ_{3}I\), and \(P_{1}IQI\)) occur exclusively on operators which act nontrivially on the third qubit, and trivially on the fourth. Errors of the next three types (\(IQIQ_{4}\), \(IIQQ_{4}\), and \(P_{1}P_{2}IQ\)) occur on operators which act nontrivially on the fourth qubit. We first account for the \(QIQ_{3}I\) and \(IQIQ_{4}\) errors. Weight-two operators of the forms \(AIBI\) and \(IAIB\) only occur on the boundary edges and there is exactly one for each edge type. Thus a \(QIQ_{3}I\) or \(IQIQ_{4}\) error is only a logical error when it occurs on a boundary edge and is logical for that boundary edge type. Such operators are never evolved, as the construction gives them weight zero, but they are still logical operators, equivalent to the edge operators for which they are doubled edges, which we must detect as errors. \(QIQ_{3}I\) errors occur during \(P_{1}P_{2}P_{3}I\) gates on top and left boundary edges, and in reflection on the bottom and right boundary edges, and there is only one logical error of the form \(AIBI\) supported on each boundary edge. 
If \(P_{3}=B\), neither choice of \(Q_{3}\neq P_{3}\) makes a \(QIQ_{3}I\) error a logical operator, otherwise there is still a choice of \(Q_{3}\) such that \(QIQ_{3}I\) is not a logical operator. In similar fashion, for each operator \(P_{1}P_{2}P_{3}P_{4}\), there is at least one choice of \(Q_{4}\) such that an \(IQIQ_{4}\) error, or reflected error, is not a logical operator. We next consider \(IQQ_{3}I\) errors, which occur only in \(P_{1}P_{2}P_{3}I\) gates. Exactly one operator of type \(IABI\) occurs for each edge type, \(IZYI\) for horizontal edges (boundary or not), with reflected operator \(IYZI\), and \(IXXI\) for vertical edges. If the choice of \(Q_{3}\) was not previously forced, we can avoid \(IQQ_{3}I\) logical errors by choosing it now. Among non-reflected operators, the only instance in which logical \(IQQ_{3}I\) errors can't be avoided in this way is the top edge operator \(YXZI\), for which we chose \(Q_{3}=Y\) already to avoid \(XIXI\) logical errors. Propagated \(IZYI\) errors can be detected using a flag qubit in the manner of [24] to detect Figure 13: A \(P_{1}P_{2}P_{3}I\) operator with a flag on the second qubit to detect \(IQQ_{4}I\) errors. Here the \(Z\)-measurement on the last qubit discretizes and detects \(Q\) errors on the second qubit which could otherwise propagate to undetected \(IQQ_{3}I\) errors. errors on the second qubit, as shown in Figure 13. For vertical edges the story is similar; for \(XZZI\) we require \(Q_{3}=X\) in order to avoid logical \(YIYI\) errors, and can flag the second qubit to detect the \(IQQ_{3}I\) error. Reflected operators can be treated similarly; the bottom edge reflected operator \(XZYI\) requires \(Q_{3}=Z\) to avoid \(XIXI\) reflected errors, and right edge reflected operator \(XYYI\) forces \(Q_{3}=X\). The \(IYZI\) and \(IXXI\) reflected operator errors can both be detected with a flag on the second qubit. Next we consider \(P_{1}IQI\) errors. These are only possible when two logical operators with support on the first three qubits have the same first qubit. This occurs in two unreflected cases: \(XZZI\) can produce an \(XIXI\) error, and \(YXZI\) can produce a \(YIYI\) error. Unfortunately it is not possible to flag the third qubit as the flag operations don't commute with the evolution operator. Thus we evolve these operators in the reflected code space (also reflecting the flag qubits we added previously). The reflected \(P_{1}P_{2}P_{3}I\) operator \(IP^{\prime}_{2}P^{\prime}_{3}P^{\prime}_{4}\) has the wrong qubit support to produce \(P_{1}IQI\) logical errors. Next, the unique \(IIQQ_{4}\) unreflected error \(IIZY\) and reflected error \(IIYZ\) can be avoided by choosing \(Q_{4}\) to avoid the unique \(IIQQ_{4}\) logical error \(IIZY\), except when \(Q_{4}\) has already been chosen unfortunately, i.e as \(Q_{4}=Y\) in the non-reflected case, \(Q_{4}=Z\) in the reflected case. If a good choice of \(Q_{4}\) is unavailable, we can flag the third qubit to detect the \(Q\) error. Previously, we added ancilla qubits to operators where the fourth qubit was trivial, which can't produce \(IIQQ_{4}\) errors, and to \(YXZI\) and \(XZZI\), which we are evolving in reflection. Neither of these can produce an \(IIZY\) error. Thus we have added at most one ancilla to each evolution at this point. 
Errors of the form \(P_{1}P_{2}IQ\) are only logical errors, according to Figure 10, on the boundary and must propagate an evolved an operator \(P_{1}P_{2}P_{3}P_{4}\), \(P_{4}\neq I\), which differs from \(P_{1}P_{2}IQ\) by a \(B_{k}\) operator. However, for no such operator is it the case that both \(P_{1}P_{2}IQ\) and \(Q^{\prime}IP_{3}P_{4}\) are logical operators. If it were otherwise, since \(Q^{\prime}IP_{3}P_{4}\) differs from \(P_{1}P_{2}P_{3}P_{4}\) by \(B_{j}\), \(Q^{\prime}IIQ\) would appear in Figure 10. Therefore, evolution of at most one of \(P_{1}P_{2}P_{3}P_{4}\) or its reflection can produce logical \(P_{1}P_{2}IQ\) errors. Any operator which has not already been reflected may be reflected to avoid \(P_{1}P_{2}IQ\) errors. It remains to show that we have not reflected a \(P_{1}P_{2}P_{3}I\) operator which can produce a \(P_{1}IQI\) logical error to produce a reflected operator \(P^{\prime}_{1}P^{\prime}_{2}P^{\prime}_{3}P^{\prime}_{4}\) which can produce a \(P^{\prime}_{1}P^{\prime}_{2}IQ^{\prime}\) logical error. In this case, \(P^{\prime}_{1}=I\) and then \(P_{1}IQI\) and \(IP^{\prime}_{2}IQ\) are reflections of each other, since each boundary edge only supports a single logical operator among both of these shapes. But then \(Q=P^{\prime}_{2}\), which is impossible since \(P_{3}=P^{\prime}_{2}\) and \(Q\neq P_{3}\) by construction. Thus every logical operator \(P_{1}P_{2}P_{3}P_{4}\) may be evolved or reflection-evolved, at least one of the two, possibly with an ancilla qubit, so that single-qubit errors do not propagate to undetectable logical errors, except for the case when \(P_{1}P_{2}P_{3}P_{4}\) is propagated as an error. The evolved operator \(P_{1}P_{2}P_{3}P_{4}\) is propagated as an error when a single-qubit \(Q=P_{i}\) error occurs just prior to or just after evolution \(e^{-iP_{i}t}\) on the evolved strand. Since the evolution operator is designed to convert a \(P_{i}\) operator into a logical operator, the standard Pauli operator evolution circuit cannot detect this error. In addition to these \(P_{i}\) errors, it is not possible to detect an error in \(t\). \(P_{1}P_{2}P_{3}P_{4}\) errors can be eliminated, in our error model, if a native hardware gate \(e^{-iP_{i-1}P_{i}t}\) is available, giving the circuit shown in Figure (b)b. This gate propagates one-qubit errors to one-qubit errors, possibly replacing \(t\) with \(-t\) in the evolution in the process. The circuit in Figure (b)b has some different error propagations than those listed in Figure 12; these are summarized in Figure 14. In particular, the undetectable error types are removed, Figure 14: Propagation possibilities for a Pauli gate error of type \(Q\) during evolution of a weight two or greater four-qubit operator of the type shown in Figure (b)b. Only those types of error propagations which differ from those in Figure 12 are shown. and no new error types are introduced. Thus, with two-qubit native hardware Pauli evolutions, one-qubit errors are detectable. ## 6 Error-detection under reduced connectivity As written, the above circuits can be implemented by creating one ancilla for each plaquette of the (torus-embedded) Hamiltonian interaction graph. The assumed connectivity is shown in Figure 15. In particular, each ancilla has connections to each of the eight qubits assigned to the four corners of the plaquette. The two qubits within each vertex are connected, and for each edge, one of the four qubits in the edge is connected with all three others. 
This produces ancillas with degree eight connectivity and vertex qubits with degree seven or ten connectivity. These connectivity assumptions are generous and will not be satisfied by some NISQ hardware. **Proposition 6.1**.: _The connectivity requirements for performing fault-tolerant syndrome measurement (Figure 5), evolution (e.g. Figure 11, Figure 13), and measurement (Figure 7) circuits, as shown in Figure 15, can be reduced to the requirements shown in Figure 16 while still detecting arbitrary one-qubit errors, at the cost of some additional swaps which produce only detectable errors._ Proof.: The standard swap gate circuit (composed of three non-commuting controlled-not operations) propagates a one-qubit \(Q\) error to a \(QQ\) errors. The code has been chosen so that no \(QQ\) error is a logical error. Figure 15: Full qubit connectivity requirements for the eight vertex qubits of a plaquette, and the nine ancilla qubits which interact with them. Ancillas are orange. To detect arbitrary one-qubit errors it suffices to show that \(QQ\) errors produced by swaps performed during an evolution or syndrome measurement does not propagate to logical errors. For syndrome measurements, a reduced connectivity version of Figure 5 is shown in Figure 19. The only swaps needed are between qubits on the same vertex. Each \(QQ\) error produced by such swaps propagates to a \(QQ\) error, along with an \(X\) error on the flag qubit, at the time of syndrome measurement. Such errors are detectable. Evolution circuits such as those in Figure 11 and Figure 13, under reduced connectivity, have connectivity restrictions which depend on the orientation of evolved sites within the lattice. Figure 17 shows the qubit connectivities for various operators. The desired permutations can be accomplished with swaps of two forms: 1. Swap an ancilla qubit with an adjacent vertex qubit, 2. Swap the two qubits of a single vertex to or from their original positions. A reduced-connectivity version of the evolution circuit of Figure 11b, for vertical edges, is shown in Figure 18. A vertex-vertex swap error propagates to either a \(QQ\) error on a paired-vertex qubit pair, or a \(QQIQ_{4}\) error. A paired-vertex \(QQ\) error is never a logical error by the code's construction. Figure 10 shows that \(QQIQ_{4}\) can only be a logical error if it is a reflected \(YYIX\) error on a top edge or an unreflected \(ZZIX\) error on a bottom edge. The last (respectively first) gate must then anticommute with \(Q=Y\) (respectively \(Q=Z\)), but no such gates occur for top (respectively bottom) edges in the table. Thus, even in the presence of swaps, errors are detectable. Figure 16: Reduced qubit connectivity requirements for syndrome measurement and evolution. Ancillas are orange. For horizontal edges the initial qubit ordering will differ slightly. The circuit in Figure 18 may be modified to a horizontal edge circuit by conjugating with a \((1,2)\) swap. The conjugating swaps can propagate a vertex-vertex swap error, which propagates to a detectable \(QQ\) error. To reduce the degree of the interaction graph to three, each ancilla may be replaced with a pair of qubits connected by an edge, to form a hexagonal lattice. This increases the qubit cost of representing a fermion site from roughly three qubits per site to four. 
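The claim that a swap propagates a single-qubit fault to a (detectable) two-qubit fault can be checked directly at the matrix level. The sketch below is our own illustration, assuming the usual three-CNOT decomposition of the swap gate; it inserts an \(X\) error between the first and second CNOTs and identifies the residual error that emerges after the swap completes.

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
PAULI = {"I": I2, "X": X, "Y": Y, "Z": Z}

def cnot(control, target):
    """Two-qubit CNOT as a 4x4 matrix; qubit 0 is the left tensor factor."""
    out = np.zeros((4, 4))
    for a, b in product(range(2), repeat=2):
        bits = [a, b]
        bits[target] ^= bits[control]
        out[bits[0] * 2 + bits[1], a * 2 + b] = 1.0
    return out

def pauli_label(op):
    """Name the two-qubit Pauli string proportional to op (up to phase), if any."""
    for p, q in product("IXYZ", repeat=2):
        if abs(np.trace(np.kron(PAULI[p], PAULI[q]).conj().T @ op)) / 4 > 0.99:
            return p + q
    return "?"

SWAP = cnot(0, 1) @ cnot(1, 0) @ cnot(0, 1)
# An X error striking qubit 0 between the first and second CNOTs of the swap:
noisy = cnot(0, 1) @ cnot(1, 0) @ np.kron(X, I2) @ cnot(0, 1)
residual = noisy @ SWAP.conj().T
print("residual error after the swap:", pauli_label(residual))   # prints XX
```

The code has been chosen so that no such \(QQ\) error is a logical operator, so these propagated faults remain detectable.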
## 7 Resource and performance bounds for a VQE circuit In order to establish that there is a performance regime in which the GSE's error detection properties are useful, in this section we derive parameterized circuits for a Variational Quantum Eigensolver [25, 26, 27]. Our ansatz states will be constructed from product states using parameterized Hamiltonian Variational Ansatz (HVA) circuits. We choose the HVA method because it has an efficient implementation in the edge and vertex algebra, namely, we prepare an initial state and then for parameters \(U_{j},U_{jk},\hat{U}_{jk}\), we evolve \[\prod e^{-iU_{jk}iB_{(j,\uparrow)}A_{(j,\uparrow),(k,\uparrow)}}\prod e^{-iU_ {jk}iA_{(j,\uparrow),(k,\uparrow)}B_{(k,\uparrow)}}\prod e^{iU_{j}B_{(j, \uparrow)}}\prod e^{-i\hat{U}_{jk}B_{(j,\uparrow)}B_{(k,\uparrow)}},\] with the products taken over site orbital indices \(j\) and positively oriented edges \(jk\). Circuit depth and two-qubit gate costs are computed as follows. To prepare the initial definite-occupancy state, syndrome measurements for each square plaquette and each bigon plaquette on the boundary must be performed. The circuit in Figure 19 performs a syndrome measurement by interacting a plaquette's vertex qubits with the ancilla lying in its center, using a northwest, northeast, southwest, southeast sweep. For each plaquette edge, the plaquette vertex-qubit interactions occur either both before or both after the vertex-interactions for the plaquette sharing an edge boundary. Thus, syndrome measurements commute and may be performed simultaneously. Figure 19 shows that a loop operator syndrome measurement requires at most ten two-qubit gates and a \(Z\) measurement. In total, preparation of a definite-occupancy state requires at most \((N-1)(M-1)*10\) two-qubit gates, circuit depth at most eight, and \((N-1)(M-1)\)\(Z\)-measurements. The evolution operators do not all commute, however the \(e^{-iU_{j}B_{(j,\uparrow)}}\) commute and have disjoint support. These may all be performed in a single layer. Figure 17: Qubits used for evolving local fermionic algebra operators and syndrome measurements in the reduced connectivity. Ancillas are orange. The qubit connectivity for various operators is as follows: 1. horizontal \(A_{jk}\): green – left image, 2. vertical \(A_{jk}\): black — left image, 3. \(B_{j}\): a horizontal edge — right image, 4. \(B_{j}\) measurement: each of the ancilla’s four arms — either image, 5. syndrome measurement: all edges — right image. Labelled qubit indices in the left image illustrate our conventional written order of qubits in horizontal and vertical \(A_{jk}\) operators. The right image shows the the qubit order for loop operators. Figure 19: Evaluation circuit for an \(IYXZYXZI\) syndrome measurement under reduced connectivity. The physical connectivity constraints are as shown in Figure 17. Figure 18: Quantum circuits for evolution of \(e^{-iPt}\), where \(P=P_{1}P_{2}P_{3}P_{4}\), with swaps, under reduced connectivity. The circuit shown is for a vertical edge operator. For a horizontal edge operator, qubits 1 and 2 are swapped in the connectivity graph; we assume this is accomplished with an additional pair of swaps on qubits 1 and 2 at the beginning and end of the circuit. Single-qubit errors during swaps propagate according to the slice labels at the top of the figure: to an \(IIQQ\) error at \(A\), a single-qubit vertex error plus an ancilla error at \(B\), and to a \(QQII\) or \(QQIQ_{4}\) error at \(C\). 
The operators \(e^{-i\hat{U}_{jk}B_{(j,\uparrow)}B_{(k,\uparrow)}}\) commute; the horizontal edge operators may be evolved simultaneously by an argument similar to that made above for syndrome measurements, as can the vertical edge operators. Two layers of evolutions are required in total. A pair of operators, each of the form \(e^{-iU_{jk}B_{(j,\uparrow)}A_{(j,\uparrow),(k,\uparrow)}}\) or \(e^{-iU_{jk}A_{(j,\uparrow),(k,\uparrow)}B_{(k,\uparrow)}}\), will commute in two cases: when their edge operators share an even number of vertices, and when exactly one of their vertex operators lies on the vertex shared by both edges. All operators of these two forms may be evolved in four layers, by evolving the operator types \(e^{-iU_{jk}B_{(j,\uparrow)}A_{(j,\uparrow),(k,\uparrow)}}\) and \(e^{-iU_{jk}A_{(j,\uparrow),(k,\uparrow)}B_{(k,\uparrow)}}\) types separately, each type's stage containing a layer for horizontal and a layer for vertical edges. Note that doubled edges have zero weight and are not evolved. This results in a total of \(1+2+4=7\) evolution layers to prepare the ansatz state. We assume as before that the two-qubit \(B_{i}\) evolutions require one hardware-native two-qubit gate. For the other two types of evolutions, the circuit shown in Figure 17 shows the worst depth and gate-count cases: 13 two-qubit gates in 11 layers. Thus, the total number of layers for ansatz preparation is \[1+11*(2+4)=67.\] The interaction graph has \(mn\) vertices and \(m(n-1)+n(m-1)\) edges, \(m+n\) of which are doubled with zero interaction weight on doubled edges. Thus the total number of two-qubit gates for ansatz preparation is \[mn+13\cdot 3(m(n-1)+n(m-1))=79mn-39(m+n-1).\] Figure 21 shows the two-qubit-gate and depth costs for \(4\times 4\), \(8\times 8\), and \(16\times 16\) planar lattices. Here, VQE is assumed to consist of zero-state preparation, ansatz evolution, application of a single Hamiltonian term (which costs zero two-qubit gates), time-reversed ansatz evolution, and measurement. The operators to be measured are the \(B_{j}\) operators, but in the non-error-detected case, this may be accomplished by performing Pauli-gate basis measurements on the individual qubits and classically post-processing, without using two-qubit gates. Error-detection is performed by repeating syndrome measurements performed during zero-state preparation. In this case, measuring the \(B_{j}\) operators using single-qubit measurements plus classical post-processing is undesirable. Such measurements must be performed at the end of a computation and a fault during error-detection could then propagate to the measurements. We therefore measure the \(B_{j}\) operators before error-detection, making the cost of error-detection equal to the cost of syndrome measurement, plus \(4MN\) two-qubit gates, assuming the \(B_{j}\) measurement circuit shown in Figure 20. Some error-detection metrics require sampling over the implemented circuit. The fraction of cases in which multiple errors prior to error-detection result in a detected error is one such metric; the rate of errors occurring during error-detection which result in a detection is another. For simplicity we shall, to begin with, assume that error-detection never occurs when multiple errors occur in a circuit, and that errors occurring during error detection are always undetected. These assumptions are extremely conservative and will be revised later. 
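The counts above can be tabulated mechanically. The sketch below (our arithmetic, simply re-evaluating the expressions quoted in the text rather than reproducing the entries of Figure 21) prints the state-preparation and ansatz costs for the lattice sizes considered.

```python
def ansatz_costs(m, n):
    """Two-qubit-gate and depth estimates for an m-by-n planar spinless-Hubbard lattice,
    evaluated directly from the counts quoted in the text."""
    edges = m * (n - 1) + n * (m - 1)       # grid edges (doubled boundary edges carry zero weight)
    prep_gates = (m - 1) * (n - 1) * 10     # loop-operator syndrome measurements for zero-state prep
    ansatz_gates = m * n + 13 * 3 * edges   # one gate per B_j evolution; up to 13 gates for each of the
                                            # three edge-operator evolutions associated with an edge
    layers = 1 + 11 * (2 + 4)               # = 67 evolution layers
    return prep_gates, ansatz_gates, layers

for m, n in [(4, 4), (8, 8), (16, 16)]:
    prep, gates, layers = ansatz_costs(m, n)
    print(f"{m}x{n}: state-prep 2q gates <= {prep}, ansatz 2q gates ~ {gates}, ansatz depth = {layers} layers")
```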
Figure 20: Measurement of a \(B_{j}\) operator performed under the reduced connectivity assumptions shown in Figure 16. **Lemma 7.1**.: _Suppose a quantum circuit contains \(c\) noisy two-qubit gates, producing an error on each of the qubits with (independent) probability \(1-s\). After these gates, \(d\) noisy two-qubit gates are employed to provide single-qubit error detection. Suppose further that error-detection never succeeds when multiple errors occur and that any error occurring during error detection is undetected. Then error detection increases the fraction of no-detected-error computations which are correct whenever_ \[-2cs^{2c+2d}+2cs^{2c+2d-1}+s^{2d}-1>0 \tag{18}\] Proof.: The probability of an unmitigated computation proceeding without error is \[p_{g}=s^{2c}. \tag{19}\] The probability of an error occurring during error detection, after an error-free computation, is \[p_{e}=p_{g}(1-s^{2d})=s^{2c}(1-s^{2d})\] The probability of a single error occurring during computation, followed by an error-free error-detection, is \[p_{d}=2c(1-s)s^{2c-1}s^{2d}=2c(1-s)s^{2c+2d-1}. \tag{20}\] The probability of an error-detected computation proceeding without error when no error is detected is \[p_{g}^{ed}=\frac{p_{g}-p_{e}}{1-p_{d}} \tag{21}\] The fraction of correct error-unmitigated computations is smaller than the fraction of correct error-mitigated computations when \[p_{g}<\frac{p_{g}-p_{e}}{1-p_{d}},\] which simplifies to \[p_{g}p_{d}>p_{e}.\] Substituting for \(p_{g}\) and \(p_{e}\) gives \[s^{2c}p_{d}>s^{2c}(1-s^{2d})\] which simplifies to \[p_{d}>1-s^{2d}. \tag{22}\] Substituting for \(p_{d}\) gives \[2c(1-s)s^{2c+2d-1}>(1-s^{2d})\Leftrightarrow\] \[\Leftrightarrow-2cs^{2c+2d}+2cs^{2c+2d-1}+s^{2d}-1>0\] Success probability threshold \(s\) values for improvement for different lattice sizes are shown in Figure 22. They show that error-detection in BKSF circuits can provide a benefit, compared to BKSF circuits without error detection, in error regimes above one error in \(10^{5}\) gates. However, the benefit is modest. On the other hand, our assumptions severely undercount the number of detected errors. Figure 21: Two-qubit-gate and depth costs for planar lattices. An unmitigated VQE circuit consists of zero-state preparation, Ansatz state evolution, Hamiltonian evaluation (requiring zero two-qubit gates), conjugate Ansatz state evolution, and one-qubit measurements with classical post-processing. With error detection, the circuit consists of zero-state preparation, Ansatz state evolution, Hamiltonian evaluation (requiring zero two-qubit gates), conjugate Ansatz state evolution, \(B_{j}\) operator measurements, and error detection (using the same circuitry as zero-state preparation). Computing the detected error rate exactly would require computing the distribution of error syndromes from the distribution of errors. Instead, we estimate the probability \(p_{a}\) of detecting an arbitrary error. If \(p_{a}\) is known, the probability of an arbitrary error during computation, followed by an error detection, is \[p_{d}=(1-s^{2c})p_{a}.\] Here it is not assumed that the error detection circuitry operates correctly, merely that in the presence of an error a nontrivial syndrome occurs. From Equation 22 in the proof of Lemma 7.1, error detection improves the ratio of correct no-detected error calculations when \[p_{d}>(1-s^{2d})\Leftrightarrow(1-s^{2c})p_{a}>(1-s^{2d})\Leftrightarrow-p_{a}s^{2c}+s^{2d}+p_{a}-1>0. 
\tag{23}\] Though \(p_{a}\) is difficult to compute, circuitry can be optimized in order to improve it. We propose that it should not be difficult to produce a scheme in which \(p_{a}\) is greater than the probability that two single-qubit syndromes, sampled uniformly, produce a detectable error. This value can be computed as an estimate for \(p_{a}\) and used to estimate error bounds. Consideration of Figure 10 shows that there is a single weight-two logical operator for each vertex, edge, and doubled edge, which may be obtained by ordering two single-qubit Pauli errors in two ways, plus there are six ways to obtain a trivial error on each of the vertices. Assuming a square lattice (\(n=m\)): \[1-p_{a}\cong\frac{2(m^{2}+2m(m-1)+2m)+6m^{2}}{(6m^{2})^{2}}=\frac{12m^{2}}{(6m^{2})^{2}}=\frac{1}{3m^{2}}. \tag{24}\] Figure 23 shows \(p_{a}\) values for various lattice sizes, and the odds of completing a circuit successfully at those thresholds. At threshold, circuits complete successfully so rarely that the output is essentially noise. Suppose we want no-error-detected computations to be error-free with probability at least \(p_{g}^{ed}\). We will assume that whenever computation proceeds correctly but error-detection does not, a correct result is flagged erroneously and discarded. Note that this assumption underestimates the fraction of correctly executed circuits and overestimates the rate at which circuits are discarded. Under this assumption, the undiscarded results consist of those with correct computation and error detection (which occur with probability \(s^{2(c+d)}\)) along with those with incorrect computation and unsuccessful error-detection (which are disjoint from the first set, and occur with probability \((1-s^{2c})(1-p_{a})\)). Figure 22: Threshold and high accuracy success probabilities. Improvement threshold is the error-free qubit gate rate \(s\) at which error-detection provides a benefit which justifies its cost. \(p_{g}\) is the fraction, in the absence of error detection, of circuit executions that complete without error. \(p_{d}\) is the probability that a single-qubit error will occur and be detected. \(p_{g}^{ed}\) is the fraction of circuits, after error detection and discarding, that complete without error. Figure 23: Circuit success probabilities and thresholds for a more optimistic error estimate. \(p_{a}\) is the estimated probability of error-detection circuitry flagging an arbitrary error. Improvement threshold is the per-qubit-gate-success rate \(s\) at which error detection improves accuracy. \(p_{g}\) is the probability of the error-detected circuit completing without flagging an error. Then we need the following: \[\frac{s^{2c+2d}}{s^{2c+2d}+(1-s^{2c})(1-p_{a})}>p_{g}^{ed}\Leftrightarrow s^{2(c+d)}(1-p_{g}^{ed})+s^{2c}p_{g}^{ed}(1-p_{a})-p_{g}^{ed}(1-p_{a})>0. \tag{25}\] The required \(s\) accuracy thresholds to obtain runs which are 95% accurate are shown in the rightmost columns of Figure 23. ### Repeated rounds of error detection Error-correction relies on repeated measurement of syndromes to keep accumulating errors within the correctable error space. It is reasonable to ask in what regime repeated rounds of error-detection are helpful. Suppose a circuit contains two successive rounds of syndrome measurements, one after \(c_{1}\) gates of the computation and one at the completion, after \(c_{2}\) more gates, plus \(d\) gates of error detection. For simplicity, and compatibility with the single-round notations, let us assume \(c_{1}=c_{2}=c\). 
Then, using the formula from 25, the probability, after rejections due to error detection, of producing a correct calculation is estimated by \[\left(\frac{s^{2c+2d}}{s^{2c+2d}+(1-s^{2c})(1-p_{a})}\right)^{2},\] and the probability of producing a correct calculation during a single round of error detection is \[\frac{s^{4c+2d}}{s^{4c+2d}+(1-s^{4c})(1-p_{a})}.\] Thus improvement occurs when \[\left(\frac{s^{2c+2d}}{s^{2c+2d}+(1-s^{2c})(1-p_{a})}\right)^{2}>\frac{s^{4c+2 d}}{s^{4c+2d}+(1-s^{4c})(1-p_{a})}\Leftrightarrow\] (by taking reciprocals and cancelling denominators), \[\left(s^{2c+2d}+(1-s^{2c})(1-p_{a})\right)^{2}<s^{4c+4d}+s^{2d}(1-s^{4c})(1-p_ {a})\Leftrightarrow\] (by expanding terms and cancelling common summands), \[2s^{2c+2d}(1-s^{2c})(1-p_{a})+(1-s^{2c})^{2}(1-p_{a})^{2}<s^{2d}(1+s^{2c})(1-s ^{2c})(1-p_{a}).\] (by cancelling common factors), \[2s^{2c+2d}+(1-s^{2c})(1-p_{a})<s^{2d}(1+s^{2c})\Leftrightarrow\] (by solving for \(s^{2d}\)), \[s^{2d}<\frac{-(1-s^{2c})(1-p_{a})}{s^{2c}-1}\Leftrightarrow\] \[s^{2d}<1-p_{a}.\] This estimate gives a budget of \[d_{b}<\frac{\ln(1-p_{a})}{2\ln s} \tag{26}\] gates for error detection. Error-detection budgets for various lattice sizes and gate accuracies are shown in Figure 24. They show that a second round of error detection can provide improved probability of success, at the cost of modest increase in the chance of detectable failure. Note that the improvement in accuracy will also be modest, as the estimate \(p_{a}\) also serves as an estimate on the upper bound probability that an error which is detectable midway through a computation will remain so at final error-detection. If more rounds of error-detection are applied to the same computation, the probability of not discarding a computation decreases exponentially, while the benefits become increasingly modest; the unique contribution of each round of error detection is to catch errors occurring after the previous round and becoming undetectable between the current and next rounds. ## 8 Conclusion In this work we demonstrate that the Bravyi-Kitaev Superfast encoding can be used to perform error detection. This is significant because it shows that advanced forms of error mitigation may be possible when using such an encoding that would otherwise not be possible without encoding the result inside an error detecting code. We find that the process requires low weight stabilizer measurements and provide new approaches for performing a variational quantum eigensolver algorithm within the code space. We show the circuit is fault tolerant, given either a native or fault tolerant implementation of an \(e^{-i\theta Z\otimes Z}\) gate. We find from numerical studies that thresholds exist for these protocols which can range for a \(16\times 16\) Fermi-Hubbard model between 99.9% under optimistic assumptions and 99.997% under more pessimistic assumptions. These numbers are exacting, but it is worth noting that such an error detecting code arises for free when using a Bravyi-Kitaev superfast encoding and thus may be of great value for early fault tolerant calculations where low-distance quantum error correcting codes are possible but high distance codes may be impractical. In such cases, a modest amount of error correction will push us below such a threshold and then our techniques can be used to further improve the quality of the estimates of a variational eigensolver after post selecting on no-error-detection events. 
This work opens a number of interesting questions about the role that fermionic representations may have going forward in quantum computing. In particular, while our work suggests the existence of a threshold for gate errors in certain error detection schemes, demonstration of large thresholds will be necessary to show an advantage for either Fermi-Hubbard or chemistry simulations on NISQ era quantum devices. It is possible that with refinements of the methods presented here or other forms of error mitigation that such improvements could be demonstrated. Further, while the weight of the stabilizers that need to be measured are modest in our setting, they still are likely to prove experimentally challenging on near term hardware with limited connectivity and finding improved methods that require even lower weight checks may prove to be valuable in such settings to enable these methods to be practically deployed. Finally, while this work has focused on fermionic representations of creation and annihilation operators, a question arises about whether similar schemes could be proposed for systems with different particle statistics such as bosons or anyons. Further it remains an open question whether the notions of fermionic encodings considered here could be generalized to allow different encodings of operators, such as adders, that are not normally thought of in the context of particle encodings. Finding more general ways to blur the boundaries between quantum algorithms and quantum error correcting codes may not only provide new ways of reducing the costs of quantum error correction but also provide a deeper understanding of the nature and structure of fault tolerant quantum algorithms. Figure 24: Gate count budgets \(d_{b}\) for a second round of error detection for various lattice sizes \(m\times n\) and gate accuracies \(s\). ## Acknowledgements NW would like to acknowledge funding for this work from Google Inc. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704 (PNNL FWP 76274).
2302.14479
A Survey of Automatic Generation of Attack Trees and Attack Graphs
Graphical security models constitute a well-known, user-friendly way to represent the security of a system. These kinds of models are used by security experts to identify vulnerabilities and assess the security of a system. The manual construction of these models can be tedious, especially for large enterprises. Consequently, the research community is trying to address this issue by proposing methods for the automatic generation of such models. In this work, we present a survey illustrating the current status of the automatic generation of two kinds of graphical security models -Attack Trees and Attack Graphs. The goal of this survey is to present the current methodologies used in the field, compare them and present the challenges and future directions for the research community.
Alyzia-Maria Konsta, Beatrice Spiga, Alberto Lluch Lafuente, Nicola Dragoni
2023-02-28T10:38:06Z
http://arxiv.org/abs/2302.14479v3
# A Survey of Automatic Generation of ###### Abstract Graphical security models constitute a well-known, user-friendly way to represent the security of a system. These class of models are used by security experts to identify vulnerabilities and assess the security of a system. The manual construction of these models can be tedious, especially for large enterprises. Consequently, the research community is trying to address this issue by proposing methods for the automatic generation of such models. In this work, we present a survey illustrating the current status of the automatic generation of two kinds of graphical security models -Attack Trees and Attack Graphs. The goal of this survey is to present the current methodologies used in the field, compare them and present the challenges and future directions for the research community. Graphical Security Models, Attack Trees, Attack Graphs, Automatic Generation, Threat Modelling, Survey ## I Introduction During the last few decades, the use of electronic devices has spread significantly. Companies and individuals are using technology for both personal and work-related reasons. Especially, in the last decade with the use of IoT devices in all aspects of our everyday life a huge amount of personal and sensitive data is stored or processed on computer networks. These kinds of devices are limited in resources and in some cases, the cryptographic algorithms are not applicable. Along with the extended use of electronic devices, the attempts by malicious users to steal this data have increased. As a consequence, the research community has focused on finding ways to protect this data from malicious actors. One well-known solution for assessing the security of a system is the use of graphical security models. These models represent security scenarios and help security experts identify the system's weaknesses. Their graphical representation constitutes a user-friendly way to analyze the security of the system. In the scope of this paper, we are going to examine only two kinds of these graphical representations - Attack Trees [33] and Attack Graphs [29]. For a comprehensive overview of all the available formalisms, we suggest the reader refer to [21]. Given our focus, we will use the term "Attack Models" to refer to the class of graphical security models constituted by both Attack Trees and Attack Graphs. Currently, security experts manually produce graphical security models. This procedure, especially for such large systems used nowadays, can be tedious, error-prone, and non-exhaustive. Consequently, the need for automatic procedures arose and the research community has focused on addressing this issue. Especially in the last decade, researchers are studying different methods able to automatically produce graphical security models. In this work, we present an exhaustive survey on the automatic generation of Attack Models. The main goal of this work is to provide an overview of the field and to identify current challenges and future research opportunities. To our knowledge, it is the first survey paper focusing only on the automatic generation of Attack Models. The survey presents: * A quantitative study of the works included in this survey, with diagrams depicting important features to provide the reader with a comprehensive overview of the field. * A classification of the surveyed works according to the methodologies they use to automatically generate the graphical models. * A comparative study based on the main characteristics of each work. 
* An identification of limitations and future directions for the research community, in order to reduce the gap between research developments and practitioners' needs. The paper is structured as follows: Section II presents the related work, Section III presents the main concepts of Attack Models, Section IV presents an overview of the categories used to classify the papers under study, Section V presents the research method used to structure our research, Section VI presents the quantitative study and the overview of the categories, Section VII presents the classification of the papers, Section VIII discusses the challenges and proposes future research directions and Section IX concludes the paper. ## II Related Work In this section, we discuss other surveys focusing on the automatic generation of Attack Models. We chose these works since they are providing surveys on Attack Models, and some of them also consider their automatic generation. We also include a table to summarize the main characteristics of the papers included in the Related Work section. In Table I four different characteristics are presented. We denote the symbol \(\checkmark\) when the referred paper fulfills the corresponding characteristic. The first column includes the references to the paper under examination, the second column "After 2015" refers to the papers that have been published after 2015. This is an important characteristic since more than half of the papers included in our survey were published after 2015, the third column "Automatic Generation" refers to the papers that study the automatic generation of Attack Models, the fourth column refers to whether the paper presents a classification for the papers included in their survey, based on the techniques used in each paper and the last one if the paper indicates the challenges identified in the field. The survey papers that we have identified are the following: Kordy et al. [21] present a very comprehensive survey of DAG-based graphical models for security. Their work summarizes the state of the art of the existing methodologies, they compare them and classify them based on their formalism. Although this is a very extensive survey, they do not focus on the automated generation of these models. Since it does not cover works published after 2015, their work does not cover several papers on Attack Model generation, that are instead covered in our work. Lallie et al. [23] examine the effectiveness of the visual representation of Attack Models. They analyze how these structures are representing the cyber-attacks. They conclude that there is no standard method to represent the Attack Models and that the research community should turn their attention to standardizing the representation. Although this paper is a great contribution, it does not examine the issue of the automatic generation of these structures. Wojciech et al. [39] provide a survey of the application of formal methods on Attack Trees, their semi-automated or automated generation, and their quantitative analysis. Although they consider automatic generation, their research is not only focused on that. Also, they do not consider the automatic generation of Attack Graphs. Moreover, this paper considered only formal methods approaches and hence does not cover some of the papers we have studied, for example, machine learning approaches. In our work, we discuss the different methodologies applied by the research community for automating the generation of Attack Models. 
We focus only on automatic generation and study the weaknesses and future directions in this specific field. To our knowledge, it is the first survey focusing on the automatic generation of Attack Models. We aim to give the reader a wide overview of the existing techniques currently used in this field and to provide a structured and comprehensive study for someone who wants to explore this topic. We also point out the current challenges in this field that have to be addressed. Taking everything into consideration, we provide a wider overview of the automatic generation of Attack Models than the other surveys, since it is the first survey focusing on this particular field. To be more specific, the automatic generation of Attack Trees is only studied in [39], which does not consider the papers referring to Attack Graphs [13, 18, 14, 28, 29, 34, 35, 36, 37]; we consider all of the above papers. Also, the following papers which generate Attack Trees are not considered in [39]: [17, 32, 12, 22, 7]; we, on the other hand, consider all of them. Overall, our survey covers 15 papers that are not covered in any of the existing surveys. ## III Background Before starting our survey we recall here the basic concepts of Attack Models. ### _Attack Trees_ An Attack Tree is a graphical representation of potential attacks on the system's security in the form of a tree. Initially introduced by Schneier [33], this type of representation enables developers to identify the vulnerabilities of the system and facilitates the implementation of countermeasures. Attack Trees model attack scenarios in a hierarchical way, with each labeled node corresponding to a sub-goal of the attacker and the **root** being the main one - the global goal of the attack. The rest of the labeled nodes can be either **children of a node**, a refinement of the parent's goal into subsidiary goals, or **leaf nodes**, representing attacks that cannot be further refined in order to be implemented, also called basic actions. An Attack Tree is a 3-tuple \((N,\rightarrow,n_{0})\), where \(N\) is a finite set of nodes, \(\rightarrow\) is a finite acyclic relation of type \(\rightarrow\subseteq N\times M(N)\) - where \(M(N)\) is the set of multi-sets over \(N\) - and \(n_{0}\) is the root node, such that every node in \(N\) is reachable from \(n_{0}\) [24]. The basic formal model of Attack Trees incorporates two types of refinements: OR and AND. OR nodes represent disjunction (choice), where the parent node's goal is achieved when at least one of the children's sub-goals is achieved; AND nodes represent conjunction (aggregation), thus requiring all children's sub-goals to be fulfilled. Several variants of Attack Trees have been proposed. For example, the sequential conjunction refinement SAND is similar to AND but requires a specific sequential realization of the children [24, 39]. One example of an Attack Tree is illustrated in Figure 1. Fig. 1: Attack Tree from [16] In summary, there are two separate ways for the attacker to accomplish their goal, namely becoming root: with or without authentication. The two refined authentication options are ssh and rsa, and both must be used because an AND arc is shown. On the other hand, if no authentication is carried out, the user must first be granted access using ftp and then rsh. Following the acquisition of privileges, lobf comes next. 
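To make the AND/OR semantics concrete, the following is a minimal illustrative sketch of an Attack Tree as a data structure, together with a recursive check of whether the root goal is achievable given a set of basic actions. The node labels loosely follow the example above and are ours; they are not taken from [16] or from any of the surveyed tools, and the sequential nature of the ftp/rsh/lobf branch is deliberately ignored (a SAND refinement would additionally enforce an order on the children).

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class ATNode:
    """A node is either a leaf (basic action) or an internal AND/OR refinement."""
    label: str
    gate: str = "LEAF"                       # "LEAF", "AND" or "OR"
    children: List["ATNode"] = field(default_factory=list)

def achievable(node: ATNode, actions: Set[str]) -> bool:
    """Is the (sub-)goal at `node` achieved, given the basic actions the attacker can perform?"""
    if node.gate == "LEAF":
        return node.label in actions
    results = [achievable(child, actions) for child in node.children]
    return all(results) if node.gate == "AND" else any(results)

# Toy tree in the spirit of Figure 1 (hypothetical labels, sequencing ignored).
root = ATNode("become root", "OR", [
    ATNode("with authentication", "AND", [ATNode("ssh"), ATNode("rsa")]),
    ATNode("without authentication", "AND", [ATNode("ftp"), ATNode("rsh"), ATNode("lobf")]),
])

print(achievable(root, {"ftp", "rsh", "lobf"}))  # True: one OR branch is fully satisfied
print(achievable(root, {"ssh"}))                 # False: the AND branch also needs rsa
```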
It is readily apparent that the tree notation is very appealing and convenient for a threat analysis process since it can include multiple attacks derived from physical, technical, and even human vulnerabilities [39]. ### _Attack Graphs_ Attack graphs are graphical representations of all the paths through a system that end in a condition in which an attacker has successfully achieved their malicious goal. They outline all potential vulnerabilities and all possible attack paths in a given system and they are frequently used to represent complex attacks that have multiple paths and goals [23]. Attack Graphs play an important role in security, as they directly show the existence of vulnerabilities in the system and how attackers can exploit these vulnerabilities to implement an effective attack [41]. An Attack Graph or AG is a tuple \(G=(S,\rightarrow,S_{0},S_{s})\), where \(S\) is a set of states, \(\rightarrow\subseteq S\times S\) is a transition relation (also written \(\tau\)), \(S_{0}\subseteq S\) is a set of initial states, and \(S_{s}\subseteq S\) is a set of success states [34]. Intuitively, \(S_{s}\) denotes the set of states where the intruder has achieved their goals. Unless stated otherwise, we assume that the transition relation \(\tau\) is total. We define an execution fragment as a finite sequence of states \(s_{0},s_{1},...,s_{n}\) such that \((s_{i},s_{i+1})\in\tau\) for all \(0\leq i<n\). An execution fragment with \(s_{0}\in S_{0}\) is an execution, and an execution whose final state is in \(S_{s}\) is an attack, i.e., the execution corresponds to a sequence of atomic attacks leading to the intruder's goal [34]. One example of an Attack Graph can be seen in Figure 2. The example represents the Attack Graph of a simple network. There is one switch connected to the internet on one side and to a workstation and a printer on the other end. The Attack Graph represents different states of the network and the edges are exploits that an attacker can use [8]. In the specific example, the virus can exploit a particular vulnerability in the network in order to transition to State B, where it can exploit another vulnerability to gain root access. ### _Attack Trees vs Attack Graphs_ A graph can be seen as a set of objects connected by a set of edges. A graph can be either directed or undirected. A directed graph has edges that represent a specific direction: one node is set to be the origin and the other one the destination. The edges in an undirected graph do not specify which node is the origin and which is the end-point; they just denote a two-way connection. A tree is a special case of a graph, more specifically a Directed-Acyclic Graph (DAG). A tree contains three different types of nodes: the root node, which does not have any parents; the internal nodes, which have both a parent and at least one child; and the leaf nodes, which do not have children. So, the tree represents a hierarchy between the nodes. Taking the above into consideration, we can conclude that the basic structure of Attack Trees and Attack Graphs is different. The Attack Graph can represent more than one goal node or in some cases even cycles, if an attacker is trying the same unsuccessful action multiple times. On the other hand, the Attack Trees only represent one goal node and constitute an acyclic graph by definition. We can also observe the different visual representations in Figures 1 and 2. 
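To contrast this with the tree sketched earlier, the following is a minimal illustrative sketch of an Attack Graph as a state-transition system, together with a bounded search for attacks (executions from an initial state to a success state). The state names are hypothetical and the length bound is only needed because, unlike trees, Attack Graphs may contain cycles; in the tools surveyed below the transition relation would be derived from the network configuration and a vulnerability database rather than written by hand.

```python
from collections import deque
from typing import Dict, List, Set, Tuple

def attacks(transitions: Dict[str, List[str]], initial: Set[str], success: Set[str],
            max_len: int = 5) -> List[Tuple[str, ...]]:
    """Enumerate executions s0, ..., sn with s0 in `initial` and sn in `success`.

    The graph G = (S, ->, S0, Ss) is given by its transition relation `transitions`;
    paths are cut off at `max_len` states because the relation may contain cycles.
    """
    found = []
    queue = deque((s,) for s in initial)
    while queue:
        path = queue.popleft()
        state = path[-1]
        if state in success:
            found.append(path)      # a complete attack
            continue
        if len(path) < max_len:
            for nxt in transitions.get(state, []):
                queue.append(path + (nxt,))
    return found

# Toy graph loosely following the Figure 2 narrative (names are ours): the attacker can
# move from the internet to an intermediate state B and either gain root or fall back.
transitions = {"internet": ["stateB"], "stateB": ["root", "internet"]}
print(attacks(transitions, initial={"internet"}, success={"root"}))
```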
Fig. 2: Attack Graph from [8] Furthermore, in Attack Graphs, the event flow is represented top-down, while in the majority of the Attack Trees, this is depicted in a bottom-up way [23]. Moreover, in the same sense, Attack Trees generally use vertices to represent exploits and not preconditions, while preconditions are assumed to have been met in the transition from one exploit to the next. Attack Graphs represent both [23]. Essentially, Attack Models have a graph-based structure. The main differences are: how the event flow is depicted, the representation of full and partial attacks, and the representation of preconditions. ## IV Overview of the Categories In this section, we are going to present the different categories we used to classify the papers. Every category is a method we identified in the literature. We also analyze the dimensions we selected to study in every paper. ### _Methods_ One motivation for this work was to contribute to the research community by identifying the tools and methods that have been proposed to automatically produce Attack Models. In view of this, we examined every paper and identified the main underlying technology or method used. During this process, we identified 7 different methods, which form the categories into which the papers are classified in this survey. The categories are disjoint; the papers included in the survey fall into only one category each. In particular, the categories are: * _Templates:_ The authors generate the Attack Models using templates created from identified common patterns. Some of the variables in the templates can be adjusted for each case. * _Transformation Rules:_ A representation of the system is already available and some transformation rules are applied in order to obtain an Attack Model. * _Library based:_ The Attack Model is generated given a library. A library contains some rules/patterns that must be used identically, whereas _Templates_ can be modified in each case. Also, _Transformation rules_ refer to some specific rules for translating from one system to another. In any case, for the papers classified in the _Library based_ category, it is explicitly stated that they use some sort of library; the same applies to _Templates_. * _Artificial Intelligence:_ Artificial intelligence techniques and algorithms such as action planning and machine learning are used in order to obtain an Attack Model. * _Logical Formulas:_ A paper falls into this category when the system is represented by logical formulas. The Attack Model is derived from processing the logical formulas. * _Model Checking:_ A model checker is used to model the system and a security property is checked. If the property does not hold, the model checker returns a counterexample that represents a path in the Attack Model. * _Reachability:_ The generated Attack Model is based only on an analysis of the reachability of the nodes of the network. This category includes only network security. While studying the literature, we selected some features worth studying in each paper, in order to identify challenges and future directions in this field. Our primary consideration is to spot the gap between the research in this field and the actual employment of the suggested techniques in real-case scenarios. We are going to examine 7 dimensions in each category: * _Proofs:_ If a paper includes formal proofs for the algorithms applied for generating the Attack Models. Proofs and semantics are important in order to support a method and identify possible limitations. 
* _Code:_ If a paper includes a reference to the implementation of the proposed solution and if the code is available online. In this case, security experts will be able to use the code to test some cases and expand the method. * _Prerequisites:_ The prerequisites one should acquire in order to be able to apply the proposed solution. Some methods require certain information or data the user must possess in order to apply the suggested solution. It is important to know what exactly is required to apply each method. Some prerequisites may be hard to acquire or critical for security. * _Domain:_ If a paper proposed a Network Defined, Sociotechnical, or General solution. It is important to know what kind of attacks are considered in every solution, in order to select the best method. Each sub-class of the dimension _Domain_ is explained in detail in Subsection IV-B. * _Graph or Tree:_ If the proposed solution yields a Tree or a Graph. Some tools support Graphs and other Trees, so it is important to know the number and the quality of tools and methods available for each model. * _Experiments:_ If the authors conducted experiments. Experiments are important in order to ensure the employment of the method. We observed that non of the papers have a reproducibility batch. * _Scalability:_ If the authors discuss the scalability of their solution. Scalability is a crucial factor for real-life cases. ### _Network Security vs General case_ There is one of the dimensions that is worth discussing in more detail. The Domain dimension includes two sub-classes representing the attacks depicted by the Attack Models: the network-defined attacks and the general case. The first category is network-defined attacks. In this case, the works under study are focused on dealing with network security, meaning that they engage only with network-related attacks. We classify in this category only the papers that explicitly state that their work exclusively focuses on representing network-related attacks. The second category denoted the general case, where the papers are taking into account both physical and network-related attacks. In many cases, the security of the system is compromised by human errors. Especially in large organizations where many people interact with extensive networks, it is very common for attackers to use social engineering or take advantage of human errors. During our research, we also discovered two papers [9, 10] referring to the term _socio-technical_ system. This term refers to a system involving: humans, machines, and interaction with the environment [5]. The socio-technical systems have five key characteristics as stated by Baxter et al. [5] and form their own category. In terms of attack types covered by the socio-technical systems, we found during our research that they represent both network-related and physical attacks, like social engineering and so we include them in the general case, but we state explicitly that the papers are referring to socio-technical systems. ## V Research Method In this section, we are presenting the procedure we followed in order to identify the papers included in the survey. Following, we describe the research questions we are aiming to answer, and the research method applied in order to gather the final pool of papers used. ### _Research Questions_ Our work is aiming to examine and discuss the current literature for the automatic generation of Attack Models. Underlined below are the research questions that motivated us to conduct this survey. 
* **RQ1:**What kind of techniques have been proposed for automatically generating Attack Models? * **RQ2:**Which of the 7 dimensions are being considered in each paper (proofs, code, prerequisites, domain, graph or tree, experiments, scalability)? * **RQ3:**What are the challenges/limitations we identified in the field? ### _Research Method_ We decided to conduct our research using the snowballing technique [40]. The snowballing technique refers to the procedure of identifying relevant papers from the reference list or the citations of a selected paper. Our final pool includes 20 papers. The steps of the procedure we followed in order to identify our final pool of papers are the following: #### V-B1 Start Set The first step is forming the start set of papers. Firstly we had to identify relevant keywords to form a query to the selected database. The keywords we selected to use are: "Automatic Generation", "Automated Generation", "Attack Trees" and "Attack Graphs". We performed our research in DTU Findit [https://findit.dtu.dk](https://findit.dtu.dk), which is an open (guest access) database and includes publications from well-known journals and databases. We used the following query: title:"(Automatic generation" OR "Automated generation") AND title:("Attack trees" OR "Attack graphs"). Our search returns 13 papers, of which 3 were not available online. So based on relevance we formed our start set including five papers [2, 25, 34, 38, 41]. #### V-B2 Iterations After finding the start set, we have to decide which papers we are going to include in our final pool. For this purpose, we applied _Backward and Forward Snowballing_. Backward snowballing refers to the examination of the reference list of the papers. In order to identify if a paper will be included we extract some information regarding the title, the author, and the publication venue. Naturally, we should also take into account the context in which the paper is referenced. At this point, if a paper was still in consideration, we read the abstract and other parts of the paper in order to decide if the paper will be included. The forward snowballing is conducted in order to identify papers from the citation list of the paper being examined. Again, for each paper in the citation list, we extracted some basic information, we took into consideration in which context the citation is taking place and for the final decision, we read parts of the paper [40]. ## VI Overview - Quantitative Study In this section, we are giving an overview of the field through a quantitative study. The reader can find information regarding the quantity of the papers taking into account specific characteristics. The quantitative study allows us to have a better view of the field and the lack of specific information. ### _Publisher and Publication Year_ In this part, we provide statistics regarding the publishers and the publication year of the papers. Our goal is to identify the most popular venues and how the interest of the scientific community regarding the automatic generation of Attack Models altered over the years. We can observe in Figure 3 that the papers included in this survey have been published by 4 different publishers - IEEE, Springer, ACM, MDPI. The publishers with the most publications in the field are IEEE and Springer. Also in Figure 4 we can see the distribution of the papers through the years. The research community turned their interest to the field in 1998 and, at intervals, it has been active since. 
The interest decreased between 2007 and 2012, but since then the research community seems to be more active in the specific field. For better visualization, the years with zero publications are not presented in the diagram. We can see that the field is really limited, but we strongly believe that the automatic generation of Attack Models is an upcoming topic, and a survey exploring its techniques is missing. Fig. 3: Percentage of papers published by each publisher Fig. 4: Percentage of papers published each year ### _Percentage of papers in each category_ In this work, we classify the papers into 7 different categories. Here we provide information about how many papers are in each category. We can observe in Figure 6 that most of the papers fall into the model-checking category, signifying that the research community has preferred to use model-checking techniques for the automatic generation of Attack Models. Later in the paper, we examine each category separately and provide the reader with proper information, in order to establish a better understanding of each one. ### _Experiments, Proofs, Code, Scalability_ We also present how many papers provide experiments, proofs, code, or the scalability of their solution. From our perspective, these four dimensions (introduced in Section V) studied for each paper are concrete evidence of the quality of the suggested generation algorithm. We also present information about how many papers refer to the automatic generation of Attack Graphs and how many papers refer to the automatic generation of Attack Trees. We concluded that almost half of the papers present Attack Graphs and the other half Attack Trees. We can see the results in Figure 5. We can observe in Figure 5 that less than half of the papers present experiments or proofs and even fewer refer to the scalability, in terms of complexity, of the Attack Model generation algorithm. ## VII Classification based on the categories In this section, we present our findings for each category. At the beginning of each category, we present a brief overview of our findings, followed by a detailed presentation of each paper and a discussion of the limitations of each one. We also include tables to graphically represent the key characteristics of every paper, including a column for limitations and the year of publication. In Tables II-VIII, ten different characteristics have been identified. The first column includes the reference to the paper under examination, and the rest of the columns refer to the different dimensions we identified in Section III. More specifically, the second column "Domain" refers to whether the papers include only network-related attacks or general attacks, the third column refers to whether the paper presents Attack Trees or Attack Graphs, the fourth whether the paper presents experiments, the fifth whether the code of the proposed solution is available, the sixth whether the paper presents proofs, the seventh if there is a reference to the scalability in terms of complexity, the eighth the year of publication, the ninth if the proposed solution requires any prerequisites, and finally the tenth a limitation. We use text to indicate the Domain (general or network-defined attacks), the Attack Model (Tree or Graph), the Prerequisites, and the Limitations. We denote the symbol \(\checkmark\) when the referred paper fulfills the corresponding characteristic for Experiments, Code, Proof, and Scalability. 
### _Logical Formulas_ All of the papers falling in this category use logical formulas to represent the system. Afterward, they exploit this structured formulation to generate the Attack Models. This category includes 10% of the papers. We can see that the range of publication years is from 2006 to 2014. All of the papers refer to the scalability of their solution. We can also see that 1 out of 2 (50%) papers generates Attack Trees and 1 out of 2 (50%) generates an Attack Graph. Additionally, it is worth pointing out that 1 out of 2 (50%) papers presents a solution related only to network security and 1 out of 2 (50%) presents a general approach regarding the attacks taken into account. Furthermore, 1 out of 2 (50%) papers has the code available online, and 1 out of 2 (50%) papers presents experiments. All of the papers provide formal proofs for their solution. Fig. 5: Number of papers that fall into/examine each dimension, out of 20 papers Fig. 6: Percentage of papers in each category Below we present the papers included in this category. Vigo et al. [38] propose a static-analysis approach. The authors use the Quality Calculus [26] (an extension of the \(\pi\)-calculus) as a specification language. The processes are translated to propositional formulae, which represent the connection between channel knowledge and program point accessibility. Then, the authors propose a backward chaining search over the formulae from a program point \(l\) in order to generate an Attack Tree. The main idea is to find the possible ways to access program point \(l\). The backward chaining exposes all the paths leading to point \(l\), thus unveiling all the information needed to reach that point. The attacker is able to send messages to a channel, for example, the password, but the content cannot be verified. Also, when an attacker manages to send a message to the channel for the password, it is assumed that the password is known, and this property holds forever. The proposed way defines the system in a formal manner and the threat scenarios are derived automatically from the definition of the system. Moreover, process calculi have been used broadly for defining software systems and organizations in a coherent way. It is worth pointing out that the authors are studying Attack Trees, but their approach can also be applied to produce Attack Graphs. They also discuss the scalability, saying that the worst case is exponential, but this case does not occur systematically as in the model checking approaches. The limitation of this approach is that they can only check if something has been received in the channel, but not the content. Finally, there is no reference to the soundness of the result and the generated Attack Trees are mostly used for quantitative analysis. for a more general class of properties. In this work, the state of the model is depicted as a set of booleans representing the configuration of the network and the attacker's actions as state transitions. As a consequence, the state space created is exponential in the number of the system's variables. This work involves Attack Graphs focused on network-defined attacks. The authors present proofs and experiments but do not present the code and the scalability of their solution. Sheyner et al. [35] in an earlier work presented a method to generate attack graphs automatically, with the use of model checking [34]. 
The authors choose six network components in order to construct network attack models: a set of hosts, a connectivity relation, a trust relationship among the hosts, an intruder model, a set of individual actions the intruder can exploit to design an attack, and an intrusion detection system. In order to construct the set of actions the intruder can exploit, the authors use real-world vulnerabilities from the Common Vulnerabilities and Exposures (CVE) database. In order to construct the attack graph, the toolkit checks a security property with model checking, in order to ensure that the property is satisfied. An overview of the toolkit is also presented, along with the main components and the user interface. The authors focus on Network Defined attacks. They do not present proofs, the scalability of the solution, or the code. However, they present experiments. The required prerequisites are the network topology, configuration data for each networked host, and a library of attack rules. As a future work, the authors want to specify a library of actions based on a vulnerability database provided by SEI/CERT. This paper outlines a toolkit. Pinchinat et al. [31] present ATSyRA. ATSyRA is a tool implemented on top of Eclipse to help security experts interact with a user-friendly environment when designing Attack Trees. The main motivation for the implementation of the tool was the security of military buildings. As a first step, the security expert has to define the system: the building description, the attacker's strength, and their attack objective. The second step is to run the generation of the attack scenarios. This step compiles the input specification into an Attack Graph. As a third step, the security expert specifies a set of high-level actions (HLA). The HLA implies how it can be refined into sub-actions. The final step is to run the Attack Tree synthesis. This step uses the information given in step 3 and the graph produced in step 2, to construct the Attack Tree [30]. The underlying interface for achieving this is a model-checking algorithm. The paper presents general attacks and creates Attack Trees. The authors also provide the code in the form of a tool [https://atsyra2.irisa.fr/](https://atsyra2.irisa.fr/). The scalability, proofs, and experiments are not provided in the paper. We can observe that the solution is not fully automated since the security expert is also taking an active part in the procedure. Ghazo et al. [3] introduce an algorithm for automatic graph construction and visualization that exploits an existing tool of model checking. The description of the system is given as an input using an Architecture Analysis and Design Language [1]. Afterward, JKind [http://loonwerks.com/tools/jkind.html](http://loonwerks.com/tools/jkind.html) model checker is employed to check a security property and automatically produce counterexamples that form a path on the produced Attack Graph. Finally, the GraphViz tool was used to combine the paths in an Attack Graph. The present tool can generate a representation of the system, i.e. network level, but also atomic vulnerabilities, post-conditions, and properties regarding security that can be valuable for our security purposes. The system's input is the description of the system and a security property that one wants to check. Finally, as a limitation, it is pointed out that much time is spent between of the multiple tools involved in the process. Ibrahim et al. [13] present a solution including Hybrid Attack Graphs (HAG). 
Generally, HAGs capture the changes in logical parameters, as defined by the pre and post-conditions, representing the state of the system under attack. They also capture the level of resilience, which is a number indicating the dynamic response of the system under a sequence of attacks. The worst-case scenario is selected to be included in the Attack Graph, based on the level of resilience. Automated HAGs can be visualized with a Java-based tool. This procedure requires a formal description of the system's model and the security property (written at Architecture Analysis and Design Language [1]), validated by a model checker named JKind [http://loonwerks.com/tools/jkind.html](http://loonwerks.com/tools/jkind.html). The first two papers included in this category [34, 35] are using the same underlined procedure. The basic limitation of this method is that it is exponential to the variables of the system, so it is not applicable to the large systems employed these days. Furthermore, [31] is mainly focused on military building, so more examples should be employed to test this method in other fields. In [3, 13], many tools are involved in the proposed procedures so a lot of time is spent on the interactions between the different components. ### _Templates_ In this category belong the papers using templates to generate the Attack Trees/Graphs. This category includes 25% of the total papers. We can see that the year of publication varies from 1998 to 2020. We can observe in Table IV the features of each paper. Also, 3 out of 5 (60%) papers are limited to network-defined attacks, and the rest of them (40%) are dealing with general attacks. Furthermore, 2 out of 5 papers are constructing an Attack Tree, while 3 out of 5 are constructing an Attack Graph. Only 1 out of 5 (20%) of the papers is presenting experiments, the same percentage includes proofs. Additionally, 1 out of 5 (20%) papers are sharing the code of their solution. It is important to point out that none of the papers are referring to the scalability of their proposed solution. Bryans et al. [7] introduce a method to generate Attack Trees automatically from the description of the network and a set of templates. Each template represents an attack step. The templates are using variables that have to be replaced by the components of the system under investigation. The proposed algorithm is recursive, at each step, a leaf of the template trees is investigated and expanded if the leaf contains an unbounded variable. If the name of the child matches the name of the root of one of the templates, it is replaced. When the unbounded variable matches multiple templates an OR node is introduced. The authors are constructing an Attack Tree and the code is available online at [https://tinyurl.com/uoptgfb](https://tinyurl.com/uoptgfb). They also present the experiments conducted, but they do not discuss the scalability and the proofs. As a future work, they aim to expand the method to support more networks, since right now they are focused on automotive communication networks. Swiler et al [36] proposed a method to generate an attack graph, in which each node represents a state of the network. The tool has as an input the configuration of the network, the attacker's profile, and the attack templates. The attack templates represent steps of known attacks or strategies for moving from one state to another. The tool combines the input information and customizes the generic template attacks according to the attacker profile and the network configuration. 
Every graph includes some variables that represent the state of the network. When a node in a graph matches the requirements of the template a new edge is added to the network and new nodes are created. This paper is dealing with network-defined attacks. The authors are not providing experiments, proofs, or code, or discussing the scalability of their solution. Tippenhauer et al. [37] are presenting a Goal-System-Attacker graph. There is a 3-step procedure to generate it. At first, the security goal and the workflow of the system are used to produce the G-graph (Goal-graph), which represents the security goal and the workflow description. The result is used in combination with the system description to generate the GS-graph (Goal-System graph), which is then combined with the attacker model to generate the GSA-graph (Goal-System-Attacker graph). At every step, the graph is updated with further information about the environment. The authors used the framework described above and noticed that there is a series of patterns. These patterns were used to implement some templates. Afterward, they defined local extensions to progressively generate these graphs using the predefined templates. If there is a matching node, the template is integrated into the main graph. The authors are taking into account general attacks and they present proof for their work. They do not give input for the scalability, the code, or any experiments conducted. Kumar et al. [22] introduce the use of an Attack Tree template as a feature diagram, i.e. formal and graphical notations, to solve the non-standardization problem of Attack Trees. This problem states that, in general, a standard template for Attack Trees design does not exist. The template considered in this work is structured in layers each refining the previous ones, in order to construct a tree semi-automatically. This Attack Tree template can be seen as an abstraction that can capture crucial scenarios about attacks by refining hierarchical relationship rules. Moreover, the template is constructed by going through the literature on Attack Trees so that common characteristics in their design can be found. Summing up, the authors present a way for identifying proper meta-categories suitable for Attack Trees by means of feature diagrams, in order to construct the trees in a semi-automated manner. Phillips et al. [29] present a system where the graph is generated using the attack profile, the input templates, and the configuration of the system. The procedure starts from the goal node and is built backward. Then, a search of the templates is conducted to find an edge that the head matches the goal node. The paths that do not satisfy the attacker profile are eliminated. The procedure is repeated until we reach an initial state that is not the head of an edge. The authors claim that their approach can model dynamic aspects by overwriting the configuration of the network. As a prerequisite, besides the configuration of the network and the attacker's profile, a database with common attacks is also required. They do not present the code, proofs, or experiments, or discuss the scalability. Taking the above into account, the method introduced in [7] is focusing on automotive communication, and the templates produced are adjusted to this specific field. Furthermore, at [36] the user has to specify the input templates manually which can result in a limited set of templates. 
The methods from [22, 37] result in static templates that do not catch up with technological and social advancements. Also, the proposed solution at [22] results in a template of how to design the whole attack tree, which makes it really abstract, and some attacks for specific systems might be eliminated. The method at [29] produces only a limited set of templates. There are some works that require the attacker's profile as input. This addition can eliminate some possible attacks if the user does not cautiously define the profile of the attacker; the worst-case scenario should always be taken into account. ### _Library Based_ All the papers included in this category construct the Attack Models given a library. This category includes 10% of the total papers. We can see that the years of publication range from 2018 to 2020. All of the papers take into account general attacks and construct Attack Trees. Also, all of the papers present the code and proofs supporting their solution. Finally, 1 out of 2 (50%) of the papers refers to scalability. Pinchinat et al. [32] construct an Attack Tree. The main goal of this work is to construct the attack tree that explains a trace, given a library L (a set of refinement rules). This approach is also good for forensics, but one should have the logs. The Attack Tree is built according to the refinements provided by the library. The algorithm that handles the procedure is based on the CYK algorithm, which answers whether some input context-free grammar can generate some input word. The code is also available for the readers at [http://attacktreesynthesis.irisa.fr/](http://attacktreesynthesis.irisa.fr/). The procedure is semi-automatic to avoid unsound results. Also, the Attack Tree consists of AND, OR, and SAND refinements. The authors also present proofs for their solution. The mentioned scalability is that the algorithm is polynomial in the size of the trace. The attacks taken into account are general. Finally, the authors do not present experiments. Jhawar et al. [17] presented a paper focused on a semi-automatic procedure for creating attack trees. The construction of the Attack Trees can be summed up in four steps: First step: A group of experts defines an initial version of the tree. Second step: Some automatic mechanisms are used to enhance the tree. Third step: The experts curate the new version of the tree. Fourth step: Repeat steps two and three. The paper focuses on implementing the second step. The authors express a predicate-based annotation of the tree, to determine if an attack tree can be attached to another attack tree as a sub-tree. The authors construct a library of annotated attack trees using the National Vulnerability Database [https://nvd.nist.gov/](https://nvd.nist.gov/). After that, they use this library to extend manually constructed attack trees, using the Common Attack Pattern Enumeration and Classification [https://capec.mitre.org/](https://capec.mitre.org/). As future work, they want to implement a dynamic library and expand the idea to also include counterexamples. The code is available online at [https://github.com/yramirezc/lib-annotated-attack-trees](https://github.com/yramirezc/lib-annotated-attack-trees). The authors also provide the reader with proofs of their solution. Finally, they do not present experiments or the scalability of their solution. The method introduced in [32] requires the traces of an attack that has already taken place. 
It might be difficult to acquire such information, and the attack tree is generated only after the attack has already happened. This feature makes the method good for forensics. Also, the solution proposed in [17] produces a static library that has to be updated over time, as society and technology are dynamic systems. ### _Artificial Intelligence_ All the papers assigned to this category use Artificial Intelligence in order to construct the corresponding Attack Trees/Graphs. This category includes 10% of the papers included in this survey. The range of publication years varies from 2019 to 2022. We can observe that all the papers deal with Network Defined attacks and construct the corresponding Attack Graph. Also, none of the papers share the code of their solution. None of the papers in this category present proofs or the scalability of their approach. Koo et al. [18] introduce a method to support the generation of an attack graph using Deep Learning and Machine Learning. The final model takes as an input the network topology and the system information. Feature extraction is applied to acquire an attack graph generation model by exploiting the input data. Finally, the authors use an evaluation metric to evaluate the predicted path. To sum up, the authors present a binary classification problem together with a multi-output learning algorithm, in order to generate an attack graph starting from the information about the system and the network. Bezawanda et al. [6] presented a tool that automatically generates the PDDL (Planning Domain Definition Language) representation of an Attack Graph from descriptions found in the Common Vulnerabilities and Exposures (CVE) [https://cve.mitre.org/](https://cve.mitre.org/) or the National Vulnerability Database (NVD) [https://nvd.nist.gov/](https://nvd.nist.gov/). PDDL is a set of languages able to depict a planning problem [11]. PDDL is composed of two concepts, the PDDL domain and the PDDL problem. The PDDL domain is a problem description with the corresponding actions and constraints. The PDDL domain includes abstract variables. When these variables take a specific value, an instance of the PDDL domain is created, which is called the PDDL problem. The PDDL problem is solved with the help of the PDDL planner, which tries to find a plan to satisfy the PDDL problem (a sequence of actions one must perform to achieve the end goal). The authors use natural language processing to produce the PDDL from the textual description. The procedure can be explained in 4 steps. First step: Extract information from the Vulnerability database and form the PDDL domain. Second step: Form the PDDL problems using the PDDL domain and event logs of the system. Third step: An Artificial Intelligence algorithm generates a PDDL plan for each PDDL problem. Fourth step: The tool updates the content of the PDDL domain with every modification of the input data. Also, the tool offers the transformation of the PDDL to the corresponding Attack Graph for visualization purposes. The authors do not present experiments, code, proofs, or the scalability of their solution. The authors at [18] present an easy and cheap method for generating Attack Graphs, but they explicitly state that it is applicable to small organizations. This is not the usual case nowadays, since we are surrounded by huge systems and organizations. 
Furthermore, the proposed solution on [6] provides a method that represents an Attack Graph with a Planner Domain Definition Language. The transformation from this form to an Attack Graph is time-consuming. Also, tools available in the literature for analyzing these structures (quantitative and qualitative analysis) do not support the PPDL format. ### _Reachability_ All the papers included in this category are using only the reachability of the nodes in the network in order to construct the corresponding Attack Tree/Graph. This category includes 10% of the papers. The years of publication range from 2006 to 2013. All of the papers in this category are presenting experiments and show scalability. Also, all of the papers deal with Network defined attacks. Furthermore, 1 out of 2 (50%) papers is constructing an Attack Tree, and the other one an Attack Graph. None of the papers are providing the reader with their code or formal proofs of their solution. Ingols et al. [14] proposed a system based on multiple prerequisite graphs. They argue that this type of graph is better that the full graphs or the predictive graphs due to the lack of dependencies. The system processes the input data (Map of the network) and computes the reachability matrix. The computation of the reachability matrix is equipped with some improvements crucial to saving memory and time. Some sections of the matrix are collapsed into reachability groups. The filtering rules are replaced by Binary decision diagrams, resulting in filtering rules in linear time. The graph is constructed using a breadth-first technique. Multiple prerequisite graphs consist of three different kinds of nodes; state nodes, prerequisite nodes, and vulnerability instance nodes. During the construction, every type of node is added differently to the graph. Also, this work presents a graph simplification for visual presentation. The proposed solution imports data from Nessus [http://www.nessus.org](http://www.nessus.org), Sidewinder and Checkpoint firewalls, the CVE dictionary [http://cve.mitre.org](http://cve.mitre.org), and NVD [http://nvd.nist.gov](http://nvd.nist.gov).. The generation of the Attack Graph depends on data that can be obtained quickly. The data are evaluated and reported as early as possible. This work assumes that the paths are monotonic, meaning that the attacker will never go back. Also, the authors are not modeling client-side attacks, where an attacker is taking advantage of a server to harm a vulnerable client. The authors refer to the scalability in terms of complexity as almost linear to the size of the network. Also, they present experiments for their solution. Finally, it is also worth mentioning that they do not present the code or proofs for their work. Hong et al. [12] identify all the paths in the network of the system to construct the full AT, which consists of AND and OR gates. After the construction of the full Attack Tree, there are two proposed methods: 1. Simplified Attack Tree with Full Path Calculations 2. SATwIPC simplified Attack Tree with Incremental Path calculation. The first method requires a logical expression of the attack tree and removes the sequence information from the expression but it groups similar nodes. The second method is maintaining attack path information which is constructed by exploiting network configurations and sequences of vulnerabilities. The authors present Attack Trees that depict attacks concerning only network security. 
The authors do not present formal proofs or the code of their solution, but they present the scalability and experiments. Ingols et al. [14] assume that the paths are monotonic, meaning that the attacker will never go back. This is not the real case, since some actions may force the attacker to go back into the network. Also, their method does not model client-side attacks, which can cause major issues to a system. Furthermore, [12] proposes a method that represents the Attack Tree as different paths in the network, based only on connectivity. On the contrary, [14] also exploits extra data to understand the path the attacker has taken. ### _Transformation Rules_ In this category, the papers that are using transformation rules to obtain the corresponding Attack Tree are included. This category takes into account works where a model of the system is already available and transformation rules are applied to obtain the Attack Model. This category includes 10% of the papers included in this survey. Also, the publication year for all the papers is 2016. All of the papers included in this category are taking into account General Attacks and construct the corresponding Attack Tree. Furthermore, none of the papers in this category are presenting experiments, the code, proofs, or scalability of the proposed solution. Gadyatskaya et al. [10] presents an Attack-Defense tree generation for socio-technical models. The socio-technical models capture the organizational infrastructure and human-computer interactions. These models cannot depict all the security aspects, so the need for maintaining a separate Attack (Defense) Tree is essential. The paper deals with defining some transformation rules starting from the socio-technical model in order to obtain a group of attack-defense bundles. These attack-defense bundles can be combined to generate one Attack-defense Tree. This procedure constitutes the first step and the automated procedure. The authors also propose a way to expand the model and place more countermeasures. Basically, the aim of this work is the automation of Attack Tree construction, exploiting socio-technical models. Ivanova et al. [15] are aiming to transform a graphical system model into a graphical attack model. The graphical system model includes locations, actors, processes, and items. The actors and the processes can be decorated with policies and credentials. For every component, the authors introduce some guidelines for transforming the graphical representation to the corresponding Attack Tree. As a future work, the authors would like to extend their model to include attacks aiming to disturb the environment of the system, where the current solution deal with confidentiality and integrity. Gadyatskaya et al. [10] are exploiting socio-technical models for generating Attack-defense Trees, but they limit their investigation in a limited set of attributes. It would be interesting to expand their model with more complex attributes. Furthermore, [15] deals only with confidentiality and integrity components, so the model should be enriched with more attributes in order to be applied in real-life cases. ## VIII Challenges and Future Directions After classifying and examining different characteristics of every paper we identified some challenges in the field. In this section, we present some limitations based on each category and the challenges identified that can serve as a future direction for the research community. 
### _Limitations_

In this subsection, we discuss the limitations we found in each category. Every method has its strengths; in Table IX we can see an overview of the limitations of each category and some suggestions for future directions to deal with these issues. Table IX consists of three columns: the first column includes the categories, the second column the limitations of the corresponding category, and the third column some future directions to address the limitations mentioned. In the following, we discuss our findings in detail.

The papers included in the logical formulas category do not mention the automatic generation of countermeasures for each attack. The models focus only on pointing out the attacks (weaknesses) of the system, and until now there are no works mentioning the possibility of extending the model to also produce countermeasures. Also, there are fully automated techniques that can result in unsound results or redundant outcomes. An Attack Model is considered to be sound when the decomposition of the main goal results in a set of goals whose accomplishment yields a partial accomplishment of the main goal. Also, the Attack Model has to be consistent, meaning that the proposed decomposition guarantees the achievement of the main goal, and complete, meaning that the proposed decomposition completely characterizes the main goal [4]. If the above statements are not met, the Attack Model is considered unsound. Full automation is easier and more helpful for security experts, but it can become tedious if the results are not sound. Finally, the system has to be defined as logical formulas, which in real-world scenarios might be really difficult for the security experts, since they are dealing with very large networks and might not be familiar with these languages, unless the formulas can be extracted automatically.

The templates category provides the community with a more coherent way to specify attacks and avoid unsound results. But templates can be static and difficult to adjust to multiple systems. Also, templates created from the literature can produce outdated results in the future. Finally, in order for the templates to be adjusted to a specific system, the security experts have to design these templates, a procedure that might be as tedious as designing the Attack Models from scratch on the first try.

Library-based approaches either construct a library using well-known databases in order to construct the tree or define a library of refinement rules in order to automatically construct the Attack Model. The first approach results in a static library that has to be manually updated, since the systems are dynamic entities. The second approach requires that the library with the refinement rules be updated when new components are integrated into the system. In both cases, at the first application of the method, one must define the library from scratch according to the system.

Model checking is a very popular method in the field for producing Attack Models automatically. Model checking suffers from the state-space explosion and might not be applicable to very large systems, especially when the parameters of the system are being modeled. Also, according to our research, there are no works in this category that can produce countermeasures.

Transformation-rule methods depend only on the input model, thus making the procedure static. If a new component is added to the system, the whole procedure must be repeated from scratch.
Until now, only a few models of Machine Learning and Deep Learning have been used for the automatic generation of Attack Models. The research community can experiment with other models to find the most suitable one.

Reachability approaches depend only on the reachability of the nodes. This method can exclude different kinds of attacks that rely on more complex information. This is a major limitation, since some of the attacks will not be taken into consideration. This method is applicable to networks but should be used as an extra component for the automatic generation of Attack Models.

### _Future Directions_

* _Use of \(SAND\):_ In the automatic generation of Attack Trees, only a few papers include the \(SAND\) operator. The \(SAND\) operator can represent multiple situations that occur in different kinds of attacks [16]. So we believe that it is important to also include this operator in the automatic generation.
* _Use of \(Xor\):_ An exclusive-or operator can capture situations where only one of several alternatives can actually be used - i.e. one bomb available. In our opinion, it is necessary for the research community to include such an operator in Attack Trees in general and later in their automatic generation. With the addition of the \(Xor\) operator, the security experts or the programs analyzing the potential attacks can exclude some paths or traces.
* _Dynamic Solutions:_ The systems are being upgraded constantly. New variables or configurations can make an Attack Model useless. The need for more dynamic solutions is crucial. Whenever there are changes in the system, the current version of the Attack Models should be updated, helping security experts identify new potential threats. There is one paper in the literature proposing a more dynamic solution [29].
* _Poor Semantics / Proofs:_ The use of semantics and proofs is really important. There are some works that do not use proofs or semantics to support their solution. Neglecting the semantics can lead to poor or unsound results. The use of proofs can help the reader better understand the concepts used and help the researchers expand the field. Semantics can help the community develop the same language concerning the generation of Attack Models. As discussed in Section VI, 35% of the papers present formal proofs of their findings.
* _Forensics:_ Another field important to security experts is forensics. Forensics can help security experts identify vulnerabilities that they did not previously take into account and better secure their systems. Using forensics, one can generate Attack Models that depict what went wrong in a system. This procedure requires the logs or traces of a running system.
* _Sound Results:_ The automatic generation of Attack Models provides a very good tool for security experts. But it might produce unsound results that are not usable. There are not many works providing proper means to prevent the generation of unsound results. Mostly semi-automated procedures have been proposed to avoid generating unsound results [17, 22, 32].
* _Prerequisites:_ All of the papers require some prerequisites in order to generate the Attack Models. In some cases, this might be tedious for the security experts if the prerequisites require their involvement (semi-automated), but a fully automated procedure might produce unsound results. Some prerequisites can be the configuration of the system, which is a very important file that an attacker can exploit to take control of the system. So the prerequisites can themselves be a vulnerability.
It is important to find the right balance between the number of prerequisites and producing sound results. It is also important to examine the nature of the prerequisites and how crucial they can be for the security of the system.
* _Scalability:_ Scalability is one of the main characteristics of a tool. Nowadays we use very large systems to cover our needs. The automatic generation of Attack Trees should be adapted to be able to perform under these circumstances. So, it is important to study the scalability and try to optimize the proposed algorithms.
* _Attack-Defense Trees:_ A few papers investigate the automatic generation of Attack-Defense Trees [10]. After the automatic generation of Attack Models, it is natural to investigate the automatic generation of the corresponding defenses/countermeasures.
* _Attack Graphs \(\leftrightarrow\) Attack Trees:_ It would be interesting to explore the transformation of Attack Graphs to Attack Trees and vice versa. Pinchinat et al. exploit an Attack Graph to produce an Attack Tree [30]. This method can be explored further and tested to produce more concrete results regarding this transformation.

### _Attack-Defense Trees_

Attack-Defense Trees are an extension of the Attack Trees. They consist of a root node, a set of actions an attacker might take to attack a system, and a set of defense mechanisms the defender can employ. On the Attack-Defense Trees, we have two kinds of nodes, the attack nodes and the defense nodes, both of which can be refined into sub-goals. Also, every node can have a child of the opposite type, called a countermeasure. One example of an Attack-Defense Tree can be seen in Figure 7 [19, 20]. We can observe in Figure 7 that the Attack-Defense Tree uses the same operators as the Attack Tree. The example depicts a bank account attack. The attack can be achieved either from an ATM or online. We can see in the green rectangles the proposed defenses the defender can employ.

Fig. 7: Attack-Defense Tree, from [19].

In the scope of this paper, we are not going to examine specifically the automatic generation of Attack-Defense Trees. It is an upcoming field and it is included in the future directions proposed at the end of the paper. We strongly believe that the automatic generation of Attack-Defense Trees can be the next step in the automatic generation of Attack Trees/Graphs.

## IX Conclusion

Graphical representation models are widely used in the security field to depict all possible attacks on a system. Two of the most common graphical representation models are Attack Trees and Attack Graphs. These structures provide a user-friendly representation for the security experts and, in parallel, they constitute a useful tool for analyzing the systems under investigation. Designing these kinds of graphical representations by hand can be error-prone and tedious for security experts. Consequently, the research community turned its interest to finding ways to automatically generate these structures. In this work, we present the state of the art of the automatic generation of Attack Models. We structured our survey to answer 3 research questions: **RQ1**, which techniques are currently used in the field; **RQ2**, which of the seven dimensions introduced in Section IV are being considered in each paper (experiment, scalability, code, mathematical proofs); **RQ3**, what kind of limitations we identified in the field.
We answered **RQ1** by classifying the papers into 7 different categories in Section VII and presenting the categories, with the percentage of their population, in Section VI. Following that, we answered **RQ2** in Section VI, giving an overview of how many papers include the corresponding dimensions, and in Section VII we also present tables that show in detail which paper includes which parameters. Finally, taking everything into consideration, we answer **RQ3** in Section VIII and Section VII, where we presented some limitations of each paper and of every category, and finally the general limitations/challenges we identified in the field, which also constitute future directions for the research community.

## Acknowledgment

This work has been supported by Innovation Fund Denmark and the Digital Research Centre Denmark, through the bridge project "SIOT - Secure Internet of Things - Risk analysis in design and operation".
2309.17202
Dynamic Behavior of a Multi-Layer Quasi-Geostrophic Model: Weak and Time-Periodic Solutions
The quasi-geostrophic two-layer (QS2L) system models the dynamic evolution of two interconnected potential vorticities, each is governed by an active scalar equation. These vorticities are linked through a distinctive combination of their respective stream functions, which can be loosely characterized as a parameterized blend of both Euler and shallow-water stream functions. In this article, we study (QS2L) in two directions: First, we prove the existence and uniqueness of global weak solutions in the class of Yudovich, that is when the initial vorticities are only bounded and Lebesgue-integrable. The uniqueness is obtained as a consequence of a stability analysis of the flow-maps associated with the two vorticities. This approach replaces the relative energy method and allows us to surmount the absence of a velocity formulation for (QS2L). Second, we show how to construct $m$-fold time-periodic solutions bifurcating from two arbitrary distinct initial discs rotating with the same angular velocity. This is achieved provided that the number of symmetry $m$ is large enough, or for any symmetry $m\in \mathbb{N}^*$ as long as one of the initial radii of the discs does not belong to some set that contains, at most, a finite number of elements. Due to its multi-layer structure, it is essential to emphasize that the bifurcation diagram exhibits a two-dimensional pattern. Upon analysis, it reveals some similarities with the scheme accomplished for the doubly connected V-states of the Euler and shallow-water equations. However, the coupling between the equations gives rise to several difficulties in various stages of the proof when applying Crandall-Rabinowitz's Theorem. To address this challenge, we conduct a careful analysis of the coupling between the kernels associated with the Euler and shallow-water equations.
Zineb Hassainia, Haroune Houamed
2023-09-29T12:52:22Z
http://arxiv.org/abs/2309.17202v2
# Dynamic behavior of a multi-layer quasi-geostrophic model: weak and time-periodic solutions

###### Abstract.

The quasi-geostrophic two-layer (QS2L) system models the dynamic evolution of two interconnected potential vorticities, each is governed by an active scalar equation. These vorticities are linked through a distinctive combination of their respective stream functions, which can be loosely characterized as a parameterized blend of both Euler and shallow-water stream functions. In this article, we study (QS2L) in two directions: First, we prove the existence and uniqueness of global weak solutions in the class of Yudovich, that is when the initial vorticities are only bounded and Lebesgue-integrable. The uniqueness is obtained as a consequence of a stability analysis of the flow-maps associated with the two vorticities. This approach replaces the relative energy method and allows us to surmount the absence of a velocity formulation for (QS2L). Second, we show how to construct \(m\)-fold time-periodic solutions bifurcating from two arbitrary distinct initial discs rotating with the same angular velocity. This is achieved provided that the number of symmetry \(m\) is large enough, or for any symmetry \(m\in\mathbb{N}^{*}\) as long as one of the initial radii of the discs does not belong to some set that contains, at most, a finite number of elements. Due to its multi-layer structure, it is essential to emphasize that the bifurcation diagram exhibits a two-dimensional pattern. Upon analysis, it reveals some similarities with the scheme accomplished for the doubly connected V-states of the Euler and shallow-water equations. However, the coupling between the equations gives rise to several difficulties in various stages of the proof when applying Crandall-Rabinowitz's Theorem. To address this challenge, we conduct a careful analysis of the coupling between the kernels associated with the Euler and shallow-water equations.

Key words and phrases: Quasi-geostrophic equations, weak solutions, Lagrangian solutions, V-states, vortex patches

###### Contents

* 1 Introduction and main results
* 2 Functional tools and building blocks
* 2.1 Bessel functions and asymptotic expansions
* 2.2 Kernel estimates
* 2.3 Holder-continuity of some singular operators
* 2.4 Crandall-Rabinowitz's theorem
* 3 Weak and Lagrangian solutions
* 3.1 Existence of global solutions
* 3.2 Stability of the flow-maps and uniqueness
* 4 Uniformly rotating solutions
* 4.1 Contour dynamics equation
* 4.2 Linearization around discs
* 4.3 Regularity properties
* 4.4 Spectral analysis of the linearized operator
* 4.5 Applying Crandall-Rabinowitz's theorem and proof of Theorem 1.3
* 5 Endnote

## 1. Introduction and main results

In this work, we consider the quasi-geostrophic two-layer model (Phillips' model) given by (QS2L) \[\left\{\begin{array}{ll}\partial_{t}\omega_{i}+(\nabla^{\perp}\psi_{i})\cdot\nabla\omega_{i}=0,&(t,x)\in\mathbb{R}^{+}\times\mathbb{R}^{2},\\ \omega_{i}=\Delta\psi_{i}+(-\delta)^{i-1}\lambda^{2}(\psi_{2}-\psi_{1}),&i\in\{1,2\},\\ \omega_{i}|_{t=0}=\omega_{i,0},\end{array}\right.\] where \(\psi_{i}\) and \(\omega_{i}\) stand for the stream function and potential vorticity in the \(i^{\text{th}}\) layer, respectively. The parameter \(\delta>0\) above refers to the ratio of the upper to lower layer thickness when the fluid is at rest, whereas \(\lambda\geq 0\) describes the rigidity of the interface.
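For concreteness, the second equation in (QS2L), written out for each layer, reads \[\omega_{1}=\Delta\psi_{1}+\lambda^{2}(\psi_{2}-\psi_{1})\qquad\text{and}\qquad\omega_{2}=\Delta\psi_{2}-\delta\lambda^{2}(\psi_{2}-\psi_{1}),\] so that each potential vorticity sees the other layer only through the interface term \(\lambda^{2}(\psi_{2}-\psi_{1})\).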
In particular, when \(\lambda=0\) (which corresponds to a perfectly rigid interface), the two layers become uncoupled and behave as two independent two-dimensional systems obeying the Euler equations. The system of equations (QS2L) serves as a simplified model and a foundational concept in the study of large-scale atmospheric and oceanic flows. In atmospheric applications, these layers often represent the upper and lower troposphere. On the other hand, in oceanic modeling, they might correspond to the upper mixed layer and the deeper ocean. See for instance [45] for more details. Though, from an analytical point of view, it is still far less studied than the Euler equations, the quasi-geostrophic two-layer model has enjoyed considerable interest in computational fluid dynamics. We refer to [1, 10, 11, 22, 44, 45] and the references therein for a series of results on the numerical analysis of (QS2L). In the first part of this paper, we are interested in the construction of weak solutions to the quasi-geostrophic two-layer system (QS2L) with initial data of Yudovich-type. These solutions solve a weak formulation of the equations in the distribution sense, which we introduce next. **Definition** (Weak-distributional solutions).: We say that \((\omega_{i})_{i\in\{1,2\}}\) is a weak distributional (or simply weak) solution of (QS2L) with initial data \(\omega_{i,0}\in L^{p}(\mathbb{R}^{2})\), for some \(p\in[1,\infty]\), if
* \(\omega_{1},\omega_{2}\in L^{\infty}_{\mathrm{loc}}(\mathbb{R}^{+};L^{p}(\mathbb{R}^{2}))\),
* \(\nabla^{\perp}\psi_{1},\nabla^{\perp}\psi_{2}\in L^{\infty}_{\mathrm{loc}}(\mathbb{R}^{+};L^{p^{\prime}}(\mathbb{R}^{2}))\), where \(p^{\prime}\) denotes the conjugate of \(p\),
* for any test function \(\varphi\in C^{1}_{c}(\mathbb{R}^{+}\times\mathbb{R}^{2})\), it holds that \[\int_{[0,t]\times\mathbb{R}^{2}}\omega_{i}(t,x)\left(\partial_{t}\varphi+\nabla^{\perp}\psi_{i}\cdot\nabla\varphi\right)(t,x)dxdt=\int_{\mathbb{R}^{2}}\omega_{i}(t,x)\varphi(t,x)dx-\int_{\mathbb{R}^{2}}\omega_{i}(0,x)\varphi(0,x)dx,\] for all \(i\in\{1,2\}\) and \(t\in\mathbb{R}^{+}\).
Questions of global existence and uniqueness of weak solutions have been originally addressed in the case of Euler equations by Yudovich in [41, 51, 49], where this received satisfactory answers provided that the initial vorticity is bounded or lies in \(L^{p}\) spaces, for all \(p<\infty\), with adequate assumptions on the growth of its norm as \(p\to\infty\). Later on, non-uniqueness was obtained for the forced Euler equations [4], where the vorticity is barely bounded, and in the unforced case [8] in a weaker functional setting. As is well known, the solenoidal velocity field associated with a bounded vorticity is not Lipschitz in general. Instead, it enjoys the so-called Log-Lipschitz regularity, which is crucial in the original proof of uniqueness for the Euler equations [41, 51], where the approach therein is laid out by a stability analysis in the relative energy setting. This approach does not seem to be applicable in the case of the quasi-geostrophic two-layer model (QS2L), because the equations of the velocities \(\nabla^{\perp}\psi_{1}\) and \(\nabla^{\perp}\psi_{2}\) are far from being comparable to the velocity equation in Euler's system. The alternative way to study stability aspects of models such as the quasi-geostrophic two-layer system (QS2L) would be by understanding first the same question about a different quantity.
Here, this will be done for the flow-maps associated with the velocities \(\nabla^{\perp}\psi_{1}\) and \(\nabla^{\perp}\psi_{2}\). This draws insight from the recent work [16] on nonlinear transport equations advected by a non-local velocity field. This also motivates the consideration of Lagrangian solutions, which we recall next. **Definition** (Lagrangian solutions).: We say that \((\omega_{i})_{i\in\{1,2\}}\) is a Lagrangian solution of (QS2L) if \(\omega_{i}(t,\cdot)\) is the push forward of the initial data \(\omega_{i,0}\) by the flow-map associated with the velocity field \(\nabla^{\perp}\psi_{i}\), for any \(i\in\{1,2\}\). More precisely, \((\omega_{i})_{i\in\{1,2\}}\) is a Lagrangian solution of (QS2L) if \[\omega_{i}(t,\cdot)=X_{i}(t,\cdot)_{\sharp}\omega_{i,0}\] and \(X_{i}(t,\cdot)\) solves the ODE (ODE) \[\left\{\begin{array}{l}\frac{d}{dt}X_{i}(t,x)=\left(\nabla^{\perp}\psi_{i}\right)\left(X_{i}(t,x)\right),\\ X_{i}(0,x)=x,\end{array}\right.\] for all \(i\in\{1,2\}\) and \((t,x)\in\mathbb{R}^{+}\times\mathbb{R}^{2}\). In order to construct global solutions of (QS2L) in Yudovich's class, it is important to understand the coupling in that system of equations. A crucial step in our analysis below is introducing suitable new unknowns which are made of a specific combination of the solutions \(\omega_{1}\) and \(\omega_{2}\). The new unknowns allow us to recast the equations connecting the vorticities with their stream functions in a way that is naturally compatible with the coupling in (QS2L) and which also reveals the connexion between (QS2L), Euler and shallow-water equations. This is fundamental in our proof of existence of solutions and is discussed in Section 3. Afterwards, this observation inspires introducing a similar combination of the flow-maps \(X_{1}\) and \(X_{2}\) which serves in their stability analysis. The uniqueness of weak-distributional solutions in Yudovich's class follows subsequently. This is summarized in our first main theorem which we state next. **Theorem 1.1** (Existence and uniqueness of weak solutions).: _For any initial data satisfying_ \[(\omega_{1,0},\omega_{2,0})\in L^{1}\cap L^{\infty}(\mathbb{R}^{2}),\] _there is a unique global weak solution \((\omega_{1},\omega_{2})\) to (QS2L) enjoying the bounds_ \[(\omega_{1},\omega_{2})\in L^{\infty}(\mathbb{R}^{+},L^{1}\cap L^{\infty}(\mathbb{R}^{2})),\] _as well as the conservation of norms_ \[\left\|\omega_{j}(t,\cdot)\right\|_{L^{q}(\mathbb{R}^{2})}=\left\|\omega_{j,0}\right\|_{L^{q}(\mathbb{R}^{2})},\] _for any \(j\in\{1,2\}\), \(q\in[1,\infty]\) and \(t\geq 0\). Moreover, the solution is continuous in time in the sense that_ \[(\omega_{1},\omega_{2})\in C(\mathbb{R}^{+},L^{p}(\mathbb{R}^{2})),\] _for all \(p\in[1,\infty)\)._ Before we move on to our second main result of this paper, allow us to establish a remarkable consequence of Theorem 1.1. In particular, the next corollary exhibits the connexion between solutions of the system of equations (QS2L) in the case \(\delta=1\) and the Euler equations.
**Corollary 1.2**.: _The solution \((\omega_{1},\omega_{2})\) of the system of equations (QS2L) with \(\delta=1\) and initial data_ \[\omega_{0,1}=\omega_{0,2}\stackrel{{\rm def}}{{=}}\omega_{0}\in L^{1}\cap L^{\infty}(\mathbb{R}^{2})\] _is given by_ \[\omega_{1}=\omega_{2}=\omega,\] _where \(\omega\) is the unique Yudovich solution of the Euler equation_ (E) \[\left\{\begin{array}{l}\partial_{t}\omega+v\cdot\nabla\omega=0,\\ v=-\nabla^{\perp}(-\Delta)^{-1}\omega,\\ \omega_{|t=0}=\omega_{0}.\end{array}\right.\] Proof.: The proof is straightforward once we notice that the system of equations (QS2L) is symmetric when \(\delta=1\) in the sense that, if \((\omega_{1},\omega_{2})\) is the solution associated with the initial data \((\omega_{0,1},\omega_{0,2})\), then \((\omega_{2},\omega_{1})\) is the solution associated with the initial data \((\omega_{0,2},\omega_{0,1})\). Accordingly, if we set initially \[\omega_{0,2}=\omega_{0,1},\] then it immediately follows, by uniqueness of solutions from Theorem 1.1, that \[(\omega_{2},\omega_{1})=(\omega_{1},\omega_{2}).\] Consequently, we obtain from the second equation in (QS2L) that \[\left(-\Delta+2\lambda^{2}\right)(\psi_{1}-\psi_{2})=0,\] whereby we deduce that \[\psi_{1}=\psi_{2}.\] Therefore, by inserting the latter identity in (QS2L), we conclude that \(\omega\stackrel{{\mathrm{def}}}{{=}}\omega_{1}=\omega_{2}\) solves the Euler equation (E), thereby completing the proof of the corollary. Typical examples of solutions covered by Theorem 1.1 are the so-called vortex-patches. These are vorticities uniformly distributed in a bounded region \(D\) and are generated from given initial data of the form \[(\omega_{1,0},\omega_{2,0})=(1_{D_{1}},1_{D_{2}}),\] where \(D_{i}\), for \(i\in\{1,2\}\), is a bounded domain in \(\mathbb{R}^{2}\) with smooth boundary. These initial data fall in the setting of Theorem 1.1, which ensures the existence of a unique global solution associated with them. Moreover, the patch structure is preserved by the evolution and, at each time \(t\geq 0\), the potential vorticities are given by \[(\omega_{1},\omega_{2})=(1_{D_{1,t}},1_{D_{2,t}}),\] where, for all \(i\in\{1,2\}\), the transported domain \[D_{i,t}\stackrel{{\mathrm{def}}}{{=}}X_{i}(t,D_{i})\] denotes the image of \(D_{i}\) by the flow \(X_{i}\) defined as the solution of (ODE). Note that, when \(D_{1}\) and \(D_{2}\) are discs, the potential vorticities are stationary solutions to (QS2L). Thus, it is natural to look for periodic solutions close to these stationary states. The second aim of this paper is to study the existence of time-periodic solutions of (QS2L) given by vortex patches rotating rigidly around the origin, which are described by \[(D_{1,t},D_{2,t})=(e^{\mathrm{i}\Omega t}D_{1},e^{\mathrm{i}\Omega t}D_{2}),\] for some time-independent parameter \(\Omega\) referred to as the angular velocity. The potential vorticities corresponding to the preceding domains are then stationary in the rotating frame with angular velocity \(\Omega\), and the boundaries \(\partial D_{1,t}\) and \(\partial D_{2,t}\) evolve according to the non-linear, non-local system \[\left(\nabla^{\perp}\psi_{i}(t,x)-\Omega x^{\perp}\right)\cdot\vec{n}_{i}(x)=0, \tag{1.1}\] for all \(x\in\partial D_{i}\) and \(i\in\{1,2\}\), where \(\vec{n}_{i}\) is the outward normal unit vector associated with the initial boundary \(\partial D_{i}\). Such solutions are called V-states and have been explored, first numerically, in the case of Euler's equations by Deem and Zabusky [18].
The analytical proof of their existence is due to Burbea [9] and relies on the contour dynamics equation, the conformal mapping parametrization and the celebrated local bifurcation theorem of Crandall and Rabinowitz [15]. The outcome is the existence of a countable family of local curves of V-states with \(m\)-fold symmetries (i.e. invariant by \(\frac{2\pi}{m}\) angular rotation), with \(m\geq 2\), bifurcating from the disc at the angular velocities \(\Omega_{m}=\frac{m-1}{2}\). A global continuation of these local curves was constructed in [32], where the global curves limit to a vanishing of the angular fluid velocity. Other uniformly rotating vortex patch solutions to Euler's equations were constructed, using Burbea's techniques, close to the annulus in [17] and close to the Kirchhoff ellipses in [12, 36]. We also point out that, in the last few years, there have been several investigations on the V-states in different settings, such as the study of the boundary regularity of the V-states [38], the existence of multipole vortex patches [23, 24, 29, 37, 34] and the radial symmetry properties of stationary and uniformly-rotating solutions [28, 35, 27]. Very recently, quasi-periodic patch solutions to Euler's equations were constructed using the Nash-Moser scheme and KAM theory, see [7, 31, 33]. Similar results to the ones mentioned above were also obtained for other active scalar equations such as the generalized surface quasi-geostrophic equation, the quasi-geostrophic shallow-water equations and Euler equations on the rotating unit 2-sphere. We refer to [26, 30, 46, 39, 25] and the references therein for a series of relevant results. In the second part of this paper, we adapt the techniques from [9, 36] and show how to extend their validity to build time-periodic solutions of (QS2L). In particular, our second result consists in establishing the existence of \(m\)-fold symmetric V-states for (QS2L). We point out in passing that there have been some numerical simulations suggesting the existence of V-state solutions for (QS2L) in some specific cases, see [44, 45] for instance. In the present study, we provide an analytical proof of their existence through the application of local bifurcation theory. Informally stated, our second main result is outlined in the following theorem. A slightly more detailed and precise version will be discussed in Section 4, later on. **Theorem 1.3** (Periodic solutions bifurcating from simple eigenvalues).: _Let \(m\in\mathbb{N}^{*}\), \(\delta\geq 1\), \(\lambda>0\) and \(0<b_{2}\leq b_{1}\) be a set of real numbers. Then, there exist two curves of \(m\)-fold symmetric pairs of simply connected V-states solving (QS2L) and bifurcating from the stationary states_ \[\omega_{1}=\mathds{1}_{\{x\in\mathbb{R}^{2}:|x|<b_{1}\}}\quad\text{and}\quad\omega_{2}=\mathds{1}_{\{x\in\mathbb{R}^{2}:|x|<b_{2}\}} \tag{1.2}\] _if one of the following two conditions is fulfilled:_
* _either_ \(b_{1}\neq b_{2}\) _and the number of symmetry_ \(m\) _is large enough, i.e.,_ \(m\gg 1\)_,_
* _or_ \(m\in\mathbb{N}^{*}\)_,_ \(b_{1}\in(0,\infty)\) _and_ \(b_{2}\not\in S_{m,b_{1}}\) _for some set_ \(S_{m,b_{1}}\subset(0,b_{1}]\) _containing, at most, a finite number of elements._
_Remark 1.1_.: It is to be emphasized that periodic solutions with \(m\)-fold symmetry, for any \(m\in\mathbb{N}^{*}\), bifurcating from the two discs (1.2), exist for almost all values of \(b_{1},b_{2}\in(0,\infty)\).
However, there are some values of these radii (for instance some values of \(b_{1}=b_{2}\), as we will justify later on) where the celebrated Crandall-Rabinowitz Theorem does not directly apply due to the existence of "spectral collisions". In such cases, it is not clear yet, at least by the analysis in this work, if time-periodic solutions do exist. _Remark 1.2_.: Although solutions of (QS2L) are not stable in general by switching the positions of the vorticities--unless we consider the case \(\delta=1\) as is shown in Corollary 1.2--our elements of proof of Theorem 1.3 remain unchanged in the case \(0<b_{1}\leq b_{2}\) and the same bifurcation result holds true, as well. In addition, the restriction on \(\delta\geq 1\) can in fact be relaxed, see Remark 4.2, below.

#### Structure of the paper

The remaining sections of this paper are organized as follows: The proof of existence and uniqueness of global weak solutions (Theorem 1.1) is the subject of Section 3. This is where the crucial coupling in the set of equations (QS2L) will be discussed in detail, together with the approximate scheme of (QS2L) that we utilize to build global weak and Lagrangian solutions. Afterwards, a stability result for the flow-maps associated with the weak solutions will be established, which eventually yields the uniqueness of weak solutions. At last, in Section 4, we build time-periodic solutions by employing Crandall-Rabinowitz's Theorem 2.6. This requires a precise analysis of the spectral properties of the linearized operator associated with the boundary equation around steady states (discs), which is discussed in detail in that same section. The achievement of these results also hinges upon a careful understanding of several properties of the Laplace and shallow-water kernels, as well as a fine analysis of Bessel functions. In Section 2, below, we provide the reader with a tool box of various abstract results on functional analysis and operator theory that will be employed in the proof of our theorems, and which may also serve in other contexts.

## 2. Functional tools and building blocks

Before we move on to the elements of proof of our main theorems, we devote this section to a short, self-contained introduction to the basic functional and operator-theoretic setting, as well as to the properties of the relevant kernels and all the notions that will play a role in the construction of weak and time-periodic solutions in the upcoming sections. In the sequel, we are going to use classical notations for functional spaces, such as Lebesgue spaces, the space of log-Lipschitz functions, etc. Moreover, for the sake of simplicity, we will often utilize the symbol \(\lesssim\) (resp. \(\lesssim_{\delta}\)) instead of \(\leq C\) (resp. \(\leq C_{\delta}\)) when the dependence on the constant \(C\) is harmless (resp. when the constant \(C_{\delta}\) degenerates as the parameter \(\delta\) approaches its endpoint values).

### Bessel functions and asymptotic expansions

Here, we recall a few important identities that we will later use to describe the decay properties of specific parameterized integrals. These identities, among more relevant other results, can be found in the books by Abramowitz and Stegun [3] and by Watson [50]. We first recall that the Bessel function of the first kind and order \(\nu\in\mathbb{N}\) is given by the expansion \[J_{\nu}(z)\stackrel{{\rm def}}{{=}}\sum_{m=0}^{\infty}\frac{(-1)^{m}\left(\frac{z}{2}\right)^{\nu+2m}}{m!\Gamma(\nu+m+1)},\quad\text{for all}\quad|\arg(z)|<\pi.
\tag{2.1}\] In particular, when \(\nu=n\in\mathbb{N}\), the preceding Bessel function admits the following integral representation, which can be found in [3, Identity 9.1.21], \[J_{n}(z)=\frac{1}{\pi}\int_{0}^{\pi}\cos\big{(}n\theta-z\sin(\theta)\big{)}d \theta,\quad\text{for all}\quad z\in\mathbb{C}. \tag{2.2}\] On the other hand, the Bessel functions of imaginary argument are given by \[I_{\nu}(z)\stackrel{{\rm def}}{{=}}\sum_{m=0}^{\infty}\frac{ \left(\frac{z}{2}\right)^{\nu+2m}}{m!\Gamma(\nu+m+1)}, \tag{2.3}\] for all \(z\in\mathbb{C}\) with \(|\arg(z)|<\pi\). Thus, for all \(\nu\in\mathbb{C}\backslash\mathbb{Z}\), we define the \(K\)-Bessel function by \[K_{\nu}(z)\stackrel{{\rm def}}{{=}}\frac{\pi}{2}\frac{I_{-\nu}(z )-I_{\nu}(z)}{\sin(\nu\pi)}\] and, when \(n\in\mathbb{Z}\), we set \[K_{n}(z)\stackrel{{\rm def}}{{=}}\lim_{\nu\to n}K_{\nu}(z)\] for all \(z\in\mathbb{C}\) such that \(|\arg(z)|<\pi\). Moreover, we emphasize that \[I_{-n}\equiv I_{n}\qquad\text{and}\qquad K_{-n}\equiv K_{n},\] for all \(n\in\mathbb{N}\) and that \(I_{n}\) and \(K_{n}\) are smooth functions away from the origin and satisfy \[I_{0}^{\prime}(z)=I_{1}(z)\qquad\text{and}\qquad K_{0}^{\prime}(z)=-K_{1}(z), \tag{2.4}\] for all \(z\in\mathbb{C}\) such that \(|\arg(z)|<\pi\). The preceding identities can be found in [3, pages 357-356]. The Bessel functions \(I_{n}\) and \(K_{n}\) enjoy several properties and are featured by different equivalent representation formulas. Here, we recall a useful expansion for \(K_{n}\) that will be utilized in our proof below, which can be found in [3, Identity 9.6.11] \[K_{n}(z) =\frac{1}{2}\left(\frac{z}{2}\right)^{-n}\sum_{k=0}^{n-1}\frac{(n -k-1)!}{k!}\left(\frac{-z^{2}}{4}\right)^{k}+(-1)^{n+1}\log\left(\frac{z}{2} \right)I_{n}(z)\] \[\quad+\frac{1}{2}\left(\frac{-z}{2}\right)^{n}\sum_{k=0}^{\infty} \frac{\Phi(k+1)+\Phi(n+k+1)}{k!(n+k)!}\left(\frac{z}{2}\right)^{2k},\] for any \(n\in\mathbb{N}\) with \(z\in\mathbb{C}\) such that \(|\arg(z)|<\pi\), where \(\psi(1)=-\boldsymbol{\gamma}\) stands for Euler's constant, and we set, for any \(m\in\mathbb{N}^{*}\), that \[\Phi(m+1)=\sum_{k=1}^{m}\frac{1}{k}-\boldsymbol{\gamma}.\] In particular, one has the useful identity \[K_{0}(z)=-\log\left(\frac{z}{2}\right)I_{0}(z)+\sum_{m=0}^{\infty}\frac{\left( \frac{z}{2}\right)^{2m}}{(m!)^{2}}\Phi(m+1), \tag{2.5}\] for any \(z\in\mathbb{C}\) with \(|\arg(z)|<\pi\). The following lemma summarizes several key properties of the Bessel functions. Note that most of the results in the next lemma are established in previous works, whence, we only outline the idea to justify the ones that we were not able to find in the literature. **Lemma 2.1**.: _Let \(x,y\in(0,\infty)\) be fixed such that \(x\leq y\). Then, the following holds:_ 1. _The sequence_ \(\left((I_{n}K_{n})(x)\right)_{n\geq 1}\) _is strictly decreasing and converges to_ \(0\)_. More generally, the mapping_ \[(n,x)\mapsto(I_{n}K_{n})(x)\] _is strictly decreasing on_ \(\mathbb{N}\times\mathbb{R}^{+}\)_._ 2. _The sequence_ \(\left(\frac{(\frac{x}{y})^{n}}{2n}-I_{n}(x)K_{n}(y)\right)_{n\geq 1}\) _is positive, decreasing and converges to zero. Moreover, it holds that_ (2.6) \[0<\frac{1}{2n}\left(\frac{x}{y}\right)^{n}-I_{n}(x)K_{n}(y)\leq\frac{1}{2n},\] _for all_ \(n\in\mathbb{N}^{*}\)_._ 3. _The function_ \[x\mapsto\frac{I_{1}(x)}{x}\] _is strictly increasing on_ \((0,\infty)\)_._ Proof.: The proof of the first claim of the lemma is the subject of [6] and [48]. 
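As a quick numerical sanity check of this first claim (purely illustrative, and not part of the argument), one can tabulate the products \(I_{n}K_{n}\) with SciPy's modified Bessel functions; the sample point \(x=1.5\) below is an arbitrary choice:

```python
import numpy as np
from scipy.special import iv, kv  # modified Bessel functions I_n and K_n

x = 1.5  # arbitrary sample point in (0, infinity)
products = np.array([iv(n, x) * kv(n, x) for n in range(1, 9)])
print(products)                       # values decrease monotonically towards 0
assert np.all(np.diff(products) < 0)  # strict decrease in n, as stated in Lemma 2.1 (1)
```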
Now, in order to justify the second claim, we begin with writing that \[I_{n}(x)K_{n}(y)=\frac{1}{2}\int_{\log\frac{y}{x}}^{\infty}J_{0}\left(\mu\sqrt {2xy\cosh(t)-x^{2}-y^{2})}\right)e^{-nt}dt,\] where \(J_{0}\) is given by (2.1). The latter identity can be found in [42, page 140]. Therefore, one deduces, for any \(n\in\mathbb{N}^{*}\), that \[\frac{(\frac{x}{y})^{n}}{2n}-I_{n}(x)K_{n}(y)=\frac{1}{2}\int_{\log\frac{y}{x }}^{\infty}\Big{(}1-J_{0}\left(\sqrt{2xy\cosh(t)-x^{2}-y^{2})}\right)\Big{)}e ^{-nt}dt. \tag{2.7}\] On the other hand, using the integral representation (2.2), we observe that \[1-J_{0}\left(\sqrt{2xy\cosh(t)-x^{2}-y^{2})}\right)=\frac{1}{\pi}\int_{0}^{ \pi}\Big{[}1-\cos\left(\sqrt{2xy\cosh(t)-x^{2}-y^{2})}\sin\theta\right)\Big{]} d\theta\geq 0.\] Therefore, this implies that the sequence \(\left(\frac{(\frac{x}{y})^{n}}{2n}-I_{n}(x)K_{n}(y)\right)_{n\geq 1}\) is positive and decreasing. Next, the asymptotic decay (2.6), directly follows from the fact that \[I_{n}(x)K_{n}(y)>0,\] for all \(n\in\mathbb{N}\), and the assumption that \(x\leq y\). We now turn to justify that the function \(x\mapsto\frac{I_{1}(x)}{x}\) is strictly increasing on \((0,\infty)\). To that end, we simply need to notice, in view of the expansion formula (2.3), that \[\frac{I_{1}(x)}{x}=\frac{1}{2}\sum_{m=0}^{\infty}\frac{\big{(}\frac{x}{2}\big{)} ^{2m}}{m!(m+2)!},\quad\text{for all}\quad x\in(0,\infty).\] The function on the right-hand side above is clearly increasing on \((0,\infty)\), which concludes the justification of the last claim of the lemma and completes its proof. ### Kernel estimates In this paragraph, we intend to show the continuity and boundedness of convolution-type operators that naturally appear through a specific combination between velocities associated with (QS2L). But before that, let's introduce the kernels of our interest by first recalling that distributional solutions of \[-\Delta\mathbf{G}=\delta_{0},\quad\text{in}\quad\mathcal{S}^{\prime}(\mathbb{ R}^{2})\] and, for any \(\varepsilon>0\), \[(-\Delta+\varepsilon^{2})\mathbf{G}_{\varepsilon}=\delta_{0},\quad\text{in} \quad\mathcal{S}^{\prime}(\mathbb{R}^{2})\] are respectively given by \[\mathbf{G}(x)\stackrel{{\text{def}}}{{=}}-\frac{1}{2\pi}\log|x |\qquad\text{and}\qquad\mathbf{G}_{\varepsilon}(x)\stackrel{{ \text{def}}}{{=}}\frac{1}{2\pi}K_{0}(\varepsilon|x|),\] where \(K_{0}\) is the modified Bessel function of zero order introduced in the previous section. Indeed, the first fundamental solution is a classical fact, see for instance the book by Evans [21, Section 2.2]. As for the justification for the fundamental solution of the second problem above, we refer to [20, Section 4.2]. We now consider the system of equations \[\omega_{1} =\Delta\psi_{1}+\lambda^{2}(\psi_{2}-\psi_{1}),\] \[\omega_{2} =\Delta\psi_{2}+\delta\lambda^{2}(\psi_{1}-\psi_{2}),\] for given real parameters \(\delta,\lambda\in(0,\infty)\) and two functions \(\omega_{1}\) and \(\omega_{2}\) with appropriate decay at infinity1. Therefore, it is readily seen that Footnote 1: In the case of the present paper, \(\omega_{1}\) and \(\omega_{2}\) are assumed to belong to \(L^{1}\cap L^{\infty}(\mathbb{R}^{2})\), which is enough to give a sense to all the computations in this section. 
\[\delta\omega_{1}+\omega_{2}=\Delta(\delta\psi_{1}+\psi_{2})\] and that \[\omega_{1}-\omega_{2}=\Big{(}-\Delta+(\delta+1)\lambda^{2}\Big{)}(\psi_{2}-\psi_{1}).\] Hence, by employing the fundamental solutions of the Laplace and shallow-water operators, we arrive at the identities \[\delta\psi_{1}+\psi_{2}=-\int_{\mathbb{R}^{2}}\mathbf{G}(\cdot-\xi)\big{(}\delta\omega_{1}+\omega_{2}\big{)}(\xi)d\xi\] and \[\psi_{2}-\psi_{1}=\int_{\mathbb{R}^{2}}\mathbf{G}_{\mu}(\cdot-\xi)\big{(}\omega_{1}-\omega_{2}\big{)}(\xi)d\xi,\] where we denote \[\mu\stackrel{{\text{def}}}{{=}}\lambda\sqrt{1+\delta}.\] It is then readily seen that the previous two identities lead to the following representation \[\psi_{k}(z)=\sum_{j=1}^{2}\int_{\mathbb{R}^{2}}G_{k,j}(z-\xi)\omega_{j}(t,\xi)dA(\xi), \tag{2.8}\] where we set \[G_{k,j}(x)\stackrel{{\text{def}}}{{=}}\frac{\delta^{2-j}}{2\pi(\delta+1)}\log|x|+(-1)^{k+j-1}\frac{\delta^{k-1}}{2\pi(\delta+1)}K_{0}(\mu|x|), \tag{2.9}\] for any \(x\in\mathbb{R}^{2}\setminus\{0\}\) and \(k\in\{1,2\}\). This representation will come in handy in the construction of time-periodic solutions, later on. The short introduction above motivates the study of the kernels \(\mathbf{G}\) and \(\mathbf{G}_{\varepsilon}\). For a later use in the proof of existence of weak solutions, we will also need to prescribe some fine properties of the kernels associated with the operators \[\nabla^{\perp}(-\Delta)^{-1}\qquad\text{and}\qquad\nabla^{\perp}\big{(}\Delta-\mu^{2}\big{)}^{-1},\] which we respectively denote, from now on, by \(k_{+}\) and \(k_{-}\) (with \(\mu=1\) for simplicity). More precisely, we now set, for all \(x\in\mathbb{R}^{2}\setminus\{0\}\), that \[k_{+}(x)\stackrel{{\text{def}}}{{=}}-\frac{1}{2\pi}\frac{x^{\perp}}{|x|^{2}}\qquad\text{and}\qquad k_{-}(x)=-\frac{x^{\perp}}{|x|}K_{1}(|x|), \tag{2.10}\] where \(K_{1}=-K_{0}^{\prime}\) denotes the Bessel function of order one, previously introduced in Section 2.1. Accordingly, we define the convolution operators \[K_{\pm}f\stackrel{{\text{def}}}{{=}}k_{\pm}\star f,\] for any suitable function \(f\) for which the preceding convolutions make sense. The next lemma establishes quantitative estimates on the behavior of the kernels introduced above. This will serve later on to deduce essential properties of the operators defined through a convolution with the preceding kernels, which will be useful in the proof of uniqueness of solutions to (QS2L) in Yudovich's class. **Lemma 2.2**.: _Let \(k_{\pm}\) be given by (2.10). Then, it holds that_ \[|k_{\pm}(x)|\leq\frac{C_{1}}{|x|}, \tag{2.11}\] _and_ \[|k_{\pm}(x)-k_{\pm}(y)|\leq C_{2}\frac{|x-y|}{|x||y|}, \tag{2.12}\] _for all \(x,y\in\mathbb{R}^{2}\setminus\{0\}\), for some universal constants \(C_{1},C_{2}>0\)._ Proof.: Because the bounds on \(k_{+}\) above are classical, we will only focus on the proof of these bounds on \(k_{-}\), which we now split into two cases: _Case \(|x|\leq 1\)._ Here, we employ the observation that \(k_{-}(x)\) can be seen as a perturbation of \(|x|^{-1}\), for finite values of \(x\neq 0\). More precisely, we write, in view of (2.4) and (2.5), for any \(r>0\), that \[-K_{1}(r)=K_{0}^{\prime}(r)=-\frac{1}{r}+\frac{r}{2}\sum_{m=1}^{\infty}m\frac{\left(\frac{r}{2}\right)^{2(m-1)}}{(m!)^{2}}\left(\sum_{i=1}^{m-1}\frac{1}{i}-\gamma-\log\left(\frac{r}{2}\right)\right)\!,\] where \(\gamma\) denotes Euler's constant.
Therefore, noticing, for any \(r\in(0,1]\), that \[|rK_{0}^{\prime}(r)|\lesssim 1,\] we deduce that (2.11) holds for all \(x\in\mathbb{R}^{2}\setminus\{0\}\) with \(|x|\leq 1\). _Case \(|x|>1\)._ Now, we utilize the integral representation of \(K_{1}\) (see [2, Identity 9.6.23]) to write, for any \(r>0\), that \[K_{1}(r)=\frac{\pi^{\frac{1}{2}}}{2\Gamma\left(\frac{3}{2}\right)}r\int_{1}^{ \infty}e^{-rt}(t^{2}-1)^{\frac{1}{2}}dt. \tag{2.13}\] In view of that, we claim that \(K_{1}\) has an exponential decay as \(r\to\infty\). More generally, for a later use, we will now show that the sequence of functions \[r\mapsto R_{n}(r)\stackrel{{\mathrm{def}}}{{=}}\int_{1}^{\infty}t^{ n}e^{-rt}(t^{2}-1)^{\frac{1}{2}}dt, \tag{2.14}\] has an exponential decay at infinity, for any \(n\in\mathbb{N}\). That is, we now claim that \[|R_{n}(r)|\lesssim_{n}e^{-r}, \tag{2.15}\] for any \(r\geq 1\). The justification of the preceding bound is achieved by recurrence and simple integration by parts. To see that, we first write, for any \(n\in\mathbb{N}\) and all \(r\geq 1\), that \[R_{n}(r) =\int_{1}^{2}t^{n}e^{-rt}(t^{2}-1)^{\frac{1}{2}}dt+\int_{2}^{ \infty}t^{n}e^{-rt}(t^{2}-1)^{\frac{1}{2}}dt\] \[\leq 2^{n+1}\int_{1}^{2}e^{-rt}dt+\int_{2}^{\infty}t^{n}e^{-rt}(t ^{2}-1)^{\frac{1}{2}}dt\] \[\leq 2^{n+1}e^{-r}+\int_{2}^{\infty}t^{n}e^{-rt}(t^{2}-1)^{ \frac{1}{2}}dt.\] Thus, by integration by parts and using the elementary inequality \[t\leq\sqrt{2}\sqrt{t^{2}-1},\] which is valid for all \(t\geq 2\), we find that \[R_{n}(r) \leq 2^{n+2}e^{-r}+\frac{n+2}{r}\int_{2}^{\infty}t^{n-1}e^{-rt}(t^ {2}-1)^{\frac{1}{2}}dt\] \[\leq 2^{n+2}e^{-r}+(n+2)R_{n-1}(r).\] Therefore, it is readily seen that (2.15) follows by induction as soon as we show, for all \(r\geq 1\), that \[\int_{2}^{\infty}e^{-rt}(t^{2}-1)^{\frac{1}{2}}dt\lesssim e^{-r}.\] To that end, we perform one more integration by parts to obtain, for all \(r\geq 1\), that \[\int_{2}^{\infty}e^{-rt}(t^{2}-1)^{\frac{1}{2}}dt =\frac{\sqrt{3}}{r}e^{-r}+\frac{1}{r}\int_{2}^{\infty}e^{-rt}(t^{ 2}-1)^{-\frac{1}{2}}dt\] \[\leq\frac{\sqrt{3}}{r}e^{-r}+\frac{1}{\sqrt{3}r}\int_{2}^{\infty }e^{-rt}dt,\] which clearly leads to the desired bound. At last, we deduce that \[|k_{-}(x)|\lesssim\frac{1}{|x|},\] for any \(x\in\mathbb{R}^{2}\) with \(|x|\geq 1\), thereby concluding the proof of (2.11). Now, in order for us to establish (2.12), we proceed again in two steps. Note that w we can assume that \(|y|\leq|x|\) without any loss of generality. _Case \(|y|\leq|x|\leq 2|y|\)._ By employing the integral representation (2.13), we obtain that \[|k_{-}(x)-k_{-}(y)|\lesssim|x-y|R_{0}(|x|)+|y|\big{|}R_{0}(|x|)-R_{0}(|y|)\big{|},\] where \((R_{n})_{n\in\mathbb{N}}\) is defined in (2.14). On the one hand, due to the decay estimate (2.15) and the assumption \(|x|\geq|y|\), it follows that \[|x-y|R_{0}(|x|)\lesssim\frac{|x-y|}{|x||y|},\] as soon as \(|x|\geq|y|>0\). 
On the other hand, writing, in view of Taylor's expansion, that \[|R_{0}(|x|)-R_{0}(|y|)|\lesssim\big{|}|x|-|y|\big{|}\int_{0}^{1}\int_{1}^{ \infty}te^{-t\big{(}s|x|+(1-s)|y|\big{)}}\sqrt{t^{2}-1}dtds\] and exploiting the assumption \(|x|\geq|y|\), again, we obtain that \[|R_{0}(|x|)-R_{0}(|y|)| \lesssim\left||x|-|y|\right|\int_{1}^{\infty}te^{-t|y|}\sqrt{t^{2}- 1}dt\] \[=\left||x|-|y|\right|R_{1}(|y|).\] Therefore, by further employing the decay estimate (2.15) and the assumption \(|x|\leq 2|y|\), we arrive at the bound \[|y||R_{0}(|x|)-R_{0}(|y|)| \lesssim\frac{|x-y|}{|y|^{2}}\] \[\lesssim\frac{|x-y|}{|x||y|},\] thereby showing the validity of (2.12) whenever \(x,y\in\mathbb{R}^{2}\) with \(|y|\leq|x|\leq 2|y|\). _Case \(|y|<2|y|\leq|x|\)._ We first proceed in a similar way to the previous case by writing \[|k_{-}(x)-k_{-}(y)|\lesssim|x-y|R_{0}(|x|)+|y|\Big{(}R_{0}(|x|)+R_{0}(|y|) \Big{)}.\] Hence, by virtue of the decay estimate (2.15), we obtain, as long as \(|y|\leq|x|\), that \[|k_{-}(x)-k_{-}(y)| \lesssim\frac{|x-y|}{|x|^{2}}+\frac{|y|}{|x|^{2}}+\frac{1}{|y|}\] \[\lesssim\frac{|x-y|}{|x||y|}+\frac{1}{|y|}.\] At last, further employing the assumption \(2|y|\leq|x|\) entails that \[\frac{1}{|y|} \leq 2\left(\frac{1}{|y|}-\frac{1}{|x|}\right)\] \[\leq 2\frac{|x-y|}{|x||y|},\] thereby yielding the desired control (2.12) in the case where \(2|y|\leq|x|\), as well. All in all, combining the foregoing estimates leads to a complete justification of (2.12) and concludes the proof of the lemma. As a consequence of the preceding lemma, we now state a lemma which will be essential in the proof of the uniqueness in Theorem 1.1. The proof of the next lemma can be justified by a direct application of [16, Corollary 2.4] and [16, Remark 2.5] that only require suitable quantitative bounds on the kernels \(k_{\pm}\), which we already established in Lemma 2.2, above. Here and in what follows, the real function \(\ell\) is defined as follows \[\ell:[0,\infty)\to[0,1]\] where we set \[\ell(r)\stackrel{{\rm def}}{{=}}\left\{\begin{array}{ll}0,& \mbox{for }r=0,\\ r\log\left(\frac{e}{r}\right),&\mbox{if }r\in(0,1]\\ 1,&\mbox{elsewhere}.\end{array}\right. \tag{2.16}\] **Lemma 2.3**.: _For any \(f\in L^{1}\cap L^{\infty}(\mathbb{R}^{2})\), it holds that_ \[\|K_{\pm}f\|_{L^{\infty}}\lesssim\|f\|_{L^{1}\cap L^{\infty}}\,,\] _where \(K_{\pm}\) are the convolution operator associated with the kernels \(k_{\pm}\) defined in (2.10). Moreover, we have that_ \[\int_{\mathbb{R}^{2}}|k_{\pm}(x-z)-k_{\pm}(y-z)|\,|f(z)|dz\lesssim\|f\|_{L^{1} \cap L^{\infty}}\,\ell(|x-y|),\] _for any \((x,y)\in\mathbb{R}^{2}\times\mathbb{R}^{2}\) with \(x\neq y\)._ ### Holder-continuity of some singular operators For convenience, we state here several results on continuity properties of a specific type of singular operators. In particular, the ensuing bounds below will be employed later on in the regularity analysis of contour dynamics equation associated with time-periodic solutions of (QS2L). **Lemma 2.4** ([40, Lemma 2.6]).: _Let \(\alpha\in(0,1)\) and consider a measurable function \(\mathcal{K}\) defined on \(\mathbb{T}\times\mathbb{T}\setminus\{(\theta,\theta),\theta\in\mathbb{T}\}\) with values in \(\mathbb{C}\). 
Assume further, for all \(\theta\neq\eta\in\mathbb{T}\), that_ \[\left|\mathcal{K}(\theta,\eta)\right|\leq\frac{C_{0}}{\left|\sin\left(\frac{ \theta-\eta}{2}\right)\right|^{\alpha}},\] _and_ \[\left|\mathcal{K}(\theta,\eta)\right|\leq\frac{C_{0}}{\left|\sin\left(\frac{ \theta-\eta}{2}\right)\right|^{1+\alpha}},\] _for some \(C_{0}>0\). Then, the integral operator defined, for any \(f\in L^{\infty}(\mathbb{T})\), by_ \[\theta\mapsto\mathcal{T}f(\theta)\stackrel{{\mathrm{def}}}{{=}} \int_{0}^{2\pi}\mathcal{K}(\theta,\eta)f(\eta)d\eta\] _maps \(L^{\infty}(\mathbb{T})\) into \(C^{\alpha}(\mathbb{T})\). More precisely, it holds that_ \[\left\|\mathcal{T}f\right\|_{C^{\alpha}}\lesssim C_{0}\left\|f\right\|_{L^{ \infty}}.\] Later on, we are going to deal with operators having a singularity in their diagonal that corresponds to the endpoint case \(\alpha=1\) in the preceding lemma. The following lemma will then be useful in these situations. **Lemma 2.5** ([25, Proposition A.2]).: _Let \(\alpha\in(0,1)\) and \(g\) be a \(C^{\alpha}(\mathbb{T})\) function. Further consider a measurable function \(\mathcal{K}\) defined on \(\mathbb{T}\times\mathbb{T}\setminus\{(\theta,\theta),\theta\in\mathbb{T}\}\) with values in \(\mathbb{C}\) and assume, for all \(\theta\neq\eta\in\mathbb{T}\), that_ \[\left|\mathcal{K}(\theta,\eta)\right|\leq\frac{C_{0}}{\left|\sin\left(\frac{ \theta-\eta}{2}\right)\right|},\] _and_ \[\left|\mathcal{K}(\theta,\eta)\right|\leq\frac{C_{0}}{\left|\sin\left(\frac{ \theta-\eta}{2}\right)\right|^{2}},\] _for some \(C_{0}>0\). Then, the integral operator defined, for any \(f\in L^{\infty}(\mathbb{T})\), by_ \[\theta\mapsto\mathcal{T}_{g}f(\theta)\stackrel{{\mathrm{def}}}{{= }}\int_{0}^{2\pi}\mathcal{K}(\theta,\eta)\big{(}g(\theta)-g(\eta)\big{)}f( \eta)d\eta\] _maps \(L^{\infty}(\mathbb{T})\) into \(C^{\alpha}(\mathbb{T})\). More precisely, it holds that_ \[\left\|\mathcal{T}_{g}f\right\|_{C^{\alpha}}\lesssim C_{0}\left\|g\right\|_{C ^{\alpha}}\left\|f\right\|_{L^{\infty}}.\] ### Crandall-Rabinowitz's theorem The proof of Theorem 4.1, as we will show in Section 4, relies on a careful application of the generalized version of _implicit functions_ theorem--known as Crandall-Rabinowitz's Theorem. The precise statement of that theorem reads as follows **Theorem 2.6** (Crandall-Rabinowitz).: _Let \(X\) and \(Y\) be two Banach spaces. Further consider \(V\subset X\) to be a neighborhood of \(0\) and_ \[F:\ \mathbb{R}\times V\to Y\] _to be a function of class \(C^{1}\) with the following properties_ 1. _(Trivial solution) For all_ \(\Omega\in\mathbb{R}\)_, we assume that_ \[F(\Omega,0)=0.\] 2. _(Regularity) Moreover,_ \(F\) _is assumed to be regular in the sense that_ \(\partial_{\Omega}F\)_,_ \(\partial_{x}F\) _and_ \(\partial_{\Omega x}^{2}F\) _exist and are continuous._ 3. _(Fredholm property) Furthermore, we assume that_ \(\ker\left(\partial_{x}F(0,0)\right)\) _is not trivial of dimension one, i.e., there is_ \(x_{0}\in V\) _such that_ \[\ker\left(\partial_{x}F(0,0)\right)=\langle x_{0}\rangle,\] _that_ \(R\left(\partial_{x}F(0,0)\right)\) _is closed in_ \(Y\) _and that_ \(Y\setminus R\left(\partial_{x}F(0,0)\right)\) _is one dimensional._ 4. 
_(Transversality assumption) At last, we assume that_ \(\partial_{\Omega x}^{2}F(0,0)x_{0}\not\in R\left(\partial_{x}F(0,0)\right).\) _Under the foregoing assumptions, for \(\chi\) being the complement of \(\ker\left(\partial_{x}F(0,0)\right)\) in \(X\), there exist a neighborhood \(U\) of \((0,0)\), an interval \((-a,a)\), for some \(a>0\), and continuous functions_ \[\psi:(-a,a)\to\mathbb{R}\quad\text{and}\quad\phi:(-a,a)\to\chi\] _such that_ \[\psi(0)=\phi(0)=0\] _and_ \[F^{-1}(\{0\})\cap U=\Big{\{}(\Omega,0):(\Omega,0)\in U\Big{\}}\cup\Big{\{}\big{(}\psi(\xi),\xi x_{0}+\xi\phi(\xi)\big{)}:\ |\xi|<a\Big{\}}.\]

## 3. Weak and Lagrangian solutions

This section is devoted to the proof of Theorem 1.1. We proceed first by introducing some notation that will be employed in the proof below. For a given \(\delta>0\), we introduce the matrix \[\mathcal{A}_{\delta}\stackrel{{\mathrm{def}}}{{=}}\begin{pmatrix}1&\delta^{-1}\\ 1&-1\end{pmatrix} \tag{3.1}\] and, for a couple of real valued functions \(f=(f_{1},f_{2})\), we define \[\begin{pmatrix}f_{+}\\ f_{-}\end{pmatrix}\stackrel{{\mathrm{def}}}{{=}}\mathcal{A}_{\delta}\cdot\begin{pmatrix}f_{1}\\ f_{2}\end{pmatrix}.\] Note that \[\begin{pmatrix}f_{1}\\ f_{2}\end{pmatrix}=\mathcal{A}_{\delta}^{-1}\cdot\begin{pmatrix}f_{+}\\ f_{-}\end{pmatrix},\] where \(\mathcal{A}_{\delta}^{-1}\) is the inverse of \(\mathcal{A}_{\delta}\), which is given by \[\mathcal{A}_{\delta}^{-1}=\left(1+\delta^{-1}\right)^{-1}\mathcal{A}_{\delta}.\] Moreover, for another couple of real valued functions \(\widetilde{f}=(\widetilde{f}_{1},\widetilde{f}_{2})\), the following elementary inequality can be justified by a direct computation \[\frac{1}{2\max\{1,\delta^{-1}\}}\left(\mathbf{d}(f_{+},\widetilde{f}_{+})+\mathbf{d}(f_{-},\widetilde{f}_{-})\right)\] \[\leq\;\mathbf{d}(f_{1},\widetilde{f}_{1})+\mathbf{d}(f_{2},\widetilde{f}_{2})\] \[\leq\frac{1+\max\{1,\delta^{-1}\}}{1+\delta^{-1}}\left(\mathbf{d}(f_{+},\widetilde{f}_{+})+\mathbf{d}(f_{-},\widetilde{f}_{-})\right),\] where \(\mathbf{d}(A,B)\) denotes the distance between \(A\) and \(B\). For simplicity, we shall use the following version of the preceding inequalities \[C_{\delta}^{-1}\left(\mathbf{d}(f_{+},\widetilde{f}_{+})+\mathbf{d}(f_{-},\widetilde{f}_{-})\right)\leq\;\mathbf{d}(f_{1},\widetilde{f}_{1})+\mathbf{d}(f_{2},\widetilde{f}_{2})\leq C_{\delta}\left(\mathbf{d}(f_{+},\widetilde{f}_{+})+\mathbf{d}(f_{-},\widetilde{f}_{-})\right), \tag{3.2}\] where \(C_{\delta}\geq 1\) is a constant that only depends on \(\delta\).

### Existence of global solutions

Here, we show existence of global solutions of (QS2L), as is claimed in Theorem 1.1.

#### 3.1.1. Existence of weak-distributional solutions

Weak solutions can be constructed by analyzing the convergence of the scheme below, associated with linearized transport equations.
To that end, we assume that we have constructed a velocity \(v_{i}^{n}\), for \(n\in\mathbb{N}\), and then build up the next iteration \((\omega_{i}^{n+1},v_{i}^{n+1})\) by solving the corresponding linearized transport equations, supplemented with the initial data \[\omega_{i}^{n+1}|_{t=0}=\omega_{i}(0),\quad\text{for all}\quad n\in\mathbb{N},\] and the initial iteration \[\begin{pmatrix}v_{1}^{0}\\ v_{2}^{0}\end{pmatrix}\stackrel{{\mathrm{def}}}{{=}}\mathcal{A}_{\delta}^{-1}\cdot\begin{pmatrix}v_{+}^{0}\\ v_{-}^{0}\end{pmatrix}=\mathcal{A}_{\delta}^{-1}\cdot\begin{pmatrix}K_{+}\omega_{+}(0)\\ K_{-}\omega_{-}(0)\end{pmatrix}=\mathcal{A}_{\delta}^{-1}\cdot\begin{pmatrix}K_{+}\left(\mathcal{A}_{\delta}\cdot\begin{pmatrix}\omega_{1}(0)\\ \omega_{2}(0)\end{pmatrix}\right)\\ K_{-}\left(\mathcal{A}_{\delta}\cdot\begin{pmatrix}\omega_{1}(0)\\ \omega_{2}(0)\end{pmatrix}\right)\end{pmatrix}.\] The adequate bounds can be proved by induction and by performing standard energy estimates on transport equations, together with the bounds on the operators \(K_{\pm}\) given by Lemma 2.3. This process establishes the existence of a bounded sequence of solutions \((\omega_{1}^{n},\omega_{2}^{n})_{n\in\mathbb{N}}\) that converges, in the sense of distributions, to a weak solution of the system of equations (QS2L). The compactness of the approximate solutions \((\omega_{1}^{n},\omega_{2}^{n})_{n\in\mathbb{N}}\) and the convergence of the nonlinear terms can be handled by classical techniques. Finally, we emphasize that the resulting limit as \(n\to\infty\) enjoys the following bounds \[\omega_{1},\omega_{2}\in C(\mathbb{R}^{+},L^{p}(\mathbb{R}^{2}))\cap C_{w}(\mathbb{R}^{+},L^{\infty}(\mathbb{R}^{2})),\] \[v_{1},v_{2}\in C(\mathbb{R}^{+},L^{q}(\mathbb{R}^{2}))\cap L^{\infty}(\mathbb{R}^{+},LL(\mathbb{R}^{2})),\] for all \(p\in[1,\infty)\) and \(q\in(2,\infty]\).

#### 3.1.2. Existence of Lagrangian solutions

Now, we show that any weak solution is Lagrangian. To that end, for all \(i\in\{1,2\}\), we first define the flow-maps \(X_{i}\) associated with the velocity field \(v_{i}\) as the unique solution of (ODE). Again, we emphasize that the flow-map \(X_{i}\) is uniquely determined due to the Log-Lipschitz regularity of \(v_{i}\) (see [14, Section 5.2]). Therefore, we define the Lagrangian solution \(\Omega_{i}\) of (QS2L) by the push forward \[\Omega_{i}\stackrel{{\mathrm{def}}}{{=}}\omega_{i,0}\left(X_{i}^{-1}(t,\cdot)\right),\quad\text{for all}\quad t\geq 0.\] Observe that \(\Omega_{i}\) satisfies the same transport equation as \(\omega_{i}\) (with velocity \(v_{i}\)), i.e., \[\partial_{t}\Omega_{i}+v_{i}\cdot\nabla\Omega_{i}=0.\] Hence, by classical results on transport equations (see for instance [19]), we deduce that \(\Omega_{i}=\omega_{i}\), thereby establishing that \(\omega_{i}\) is a Lagrangian solution.

### Stability of the flow-maps and uniqueness

The proof of the uniqueness follows the idea from [16] and will be achieved by establishing a stability result for the flow-maps associated with two different solutions. In the sequel, \(C_{0}>0\) denotes a constant that depends only on the \(L^{1}\cap L^{\infty}(\mathbb{R}^{2})\) norm of the initial data and is allowed to differ from one line to the next. Let \((v_{i},\omega_{i})\) and \((\widetilde{v}_{i},\widetilde{\omega}_{i})\) be two Lagrangian solutions to (QS2L) with the same initial data \(\omega_{i,0}\) belonging to \(L^{1}\cap L^{\infty}(\mathbb{R}^{2})\).
We denote by \(X_{i}\) and \(\widetilde{X}_{i}\) the unique flows associated to each solution, for \(i\in\{1,2\}\). Further introduce \[\begin{pmatrix}X_{+}\\ X_{-}\end{pmatrix}\stackrel{{\mathrm{def}}}{{=}}\mathcal{A}_{ \delta}\cdot\begin{pmatrix}X_{1}\\ X_{2}\end{pmatrix},\qquad\begin{pmatrix}\widetilde{X}_{+}\\ \widetilde{X}_{-}\end{pmatrix}\stackrel{{\mathrm{def}}}{{=}} \mathcal{A}_{\delta}\cdot\begin{pmatrix}\widetilde{X}_{1}\\ \widetilde{X}_{2}\end{pmatrix},\] where \(A_{\delta}\) is the matrix defined in (3.1). Then, in view of the equation (ODE) satisfied by each flow-map, we compute that \[X_{+}(t)-\widetilde{X}_{+}(t) =\int_{0}^{t}\left(v_{1}(X_{1}(s))+\delta^{-1}v_{2}(X_{2}(s))- \widetilde{v}_{1}(\widetilde{X}_{1}(s))-\delta^{-1}\widetilde{v}_{2}( \widetilde{X}_{2}(s))\right)ds\] \[=\int_{0}^{t}\left(v_{1}(X_{1}(s))-v_{1}(\widetilde{X}_{1}(s))+( v_{1}-\widetilde{v}_{1})(\widetilde{X}_{1}(s))\right)ds\] \[\quad+\delta^{-1}\int_{0}^{t}\left(v_{2}(X_{2}(s))-v_{2}( \widetilde{X}_{2}(s))+(v_{2}-\widetilde{v}_{2})(\widetilde{X}_{2}(s))\right)ds,\] for all \(t\geq 0\). Therefore, by further introducing the fluctuation \[X\stackrel{{\mathrm{def}}}{{=}}|X_{1}-\widetilde{X}_{1}|+|X_{2}- \widetilde{X}_{2}|,\] and owing to the fact that \(v_{1}\) and \(v_{2}\) are Log-Lipschitz, altogether with (3.2), we infer that \[\big{|}X_{+}(t)-\widetilde{X}_{+}(t)\big{|}\lesssim_{\delta}C_{0} \int_{0}^{t}\ell(|X_{1}-\widetilde{X}_{1}(s)|)+\ell(|X_{2}-\widetilde{X}_{2}( s)|)ds\] \[\qquad+\int_{0}^{t}\big{|}(v_{+}-\widetilde{v}_{+})(\widetilde{X} _{1}(s))\big{|}+\int_{0}^{t}\big{|}(v_{-}-\widetilde{v}_{-})(\widetilde{X}_{1 }(s))\big{|}ds\] \[\lesssim_{\delta}C_{0}\int_{0}^{t}\ell(X(s))ds\] \[\quad+\sum_{i=1}^{2}\left(\int_{0}^{t}\big{|}(v_{+}-\widetilde{v}_ {+})(\widetilde{X}_{i}(s))\big{|}ds+\int_{0}^{t}\big{|}(v_{-}-\widetilde{v}_{- })(\widetilde{X}_{i}(s))\big{|}ds\right),\] where the function \(\ell\) is defined in (2.16). 
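For orientation, and as an aside (the estimates resume immediately below), the role of the concave modulus \(\ell\) can be illustrated numerically. The sketch below uses the model modulus \(\ell(x)=x(1-\log x)\) for \(0<x\leq 1\), an assumption made only for this illustration (the actual \(\ell\) is the one defined in (2.16)); it shows that solutions of \(\rho^{\prime}=C\,\ell(\rho)\) shrink to zero together with their initial value, which is the mechanism behind Osgood's lemma invoked at the end of this section.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model log-Lipschitz modulus (illustration only; the actual ell is defined in (2.16)).
def ell(x):
    return x * (1.0 - np.log(x)) if x > 0 else 0.0

C, T = 1.0, 1.0

def closed_form(t, rho0):
    # Solution of rho' = C*ell(rho), rho(0) = rho0, for 0 < rho0 <= 1:
    # rho(t) = exp(1 - (1 - log rho0) * exp(-C t)).
    return np.exp(1.0 - (1.0 - np.log(rho0)) * np.exp(-C * t))

for rho0 in (1e-2, 1e-6, 1e-12):
    sol = solve_ivp(lambda t, y: [C * ell(y[0])], (0.0, T), [rho0], rtol=1e-10, atol=1e-14)
    print(rho0, sol.y[0, -1], closed_form(T, rho0))
# The value at time T shrinks to 0 together with rho0: two flow-maps starting from the
# same data cannot separate, which is the content of Osgood's lemma used below.
```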
Moreover, in the same way, we obtain that \[\big{|}X_{-}(t)-\widetilde{X}_{-}(t)\big{|}\lesssim_{\delta}C_{0 }\int_{0}^{t}\ell(X(s))ds\] \[\qquad+\sum_{i=1}^{2}\left(\int_{0}^{t}\big{|}(v_{+}-\widetilde{v }_{+})(\widetilde{X}_{i}(s))\big{|}ds+\int_{0}^{t}\big{|}(v_{-}-\widetilde{v}_ {-})(\widetilde{X}_{i}(s))\big{|}ds\right).\] Hence, utilizing (3.2), again, entails that \[X(t)\lesssim_{\delta}C_{0}\int_{0}^{t}\ell(X(s))ds+\sum_{i=1}^{2}\left(\int_{ 0}^{t}\big{|}(v_{+}-\widetilde{v}_{+})(\widetilde{X}_{i}(s))\big{|}ds+\int_{0} ^{t}\big{|}(v_{-}-\widetilde{v}_{-})(\widetilde{X}_{i}(s))\big{|}ds\right).\] On the other hand, observing, for all \(i\in\{1,2\}\), that \[\big{|}(v_{+}-\widetilde{v}_{+})(\widetilde{X}_{i})\big{|} =\big{|}(K_{+}\omega_{+}(\widetilde{X}_{i})-K_{+}\widetilde{\omega}_ {+}(\widetilde{X}_{i})\big{|}\] \[\lesssim_{\delta}\sum_{j=1}^{2}\big{|}(K_{+}\omega_{j}(\widetilde{ X}_{i})-K_{+}\widetilde{\omega}_{j}(\widetilde{X}_{i})\big{|},\] and exploiting the definition of \(K_{+}\) allows us to write, for all \(i,j\in\{1,2\}\), that \[\big{|}(K_{+}\omega_{j}(\widetilde{X}_{i})-K_{+}\widetilde{\omega }_{j}(\widetilde{X}_{i})\big{|} =\big{|}\int_{\mathbb{R}^{2}}k_{+}(\widetilde{X}_{i}(s,x),y) \omega_{j}(y)dy-k_{+}(\widetilde{X}_{i}(s,x),y)\widetilde{\omega}_{j}(y)dy\Big{|}\] \[=\Big{|}\int_{\mathbb{R}^{2}}k_{+}(\widetilde{X}_{i}(s,x),X_{j}(s,y))\omega_{j,0}(y)\] \[\qquad\qquad-k_{+}(\widetilde{X}_{i}(s,x),\widetilde{X}_{j}(s,y) )\omega_{0,j}(y)dy\Big{|}\] \[\leq\int_{\mathbb{R}^{2}}\Big{|}k_{+}(\widetilde{X}_{i}(s,x),X_{j }(s,y))\] \[\qquad\qquad-k_{+}(\widetilde{X}_{i}(s,x),\widetilde{X}_{j}(s,y) )\Big{|}\Big{|}\omega_{0,j}(y)\Big{|}dy.\] Consequently, we find that \[\big{|}(v_{+}-\widetilde{v}_{+})(\widetilde{X}_{i})\big{|} \lesssim_{\delta}\sum_{j=1}^{2}\int_{\mathbb{R}^{2}}\Big{|}k_{+}( \widetilde{X}_{i}(s,x),X_{j}(s,y))-k_{+}(\widetilde{X}_{i}(s,x),\widetilde{X}_ {j}(s,y))\Big{|}\Big{|}\omega_{0,j}(y)\Big{|}dy.\] Likewise, we obtain for the minus parts \[\big{|}(v_{-}-\widetilde{v}_{-})(\widetilde{X}_{i})\big{|} =\big{|}(K_{-}\omega_{-}(\widetilde{X}_{i})-K_{-}\widetilde{\omega }_{-}(\widetilde{X}_{i})\big{|}\] \[\lesssim_{\delta}\sum_{j=1}^{2}\big{|}(K_{-}\omega_{j}(\widetilde {X}_{i})-K_{-}\widetilde{\omega}_{j}(\widetilde{X}_{i})\big{|}\] \[\lesssim_{\delta}\sum_{j=1}^{2}\int_{\mathbb{R}^{2}}\Big{|}k_{-} (\widetilde{X}_{i}(s,x),X_{j}(s,y))-k_{-}(\widetilde{X}_{i}(s,x),\widetilde{X} _{j}(s,y))\Big{|}\Big{|}\omega_{0,j}(y)\Big{|}dy.\] All in all, gathering the foregoing estimates yields \[\begin{split} X(t)\lesssim_{\delta}C_{0}\int_{0}^{t}\ell(X(s))ds \\ +\sum_{\stackrel{{ i,j=1}}{{m=\pm}}}^{2}\int_{0}^{t} \int_{\mathbb{R}^{2}}\Big{|}k_{m}(\widetilde{X}_{i}(s,x),X_{j}(s,y))-k_{m}( \widetilde{X}_{i}(s,x),\widetilde{X}_{j}(s,y))\Big{|}\Big{|}\omega_{0,j}(y) \Big{|}dyds.\end{split} \tag{3.3}\] Onwards, we omit the subscript \(\delta\) and we keep in mind that the constants in the estimates below depend on \(\delta\), as well. Let \(\alpha>0\) be a function in \(L^{1}\cap L^{\infty}(\mathbb{R}^{2})\). 
Accordingly, we define the density \[\eta\stackrel{{\rm def}}{{=}}\sum_{i=1}^{2}|\omega_{i,0}|+\alpha\] and its push-forward by the mapping \(\widetilde{X}_{j}(s,\cdot)\) \[\eta_{j}(s,\cdot)\stackrel{{\rm def}}{{=}}\eta(\widetilde{X}_{j} ^{-1}(s,\cdot)).\] Then, we proceed by integrating (3.3) against to the measure \(\eta(x)dx\) and utilizing Fubini's theorem to obtain that \[\int_{\mathbb{R}^{2}}X(t,x)\eta(x)dx\lesssim C_{0}\int_{0}^{t}\int_{ \mathbb{R}^{2}}\ell\big{(}X(s,x)\big{)}\eta(x)dxds\\ +\sum_{\begin{subarray}{c}i,j=1\\ m=\pm\end{subarray}}^{2}\int_{0}^{t}\int_{\mathbb{R}^{4}}\Big{|}k_{m}( \widetilde{X}_{i}(s,x),X_{j}(s,y))-k_{m}(\widetilde{X}_{i}(s,x),\widetilde{X} _{j}(s,y))\Big{|}\Big{|}\omega_{0,j}(y)\Big{|}dy\eta(x)dxds\\ =C_{0}\int_{0}^{t}\int_{\mathbb{R}^{2}}\ell\big{(}X(s,x)\big{)} \eta(x)dxds\\ +\sum_{\begin{subarray}{c}i,j=1\\ m=\pm\end{subarray}}^{2}\int_{0}^{t}\int_{\mathbb{R}^{2}}\Big{|}\omega_{0,j} (y)\Big{|}\int_{\mathbb{R}^{2}}\Big{|}k_{m}(\widetilde{X}_{i}(s,x),X_{j}(s,y ))-k_{m}(\widetilde{X}_{i}(s,x),\widetilde{X}_{j}(s,y))\Big{|}\eta(x)dxdyds.\] Now, owing to the fact that \(\widetilde{X}_{i}(s,\cdot)\) preserves volumes, for any \(i\in\{1,2\}\) and \(s\geq 0\), and appealing to Lemma 2.3 entails, for any \(i,j\in\{1,2\}\) and \(m=\pm\), that \[\int_{\mathbb{R}^{2}}\Big{|}k_{\pm}(\widetilde{X}_{i}(s,x),X_{j}( s,y)) -k_{\pm}(\widetilde{X}_{i}(s,x),\widetilde{X}_{j}(s,y))\Big{|} \eta(x)dx\] \[=\int_{\mathbb{R}^{2}}\Big{|}k_{\pm}(x,X_{j}(s,y))-k_{\pm}(x, \widetilde{X}_{j}(s,y))\Big{|}\eta_{j}(s,x)dx\] \[\lesssim\|\eta_{j}(s,\cdot)\|_{L^{1}\cap L^{\infty}}\ell\big{(}| X_{j}(s,y)-\widetilde{X}_{j}(s,y)|\big{)}\] \[\lesssim C_{0}\ell\big{(}X(s,y)\big{)}.\] Therefore, it follows that \[\int_{\mathbb{R}^{2}}X(t,x)\eta(x)dx\lesssim C_{0}\int_{0}^{t}\int _{\mathbb{R}^{2}}\ell(X(s,x))\eta(x)dxds\\ +C_{0}\sum_{j=1}^{2}\int_{0}^{t}\int_{\mathbb{R}^{2}}\Big{|} \omega_{0,j}(y)\Big{|}\ell(X(s,y))dyds\\ \lesssim C_{0}\int_{0}^{t}\int_{\mathbb{R}^{2}}\ell(X(s,x))\eta(x )dxds.\] At last, since the function \(\ell\) is concave and \(\eta\in L^{1}(\mathbb{R}^{2})\), we end up with \[\int_{\mathbb{R}^{2}}X(t,x)\eta(x)dx\lesssim C_{0}\int_{0}^{t}\ell\left(\int_{ \mathbb{R}^{2}}X(s,x)\eta(x)dx\right)ds.\] The uniqueness of the flow-maps follows due to Osgood's Lemma, see [5, Lemma 3.4], for instance. Consequently, we deduce that \(\omega_{i}\equiv\widetilde{\omega}_{i}\), thereby showing the uniqueness of the solution and completing the proof of Theorem 1.1. ## 4. Uniformly rotating solutions This section is devoted to the proof of existence of rotating \(m\)-fold vortex patches (relative) equilibriums for the Surface Quasi-Geostrophic model with two boundaries given by (QS2L). The statement of that is given by Theorem 1.3 which we recast below in a more precise format. To that end, allow us first to set up few notations that will be constantly used throughout this section. 
For a given integer \(m\in\mathbb{N}^{*}\), and real numbers \(0<b_{2}\leq b_{1}<\infty\), we introduce the set \[S_{m,b_{1}}\stackrel{{\rm def}}{{=}}\big{\{}s\in(0,b_{1}]:\exists n \in\mathbb{N}^{*},\ \Omega_{m}^{-}(s,b_{1})=\Omega_{n}^{+}(s,b_{1})\big{\}}\] where \[\Omega_{n}^{\pm}(b_{1},b_{2})\stackrel{{\rm def}}{{=}}\frac{1}{2( \delta+1)}\left(-(A_{b_{1},b_{2},n}+B_{b_{1},b_{2},n})\pm\sqrt{(A_{b_{1},b_{2},n} -B_{b_{1},b_{2},n})^{2}+4\delta\gamma_{b_{1},b_{2},n}^{2}}\right),\] where, we set \[A_{b_{1},b_{2},n} \stackrel{{\rm def}}{{=}}(\delta+1)V_{b_{1},b_{2}}+\frac {\delta}{2n}+I_{n}(b_{1}\mu)K_{n}(b_{1}\mu)\] \[B_{b_{1},b_{2},n} \stackrel{{\rm def}}{{=}}(\delta+1)W_{b_{1},b_{2}}+ \frac{1}{2n}+\delta I_{n}(b_{2}\mu)K_{n}(b_{2}\mu),\] \[\gamma_{b_{1},b_{2},n} \stackrel{{\rm def}}{{=}}\ \frac{b^{n}}{2n}-I_{n}(b_{2}\mu)K_{n}(b_{1}\mu),\] with \(b\stackrel{{\rm def}}{{=}}\frac{b_{2}}{b_{1}}\) and \[V_{b_{1},b_{2}} \stackrel{{\rm def}}{{=}}-\frac{\delta+b^{2}}{2(1+ \delta)}-\frac{1}{1+\delta}\Big{(}I_{1}(b_{1}\mu)K_{1}(b_{1}\mu)-bI_{1}(b_{2} \mu)K_{1}(b_{1}\mu)\Big{)},\] \[W_{b_{1},b_{2}} \stackrel{{\rm def}}{{=}}-\frac{1}{2}-\frac{\delta }{1+\delta}\Big{(}I_{1}(b_{2}\mu)K_{1}(b_{2}\mu)-\frac{1}{b}I_{1}(b_{2}\mu)K_ {1}(b_{1}\mu)\Big{)}. \tag{4.1}\] Above and in what follows, we denote \[\mu\stackrel{{\rm def}}{{=}}\lambda\sqrt{1+\delta},\] where \(\Lambda,\delta>0\) are the constants appearing in the system of equations (QS2L), whereas we recall that \(I_{n}\) and \(K_{n}\), for \(n\in\mathbb{N}^{*}\), stand for the modified Bessel functions previously introduced in Section 2.1. For a mere simplification of notation, we drop the dependence of \(\Omega_{n}^{\pm}(b_{1},b_{2})\), as well as \(A_{b_{1},b_{2},n}\) and \(B_{b_{1},b_{2},n}\), on \(b_{1}\) and \(b_{2}\) even though this is crucial and will be utilized in our proofs, later on. Theorem 1.3 is a consequence of the following slightly more precise theorem. **Theorem 4.1** (Periodic solutions bifurcating from simple eigenvalues).: _Let \(m\in\mathbb{N}^{*}\) and \(\lambda>0\). Further fix two radii \(0<b_{2}\leq b_{1}\) and assume that \(\delta\geq\frac{b_{2}}{b_{1}}\). Then, the set \(S_{m,b_{1}}\) defined above contains at most a finite number of elements. Moreover, the following holds:_ * _Assuming that_ \(b_{2}\) _does not belong to_ \(S_{m,b_{1}}\)_, there exist two curves of_ \(m\)_-fold symmetric pairs of simply connected V-states, for any_ \(m\in\mathbb{N}^{*}\)_, bifurcating from the steady states_ \[\omega_{1}=\mathds{1}_{\{x\in\mathbb{R}^{2}:|x|<b_{1}\}}\quad\text{and}\quad \omega_{2}=\mathds{1}_{\{x\in\mathbb{R}^{2}:|x|<b_{2}\}}\] _at each of the angular velocities_ \(\Omega=\Omega_{n}^{\pm}\) _defined above._ * _Assuming_ \(b_{1}\neq b_{2}\)_, there is_ \(m_{0}\in\mathbb{N}^{*}\) _such that_ \(S_{m,b_{1}}\) _is empty for any_ \(m\geq m_{0}\) _and for which there exist two curves of_ \(m\)_-fold symmetric pairs of simply connected V-states satisfying the same properties as in the preceding case._ _Remark 4.1_.: In the case of identical initial discs, we will show in the proof of Proposition 4.8 below that there are values \(b_{1}=b_{2}\in(0,\infty)\) for which spectral collisions can happen, i.e. situations where the angular velocities satisfy \(\Omega_{m}^{-}=\Omega_{n}^{+}\), for some \(n,m\in\mathbb{N}^{*}\). In that particular case, we are not able to directly apply Crandall-Rabinowitz's Theorem 2.6 to prove existence of time-periodic solutions bifurcating from the same initial disc. 
This also shows that the set \(S_{m,b_{1}}\) is not empty in general. _Remark 4.2_.: Note that Theorem 4.1 covers Theorem 1.3. Moreover, Theorem 4.1 emphasizes that \(\delta\) is allowed to take values in \([b^{2},1]\), as well. This condition can probably be further relaxed so that \(\delta\) would take values in the whole half-line \((0,\infty)\), though a justification of that is not provided by the proof below. Indeed, it is to be emphasized later on that the sequences of angular velocities \((\Omega_{n}^{\pm})_{n\in\mathbb{N}^{*}}\) are non-decreasing for any value of \(\delta\) in \((0,\infty)\), which means that spectral collisions cannot occur within the same sequence \((\Omega_{n}^{+})_{n\in\mathbb{N}^{*}}\) or \((\Omega_{n}^{-})_{n\in\mathbb{N}^{*}}\). Furthermore, under the assumption that \(\delta\geq\frac{b_{1}}{b_{2}}\) with \(b_{1}\neq b_{2}\), it is shown in the proof of Proposition 4.7 below that these sequences have different limits at infinity. This allows for avoiding spectral collisions between the sequences \((\Omega_{n}^{+})_{n\in\mathbb{N}^{*}}\) and \((\Omega_{n}^{-})_{n\in\mathbb{N}^{*}}\) for large symmetries. However, because of the implicit expression of the Bessel functions, we were not able to rigorously justify that fact in the parameter range \(\delta\in(0,\frac{b_{1}}{b_{2}})\), even though several numerical simulations, which we do not include in this paper, suggest that Theorem 4.1 holds for any value of \(\delta>0\). The proof of Theorem 4.1 is carried out in several steps and is detailed in the forthcoming sections. In particular, our strategy consists in establishing all the prerequisites of Crandall-Rabinowitz's Theorem 2.6, which, subsequently, will grant us the results of Theorem 4.1. ### Contour dynamics equation Here, we set up the contour dynamics governing the motion of vortex patches. Onwards, we identify \(\mathbb{C}\) with \(\mathbb{R}^{2}\), where \(\mathbb{C}\) is naturally endowed with the Euclidean scalar product which reads, for \(z_{1}=x_{1}+\mathrm{i}y_{1}\) and \(z_{2}=x_{2}+\mathrm{i}y_{2}\), as \[z_{1}\cdot z_{2}\stackrel{{\mathrm{def}}}{{=}}\mathrm{Re}( \overline{z_{1}}z_{2})=\tfrac{1}{2}\left(\overline{z_{1}}z_{2}+z_{1}\overline{ z_{2}}\right)=x_{1}x_{2}+y_{1}y_{2}.\] Let \(D_{1}\) and \(D_{2}\) be two simply connected domains, close to the discs of radii \(b_{1}\) and \(b_{2}\), respectively. Further, consider the polar parametrizations of the associated boundaries: for \(k\in\{1,2\}\), \[z_{k}:\ \mathbb{T}\to\partial D_{k},\qquad\theta\mapsto R_{k}(\theta)e^{ \mathrm{i}\theta}\stackrel{{\mathrm{def}}}{{=}}\sqrt{b_{k}^{2}+2r_{k}( \theta)}\,e^{\mathrm{i}\theta}, \tag{4.2}\] for some functions \(r_{1}(\theta)\) and \(r_{2}(\theta)\). Accordingly, we introduce the initial vorticities \[(\omega_{1},\omega_{2})|_{t=0}=\big{(}\mathbf{1}_{D_{1}},\mathbf{1}_{D_{2}} \big{)}\] and assume that \(\omega_{1}\) and \(\omega_{2}\) give rise to two rotating patch solutions of (QS2L) about the origin, with an angular velocity \(\Omega\in\mathbb{R}\). The vortex patch equation (1.1) provides a system of coupled nonlinear and nonlocal PDEs satisfied by the radial deformations \(r_{1}\) and \(r_{2}\). The precise statement is the content of the next lemma. 
**Lemma 4.2**.: _The radial deformations \(r\stackrel{{\mathrm{def}}}{{=}}(r_{1},r_{2})\) defined through (4.2) satisfy the nonlinear coupled system_ \[\mathcal{F}(\Omega,r)\stackrel{{\mathrm{def}}}{{=}}\big{(} \mathcal{F}_{1}(\Omega,r),\mathcal{F}_{2}(\Omega,r)\big{)}=0, \tag{4.3}\] _where we set, for all \(k,j\in\{1,2\}\) and \(\theta\in\mathbb{T}\), that_ \[\mathcal{F}_{k}(\Omega,r)(\theta)\stackrel{{\mathrm{def}}}{{=}} \Omega\partial_{\theta}r_{k}(\theta)+\mathcal{F}_{k,1}(r)(\theta)+\mathcal{F}_{ k,2}(r)(\theta), \tag{4.4}\] _and_ \[\mathcal{F}_{k,j}(r)(\theta)\stackrel{{\mathrm{def}}}{{=}}\int_{0 }^{2\pi}G_{k,j}\big{(}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{j}(\eta)e^{\mathrm{ i}\eta}\big{)}\partial_{\theta\eta}^{2}\Big{(}R_{k}(\theta)R_{j}(\eta)\sin(\eta- \theta)\Big{)}d\eta, \tag{4.5}\] _where the kernels \(G_{k,j}\) are introduced in (2.9)._ _Moreover, the Rankine vorticities, corresponding to \(r\equiv 0\), are stationary solutions, for any radii \(b_{1},b_{2}>0\). More precisely, one has that_ \[\mathcal{F}(\Omega,0)=0,\] _for all \(\Omega\in\mathbb{R}\)._ Proof.: Note first that the normal inward vector to the boundary \(\partial D_{k}\) of the patch at the point \(z_{k}(\theta)\) is given by \[n_{k}(z_{k}(\theta))=\mathrm{i}\partial_{\theta}z_{k}(\theta),\] for any \(k\in\{1,2\}\). According to (1.1), the initial datum \(\omega_{k}|_{t=0}=\mathbf{1}_{D_{k}}\) generates a rotating patch about the origin with uniform angular velocity \(\Omega\in\mathbb{R}\) if and only if it holds, for all \(\theta\in\mathbb{T}\), that \[\Omega\,\mathrm{Re}\Big{\{}\overline{z_{k}(\theta)}\partial_{\theta}z_{k}( \theta)\Big{\}}=2\mathrm{Re}\Big{\{}\partial_{\overline{z}}\psi_{k}(z_{k}( \theta))\partial_{\theta}\overline{z_{k}(\theta)}\Big{\}}. \tag{4.6}\] Therefore, observing from (4.2) that \[\Omega\partial_{\theta}r_{k}(\theta)=\Omega\,\mathrm{Re}\Big{\{}\overline{z_{ k}(\theta)}\partial_{\theta}z_{k}(\theta)\Big{\}},\] entails, for all \(k\in\{1,2\}\) and \(\theta\in\mathbb{T}\), that \[\Omega\partial_{\theta}r_{k}(\theta)=2\mathrm{Re}\Big{\{}\partial_{\overline{ z}}\psi_{k}(z_{k}(\theta))\partial_{\theta}\overline{z_{k}(\theta)}\Big{\}}. \tag{4.7}\] Hence, writing, by the chain rule, that \[2\mathrm{Re}\Big{\{}\partial_{\overline{z}}\psi_{k}(z_{k}(\theta))\partial_{ \theta}\overline{z_{k}(\theta)}\Big{\}}=\partial_{\theta}\big{(}\psi_{k}(t,z_{k }(\theta))\big{)}, \tag{4.8}\] leads, for all \(k\in\{1,2\}\) and \(\theta\in\mathbb{T}\), to the equivalence reformulation \[\Omega\partial_{\theta}r_{k}(\theta)=\partial_{\theta}\big{(}\psi(z_{k}(\theta ))\big{)}.\] Now, we recall from (2.8) that \[\psi_{k}(z)=\sum_{j=1}^{2}\int_{D_{j}}G_{k,j}(z-\xi)d\xi,\] for all \(z\in\mathbb{C}\), where \(G_{k,j}\) is given by (2.9). Observe that the preceding representation can be recast in polar coordinates (4.2) as \[\psi_{k}(z)=\sum_{j=1}^{2}\int_{0}^{2\pi}\int_{0}^{R_{j}(\eta)}G_{k,j}(z-\rho e ^{\mathrm{i}\eta})\rho d\rho d\eta. 
\tag{4.9}\] Therefore, employing the identity \[\partial_{\overline{z}}G_{k,j}(z,\xi)=-\partial_{\overline{\xi}}G_{k,j}(z-\xi),\] yields, for any \(z\in\mathbb{C}\), that \[\partial_{\overline{z}}\psi_{k}\big{(}z\big{)}=-\sum_{j=1}^{2}\int_{0}^{2\pi }\int_{0}^{R_{j}(\eta)}\partial_{\overline{\xi}}G_{k,j}\big{(}z-\rho e^{ \mathrm{i}\eta}\big{)}\rho d\rho d\eta.\] Thus, by further employing Gauss-Green theorem, we arrive at the representation \[2\partial_{\overline{z}}\psi_{k}\big{(}z\big{)}=\mathrm{i}\sum_{j=1}^{2}\int_ {0}^{2\pi}G_{k,1}\big{(}z-R_{j}(\eta)e^{\mathrm{i}\eta}\big{)}\partial_{\eta} \big{(}R_{j}(\eta)e^{\mathrm{i}\eta}\big{)}\big{)}d\eta, \tag{4.10}\] whereby we deduce, for any \(k\in\{1,2\}\) and \(\theta\in\mathbb{T}\), that \[2\mathrm{Re}\Big{\{}\partial_{\overline{z}}\psi_{k}\big{(}z_{k} (\theta)\big{)}\partial_{\theta}\overline{z_{k}(\theta)}\Big{\}}\] \[\qquad\qquad=-\sum_{j=1}^{2}\int_{0}^{2\pi}G_{k,j}\big{(}R_{k}( \theta)e^{\mathrm{i}\theta}-R_{j}(\eta)e^{\mathrm{i}\eta}\big{)}\mathrm{Im} \Big{\{}\partial_{\theta}\big{(}R_{k}(\theta)e^{-\mathrm{i}\theta}\big{)} \partial_{\eta}\big{(}R_{j}(\eta)e^{\mathrm{i}\eta}\big{)}\Big{\}}d\eta.\] At last, inserting the latter identity above in (4.7) and employing the fact that \[\mathrm{Im}\Big{\{}\partial_{\theta}\big{(}f(\theta)e^{-\mathrm{i}\theta} \big{)}\partial_{\eta}\big{(}g(\eta)e^{\mathrm{i}\eta}\big{)}\Big{\}}= \partial_{\theta\eta}^{2}\Big{(}f(\theta)g(\eta)\sin(\eta-\theta)\Big{)},\] for any functions \(f,g\), yields the contour dynamics equation. To conclude, by a direct computation, one sees that \[\mathcal{F}_{k}(\Omega,0)(\theta) =b_{k}\sum_{j=1}^{2}b_{j}\int_{0}^{2\pi}G_{k,j}\big{(}b_{k}e^{ \mathrm{i}\theta}-b_{j}e^{\mathrm{i}\eta}\big{)}\sin(\eta-\theta)d\eta\] \[=b_{k}\sum_{j=1}^{2}b_{j}\int_{0}^{2\pi}G_{k,j}\big{(}b_{k}-b_{j} e^{\mathrm{i}(\eta-\theta)}\big{)}\sin(\eta-\theta)d\eta\] \[=b_{k}\sum_{j=1}^{2}b_{j}\int_{0}^{2\pi}G_{k,j}\big{(}b_{k}-b_{j} e^{\mathrm{i}\eta}\big{)}\sin(\eta)d\eta,\] for any \(\Omega\in\mathbb{R}\) and \(\theta\in\mathbb{T}\). Finally, by virtue of symmetry properties of the kernels \(G_{k,j}\),it is then readily seen that preceding integral is identically zero, thereby concluding that \((\Omega,0)\) is a zero for the contour dynamics, for any \(\Omega\in\mathbb{R}\). This completes the proof of the lemma. _Remark 4.3_.: Let \((\omega_{1},\omega_{2})|_{t=0}=\big{(}\mathbf{1}_{D_{1}},\mathbf{1}_{D_{2}}\big{)}\) be a rotating patch to (QS2L) with constant angular velocity \(\Omega\). Further consider a real number \(a>0\) and denote by \(D_{k}^{a}=aD_{k}\), for \(k\in\{1,2\}\). Then \((\omega_{1},\omega_{2})|_{t=0}=\big{(}\mathbf{1}_{D_{1}^{a}},\mathbf{1}_{D_{2}^ {a}}\big{)}\) is also a rotating patch with angular velocity \(\Omega\) to the system (QS2L) with \(\lambda\) replaced by \(a\lambda\). 
Indeed, according to (4.6), (4.8) and (2.8), one has that \[\Omega\operatorname{Re}\Bigl{\{}\overline{z_{k}(\theta)}\partial_{\theta}z_{k }(\theta)\Bigr{\}}=\partial_{\theta}\bigg{(}\sum_{j=1}^{2}\int_{D_{j}}G_{k,j} \big{(}z_{k}(\theta)-\xi\big{)}dA(\xi)\bigg{)}.\] Thus, denoting \(w_{k}=az_{k}\), then multiplying the preceding equation by \(a^{2}\) and implementing the change of variables \(\xi^{\prime}=a\xi\) yields that \[\Omega\operatorname{Re}\Bigl{\{}\overline{w_{k}(\theta)}\partial_{\theta}w_{ k}(\theta)\Bigr{\}}=\partial_{\theta}\bigg{(}\sum_{j=1}^{2}\int_{D_{j}^{a}}G_{k,j} \big{(}\tfrac{\lambda}{a}w_{k}(\theta)-\tfrac{1}{a}\xi^{\prime}\big{)}dA(\xi^ {\prime})\bigg{)}.\] Therefore, in view of (2.9), we find that \[G_{k,j}\big{(}\tfrac{1}{a}w_{k}(\theta)-\tfrac{1}{a}\xi^{\prime} \big{)} =-\frac{\delta^{2-j}}{2\pi(\delta+1)}\log(a)+\frac{\delta^{2-j}}{2 \pi(\delta+1)}\log(|w_{k}(\theta)-\xi^{\prime}|)\] \[+(-1)^{k+j-1}\frac{\delta^{k-1}}{2\pi(\delta+1)}K_{0}\left(\tfrac {\lambda}{a}\sqrt{1+\delta}|w_{k}(\theta)-\xi^{\prime}|\right).\] Hence, we deduce that \[\Omega\operatorname{Re}\Bigl{\{}\overline{w_{k}(\theta)}\partial_{\theta}w_{ k}(\theta)\Bigr{\}}=\partial_{\theta}\bigg{(}\sum_{j=1}^{2}\int_{D_{j}^{a}} \widetilde{G}_{k,j}\big{(}w_{k}(\theta)-\xi^{\prime}\big{)}dA(\xi^{\prime}) \bigg{)},\] where \[\widetilde{G}_{k,j}\big{(}w-\xi^{\prime}\big{)}\stackrel{{\rm def }}{{=}}\frac{\delta^{2-j}}{2\pi(\delta+1)}\log(|w-\xi^{\prime}|)+(-1)^{k+j-1} \frac{\delta^{k-1}}{2\pi(\delta+1)}K_{0}\left(\frac{\lambda}{a}\sqrt{1+\delta }|w-\xi^{\prime}|\right).\] In the next step, we establish some specific symmetry properties of the functional \(\mathcal{F}\), introduced in Lemma 4.2, above. In particular, this is crucial and will allow us to simplify the spectral analysis of the linearized operator associated with \(\mathcal{F}\). More importantly, it is to be emphasized that the property of \(m\)-fold symmetry from the next lemma will be useful to eliminate the instabilities and prevent their appearing at the linear level when we study spectral properties of the linearized operator associated with \(\mathcal{F}\), later on. **Lemma 4.3**.: _Let \(\mathcal{F}=(\mathcal{F}_{1},\mathcal{F}_{2})\) be given by (4.4). Then, the following holds_ 1. _The refection symmetry property: if_ \(r=(r_{1},r_{2})\) _are even, i.e, if_ (4.11) \[r(-\theta)=r(\theta),\] _for any_ \(\theta\in\mathbb{T}\)_, then_ \(\mathcal{F}(\Omega,r)\) _is odd, i.e._ \[\mathcal{F}(\Omega,r)(-\theta)=-\mathcal{F}(\Omega,r)(\theta).\] 2. \(m\)_-fold symmetry: if_ \(r\) _satisfy, for some_ \(m\in\mathbb{Z}^{*}\) _and all_ \(\theta\in\mathbb{T}\)_, that_ \[r\left(\theta+\tfrac{2\pi}{m}\right) =r(\theta),\] _then, it holds that_ \[\mathcal{F}(\Omega,r)\left(\theta+\tfrac{2\pi}{m}\right) =\mathcal{F}(\Omega,r)(\theta),\] _for all_ \(\theta\in\mathbb{T}\)_._ Proof.: The justification of reflection symmetry property of \(\mathcal{F}\) relies upon the structure of the kernel \(G_{k,j}\) defining \(\mathcal{F}_{k,j}\) in (4.5). Accordingly, by performing the change of variables \(\eta\mapsto-\eta\) in (4.5), it is readily seen, for any \(k,j\in\{1,2\}\) and \(\theta\in\mathbb{T}\), that \[\mathcal{F}_{k,j}(\Omega,r)(-\theta)=-\mathcal{F}_{k,j}(\Omega,r)(\theta),\] whereby establishing the first claim in the lemma. 
Likewise, the \(m\)-fold symmetry follows by performing the change of variables \(\eta\mapsto\eta+\frac{2\pi}{m}\) in (4.5) and by using the rotational invariance of the kernels \(G_{k,j}\), namely \[G_{k,j}\big{(}e^{i\alpha}(x-y)\big{)}=G_{k,j}(x-y),\] for any \(x\neq y\in\mathbb{C}\) and all \(\alpha\in\mathbb{T}\). The proof of the lemma is now completed. ### Linearization around discs Here, we explore the structure of the linearized operator associated with the functional \(\mathcal{F}\) defined in (4.3). We emphasize that the analysis we perform in this section will be rigorously justified by a detailed regularity study in Section 4.3, later on. **Lemma 4.4**.: _The Gateaux derivative of \(\mathcal{F}\) at \(r=(r_{1},r_{2})\) in the direction \(h=(h_{1},h_{2})\) is given by_ \[d_{r}\mathcal{F}(\Omega,r)[h]=\Omega\begin{pmatrix}h_{1}^{\prime}&0\\ 0&h_{2}^{\prime}\end{pmatrix}+\partial_{\theta}\begin{pmatrix}V_{1}(r)h_{1}- L_{1,1}(r)[h_{1}]&-L_{1,2}(r)[h_{2}]\\ -L_{2,1}(r)[h_{1}]&V_{2}(r)h_{2}-L_{2,2}(r)[h_{2}]\end{pmatrix}, \tag{4.12}\] _for any \(\Omega\in\mathbb{R}\), where we set_ \[V_{k}(r)(\theta) \stackrel{{\rm def}}{{=}}\frac{1}{R_{k}(\theta)} \sum_{j=1}^{2}\int_{0}^{2\pi}G_{k,j}\big{(}R_{k}(\theta)e^{{\rm i}\theta}-R_{j }(\eta)e^{{\rm i}\eta}\big{)}\partial_{\eta}\Big{(}R_{j}(\eta)\sin(\eta-\theta )\Big{)}d\eta, \tag{4.13}\] \[L_{k,n}(r)[h_{n}](\theta) \stackrel{{\rm def}}{{=}}\int_{0}^{2\pi}G_{k,n}(R_{ k}(\theta)e^{{\rm i}\theta}-R_{n}(\eta)e^{{\rm i}\eta})h_{n}(\eta)d\eta, \tag{4.14}\] _for any \(\theta\in\mathbb{T}\)._ Proof.: Let \(\theta\in\mathbb{T}\) and \(k\in\{1,2\}\) be fixed. Now, in view of Lemma 4.2 above, notice that it is sufficient to differentiate the stream function. To that end, we first write, by virtue of (4.9), that \[d_{r_{k}}\psi_{k}(z)[h_{k}]=\int_{0}^{2\pi}G_{k,k}(z-R_{k}(\eta)e^{{\rm i}\eta})h_{k}(\eta)d\eta\] and \[d_{r_{3-k}}\psi_{k}(z)[h_{3-k}]=\int_{0}^{2\pi}G_{k,3-k}(z-R_{3-k}( \eta)e^{{\rm i}\eta})h_{3-k}(\eta)d\eta,\] for any \(z\in\mathbb{C}\). On the other hand, by differentiating (4.2) we infer that \[d_{r_{k}}\overline{z_{k}}(\theta)[h_{k}]=\frac{h_{k}(\theta)}{R_{k}( \theta)}e^{-{\rm i}\theta}.\] Therefore, due to (4.10), it follows that \[2\mathrm{Re}\Big{\{}\partial_{\overline{z}}\psi_{k}\big{(}z_{k}( \theta)\big{)}d_{r_{k}}\overline{z_{k}}(\theta)[h_{k}]\Big{\}}\\ =-\frac{h_{k}(\theta)}{R_{k}(\theta)}\sum_{j=1}^{2}\int_{0}^{2\pi }G_{k,j}\big{(}R_{k}(\theta)e^{{\rm i}\theta}-R_{j}(\eta)e^{{\rm i}\eta}\big{)} \partial_{\eta}\mathrm{Im}\Big{\{}R_{j}(\eta)e^{{\rm i}(\eta-\theta)}\Big{\}} d\eta.\] At last, combining the foregoing identities yields that \[d_{r_{k}}\big{(}\psi_{k}(z_{k})\big{)}[h_{k}]=-V_{k}(r)h_{k}+L_{k,k}(r)[h_{k}]\] and \[d_{r_{3-k}}\big{(}\psi_{k}(z_{k})\big{)}[h_{3-k}]=L_{k,3-k}(r)[h_{3-k}],\] thereby completing the proof of the lemma. Before we move on to a more refined analysis of the linearized operator \(d_{r}\mathcal{F}\), allow us first to recall that \(\delta\) and \(\lambda\) are considered here as fixed positive parameters defined by the system of equations (QS2L), above. Accordingly, we recall the notations \[\mu\stackrel{{\mathrm{def}}}{{=}}\lambda\sqrt{1+\delta}\qquad \text{and}\qquad b\stackrel{{\mathrm{def}}}{{=}}\frac{b_{2}}{b_{1}} \in(0,1),\] where \(b_{1}\) and \(b_{2}\) refer to the radii of the initial patches. 
At last, we recall that the modified Bessel functions \(I_{n}\) and \(K_{n}\), for \(n\in\mathbb{N}\), are introduced in Section 2.1 as well as their useful properties which will come in handy later on in this section. In the next lemma, we compute the differential of \(\mathcal{F}\) at the equilibrium \((\Omega,0)\) and show that it acts as a Fourier multiplier. **Lemma 4.5**.: _Given \((h_{1},h_{2})\) with the Fourier series expansions_ \[h_{k}(\theta)=\sum_{n\in\mathbb{N}^{*}}c_{n,k}\cos(n\theta),\] _for some sequence \((c_{n,k})_{n\in\mathbb{N}^{*}}\subset\mathbb{R}\), all \(k\in\{1,2\}\) and \(\theta\in\mathbb{T}\), it then holds that the linearized operator of \(\mathcal{F}\) at \(r=0\) writes, for any \(\theta\in\mathbb{T}\), as_ \[d_{r}\mathcal{F}(\Omega,0)\begin{pmatrix}h_{1}\\ h_{2}\end{pmatrix}(\theta)=-\sum_{n\in\mathbb{N}^{*}}nM_{n}(\Omega)\begin{pmatrix} c_{n,1}\\ c_{n,2}\end{pmatrix}\sin(n\theta),\] _where we set_ \[M_{n}(\Omega)\stackrel{{\mathrm{def}}}{{=}}\begin{pmatrix}\Omega+ V_{b_{1},b_{2}}+\frac{\frac{\delta}{2n}+I_{n}(b_{1}\mu)K_{n}(b_{1}\mu)}{\delta+1}& \frac{\frac{b^{n}}{2n}-I_{n}(\mu b_{2})K_{n}(b_{1}\mu)}{\delta+1}\\ \\ \frac{\delta}{\delta+1}\Big{(}\frac{b^{n}}{2n}-I_{n}(b_{2}\mu)K_{n}(b_{1}\mu) \Big{)}&\Omega+W_{b_{1},b_{2}}+\frac{\frac{1}{2n}+\delta I_{n}(b_{2}\mu)K_{n} (b_{2}\mu)}{\delta+1}\end{pmatrix},\] _for all \(n\in\mathbb{N}^{*}\), and where \(V_{b_{1},b_{2}}\) and \(W_{b_{1},b_{2}}\) are defined in (4.1)._ Proof.: Let \(\theta\in\mathbb{T}\) and \(k\in\{1,2\}\) be fixed in the proof and we proceed first by recasting the following identities: for any \(n\in\mathbb{N}^{*}\) and \(x,y,\lambda\in(0,\infty)\) with \(x\leq y\), that \[\frac{1}{2\pi}\int_{0}^{2\pi}\log\Big{(}\big{|}1-xe^{\mathrm{i}\theta}\big{|} \Big{)}\cos(n\theta)d\theta=-\frac{x^{n}}{2n} \tag{4.15}\] and \[\frac{1}{2\pi}\int_{0}^{2\pi}K_{0}\left(\lambda|x-ye^{\mathrm{i}\theta}| \right)\cos(n\theta)d\theta=I_{n}(\lambda x)K_{n}(\lambda y). 
\tag{4.16}\] The proof of the first identity can be found in [13, Lemma A.3], whereas the justification of the second one follows from the Beltrami's summation formula [50, page 361] \[K_{0}\left(|x-ye^{\mathrm{i}\theta}|\right)=\sum_{m\in\mathbb{Z}}I_{m}(x)K_{m }(y)\cos(m\theta).\] Now, observe that substituting the value \(r=0\) in (4.13) gives \[V_{k}(0)(\theta)=\sum_{j=1}^{2}\frac{b_{j}}{b_{k}}\int_{0}^{2\pi}G_{k,j}\big{(} b_{k}e^{\mathrm{i}\theta}-b_{j}e^{\mathrm{i}\eta}\big{)}\cos(\eta-\theta)d\eta.\] On the one hand, in view of (2.9), we write that \[\int_{0}^{2\pi}G_{k,k}\big{(}b_{k}e^{\mathrm{i}\theta}-b_{k}e^{ \mathrm{i}\eta}\big{)}\cos(\eta-\theta)d\eta =\frac{\delta^{2-k}}{2\pi(\delta+1)}\int_{0}^{2\pi}\log\big{(}b_{ k}\big{|}1-e^{\mathrm{i}(\eta-\theta)}\big{|}\big{)}\cos(\eta-\theta)d\eta\] \[\quad-\frac{\delta^{k-1}}{2\pi(\delta+1)}\int_{0}^{2\pi}K_{0}\big{(} \mu b_{k}\big{|}1-e^{\mathrm{i}(\eta-\theta)}\big{|}\big{)}\cos(\eta-\theta)d\eta.\] Therefore, changing the variable \(\eta\mapsto\eta+\theta\) and employing the identities (4.16) and (4.15) entails that \[\int_{0}^{2\pi}G_{k,k}\big{(}b_{k}e^{\mathrm{i}\theta}-b_{k}e^{ \mathrm{i}\eta}\big{)}\cos(\eta-\theta)d\eta=-\frac{\delta^{2-k}}{2(\delta+1)}- \frac{\delta^{k-1}}{\delta+1}I_{1}(b_{k}\mu)K_{1}(b_{k}\mu).\] On the other hand, by virtue of (2.9), writing \[\int_{0}^{2\pi}G_{k,3-k}\big{(}b_{k}e^{\mathrm{i}\theta}-b_{3-k} e^{\mathrm{i}\eta}\big{)} \cos(\eta-\theta)d\eta\] \[=\frac{\delta^{k-1}}{2\pi(\delta+1)}\int_{0}^{2\pi}\log\big{(}b_{ 1}\big{|}1-be^{\mathrm{i}(\eta-\theta)}\big{|}\big{)}\cos(\eta-\theta)d\eta\] \[\quad+\frac{\delta^{k-1}}{2\pi(\delta+1)}\int_{0}^{2\pi}K_{0} \big{(}\mu\big{|}b_{k}-b_{3-k}e^{\mathrm{i}(\eta-\theta)}\big{|}\big{)}\cos( \eta-\theta)d\eta,\] it then follows, by the same arguments as before, that \[\int_{0}^{2\pi}G_{k,3-k}\big{(}b_{k}e^{\mathrm{i}\theta}-b_{3-k} e^{\mathrm{i}\eta}\big{)}\cos(\eta-\theta)d\eta=-\frac{\delta^{k-1}b}{2( \delta+1)}+\frac{\delta^{k-1}}{\delta+1}I_{1}(b_{2}\mu)K_{1}(b_{1}\mu).\] Next, according to (4.14) and (2.9), one has that \[L_{k,j}(0)[h_{j}](\theta) \stackrel{{\mathrm{def}}}{{=}}\sum_{n\geq 1}c_{n,j} \int_{0}^{2\pi}G_{k,j}(b_{k}e^{\mathrm{i}\theta}-b_{j}e^{\mathrm{i}\eta})\cos( n\eta)d\eta\] \[=\sum_{n\geq 1}\tfrac{c_{n,j}}{2\pi(\delta+1)}\bigg{[}\delta^{2-j} \int_{0}^{2\pi}\log\big{(}\big{|}b_{k}e^{\mathrm{i}\theta}-b_{j}e^{\mathrm{i} \eta}\big{|}\big{)}\cos(n\eta)d\eta\] \[\qquad-(-1)^{k+j}\delta^{k-1}\int_{0}^{2\pi}K_{0}\big{(}\mu \big{|}b_{k}e^{\mathrm{i}\theta}-b_{j}e^{\mathrm{i}\eta}\big{|}\big{)}\cos(n \eta)d\eta\bigg{]},\] for any \(j\in\{1,2\}\). 
Thus, by a change of variables, we find that \[L_{k,j}(0)[h_{j}](\theta) =\sum_{n\geq 1}\frac{c_{n,j}}{2\pi(\delta+1)}\bigg{(}\delta^{2-j} \int_{0}^{2\pi}\log\big{(}\big{|}b_{k}-b_{j}e^{\mathrm{i}\eta}\big{|}\big{)} \cos(n\eta+n\theta)d\eta\] \[\qquad-(-1)^{k+j}\delta^{k-1}\int_{0}^{2\pi}K_{0}\big{(}\mu\big{|} b_{k}-b_{j}e^{\mathrm{i}\eta}\big{|}\big{)}\cos(n\eta+n\theta)d\eta\bigg{)}\] \[=\sum_{n\geq 1}\frac{c_{n,j}}{2\pi(\delta+1)}\bigg{(}\delta^{2-j} \cos(n\theta)\int_{0}^{2\pi}\log\big{(}\big{|}b_{k}-b_{j}e^{\mathrm{i}\eta} \big{|}\big{)}\cos(n\eta)d\eta\] \[\qquad-\delta^{2-j}\sin(n\theta)\int_{0}^{2\pi}\log\big{(}\big{|} b_{k}-b_{j}e^{\mathrm{i}\eta}\big{|}\big{)}\sin(n\eta)d\eta\] \[\qquad-(-1)^{k+j}\delta^{k-1}\cos(n\theta)\int_{0}^{2\pi}K_{0} \big{(}\mu\big{|}b_{k}-b_{j}e^{\mathrm{i}\eta}\big{|}\big{)}\cos(n\eta)d\eta\] \[\qquad-(-1)^{k+j}\delta^{k-1}\sin(n\theta)\int_{0}^{2\pi}K_{0} \big{(}\mu\big{|}b_{k}-b_{j}e^{\mathrm{i}\eta}\big{|}\big{)}\sin(n\eta)d\eta \bigg{)}.\] Therefore, noting, by symmetry, that \[\int_{0}^{2\pi}\log\big{(}\big{|}b_{k}-b_{j}e^{\mathrm{i}\eta} \big{|}\big{)}\sin(n\eta)d\eta=0\] and \[\int_{0}^{2\pi}K_{0}\big{(}\mu\big{|}b_{k}-b_{j}e^{\mathrm{i} \eta}\big{|}\big{)}\sin(n\eta)d\eta=0,\] we conclude, by utilizing (4.16) and (4.15), again, that \[L_{k,k}(0)[h_{k}](\theta) =-\sum_{n\geq 1}\frac{c_{n,k}}{\delta+1}\left(\frac{\delta^{2-k}}{2n}+ \delta^{k-1}I_{n}(b_{k}\mu)K_{n}(b_{k}\mu)\right)\cos(n\theta),\] \[L_{k,3-k}(r)[h_{3-k}](\theta) =\sum_{n\geq 1}\frac{c_{n,3-k}\delta^{k-1}}{\delta+1}\left(- \frac{b^{n}}{2n}+I_{n}(b_{2}\mu)K_{n}(b_{1}\mu)\right)\cos(n\theta).\] At last, gathering the foregoing identities and plugging them in (4.12) at the equilibrium \(r=0\) completes the proof of the lemma. ### Regularity properties Here, we justify the regularity properties of the nonlinear functional \(\mathcal{F}\) introduced in (4.3). In particular, by virtue of the results laid out in this section, all the formal computations in the Section 4.2 above will be fully justified. Let us first set a few notations to be used afterwards. For \(\alpha\in(0,1)\), and \(m\in\mathbb{N}^{+}\), consider the \(m\)-fold Banach spaces \[X_{m}^{\alpha}\stackrel{{\mathrm{def}}}{{=}}\Big{\{}h\in C^{1+ \alpha}(\mathbb{T}):\,h(\theta)=\sum_{n\geq 1}c_{n}\cos(nm\theta),\,c_{n}\in \mathbb{R},\,\theta\in\mathbb{T}\Big{\}}\] and \[Y_{m}^{\alpha}\stackrel{{\mathrm{def}}}{{=}}\Big{\{}h\in C^{ \alpha}(\mathbb{T}):\,h(\theta)=\sum_{n\geq 1}c_{n}\sin(nm\theta),\,c_{n}\in \mathbb{R},\,\theta\in\mathbb{T}\Big{\}}\] equipped with their usual norms. Accordingly, we define the product spaces \[\mathcal{X}_{m}^{\alpha}\stackrel{{\mathrm{def}}}{{=}}X_{m}^{ \alpha}\times X_{m}^{\alpha},\qquad\mathcal{Y}_{m}^{\alpha}\stackrel{{ \mathrm{def}}}{{=}}Y_{m}^{\alpha}\times Y_{m}^{\alpha},\] and, for all \(\epsilon\in(0,1)\), we denote by \(\mathcal{B}_{m,\epsilon}^{\alpha}\) the open ball in \(\mathcal{X}_{m}^{\alpha}\) centered at the origin and with radius \(\epsilon\), i.e., \[\mathcal{B}_{m,\epsilon}^{\alpha}\stackrel{{\mathrm{def}}}{{=}} \big{\{}r\in\mathcal{X}_{m}:\|r\|_{\mathcal{X}_{m}}<\epsilon\big{\}}.\] All regularity properties of the functional \(\mathcal{F}\) that we need are now established in the next proposition. 
**Proposition 4.6**.: _Let \(\lambda>0,\,b\in(0,1)\), \(\alpha\in(0,1)\) and \(m\in\mathbb{N}^{*}.\) Then, there exists \(\epsilon>0\) such that the functional \(\mathcal{F}\) introduced in (4.3) is well defined as a mapping_ \[\mathcal{F}:\mathbb{R}\times\mathcal{B}_{m,\epsilon}^{\alpha}\to\mathcal{Y}_{m }^{\alpha},\] _and is of class \(C^{1}.\) Moreover, the partial derivative_ \[\partial_{\Omega,r}^{2}\mathcal{F}:\mathbb{R}\times B_{m,\epsilon}^{\alpha} \to\mathcal{L}(\mathcal{X}_{m}^{\alpha},\mathcal{Y}_{m}^{\alpha})\] _exists and is continuous._ Proof.: Throughout the proof, \(\alpha\in(0,1)\) and \(r\in B_{m,\varepsilon}^{\alpha}\) are fixed parameters. In view of (4.3), notice that it suffices to establish the regularity properties for \(\mathcal{F}_{k,j}\), for all \(j,k\in\{1,2\}\), where \(\mathcal{F}_{k,j}\) are introduced in (4.5) and can be recast here as \[\mathcal{F}_{k,j}(r(\theta))=\mathrm{i}\partial_{\theta}\big{(}R_{k}(\theta)e ^{\mathrm{i}\theta}\big{)}\cdot\int_{0}^{2\pi}G_{k,j}\big{(}R_{k}(\theta)e^{ \mathrm{i}\theta}-R_{j}(\eta)e^{\mathrm{i}\eta}\big{)}\partial_{\eta}\big{(}R _{j}(\eta)e^{\mathrm{i}\eta}\big{)}d\eta,\] where we recall that \[R_{k}(\theta)=\sqrt{b_{k}^{2}+2r_{k}(\theta)},\] for all \(k\in\{1,2\}\). Thus, it is readily seen that \[\theta\mapsto\partial_{\theta}\big{(}R_{k}(\theta)e^{\mathrm{i}\theta}\big{)} \in C^{\alpha}(\mathbb{T}),\] as soon as the deformation \(\theta\mapsto r(\theta)\) belongs to \(C^{1+\alpha}(\mathbb{T})\), which holds by assumption. Next, observe, for \(r\in\mathcal{X}_{m}^{\alpha}\), that \[b_{k}\leq|R_{k}(\theta)|\leq\sqrt{b_{k}^{2}+2\epsilon},\quad\text{for all}\quad\theta\in\mathbb{T}. \tag{4.17}\] Therefore, in view of the identities (2.9) and (2.5), altogether with the fact that \[\big{|}\log|x|\big{|}\lesssim|x|^{-\alpha},\] for all \(x\in\mathbb{R}^{2}\setminus\{0\}\) and any \(\alpha\in(0,1)\), it is then readily seen that \[\bigg{|}G_{k,j}\big{(}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{n}(\eta)e^{\mathrm{ i}\eta}\big{)}\bigg{|}\lesssim\big{|}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{n}( \eta)e^{\mathrm{i}\eta}\big{|}^{-\alpha},\] for any \(j,k\in\{1,2\}\). Accordingly, one deduces for \(\epsilon\) small enough (see for [40, inequality (61)]) that \[\Big{|}G_{k,n}\big{(}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{n}(\eta)e^{\mathrm{ i}\eta}\big{)}\Big{|}\lesssim\Big{|}\sin\left(\tfrac{\theta-\eta}{2}\right) \Big{|}^{-\alpha}, \tag{4.18}\] for all \(\theta\neq\eta\in\mathbb{T}\). Moreover, noticing that \[\Big{|}\nabla\left(G_{k,j}(|x|)\right)\Big{|}\lesssim|k_{+}(x)|+|k_{-}(x)|,\] where \(k_{\pm}\) are introduced in (2.10), and employing (2.11) yields that \[\Big{|}\nabla\left(G_{k,j}(|x|)\right)\Big{|}\lesssim|x|^{-1}.\] Thus, in view of (4.17), we deduce that \[\Big{|}\partial_{\theta}\left(G_{k,j}(R_{k}(\theta)e^{\mathrm{i}\theta}-R_{j} (\eta)e^{\mathrm{i}\eta})\right)\Big{|}\lesssim\Big{|}\sin\left(\tfrac{\theta -\eta}{2}\right)\Big{|}^{-(1+\alpha)}, \tag{4.19}\] for all \(\theta\neq\eta\in\mathbb{T}\). All in all, with (4.18) and (4.19) in hand, Lemma 2.4 allows us to deduce that \[\theta\mapsto\mathcal{F}_{k,j}(r(\theta))\in C^{\alpha}(\mathbb{T}),\] for any \(j,k\in\{1,2\}\). In addition to that, the reflection symmetry of \(\mathcal{F}_{k}\) is already established in (4.11), whereby we arrive at the conclusion that the mapping \[\mathcal{F}:\mathbb{R}\times B^{\alpha}_{m,\varepsilon}\to\mathcal{Y}^{\alpha}_ {m}\] is well defined. 
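As an aside before turning to the \(C^{1}\) regularity, we note that \(\mathcal{F}\) is simple enough to be evaluated by quadrature. The following minimal sketch, with arbitrarily chosen parameter values and with the kernel \(G_{k,j}\) taken in the explicit form recalled in Remark 4.3, checks numerically the stationarity of the discs claimed in Lemma 4.2, namely that \(\mathcal{F}_{k}(\Omega,0)\approx 0\); it is an illustration only and plays no role in the proof.

```python
import numpy as np
from scipy.special import k0

# Arbitrary illustrative parameters: lambda, delta and the radii b1 >= b2.
lam, delta, b1, b2 = 1.0, 0.8, 1.0, 0.6
mu = lam * np.sqrt(1.0 + delta)
radius = [b1, b2]

def G(k, j, dist):
    """Kernel G_{k,j} evaluated at |z| = dist, in the form recalled in Remark 4.3."""
    c = 1.0 / (2.0 * np.pi * (delta + 1.0))
    return c * (delta ** (2 - j) * np.log(dist)
                + (-1) ** (k + j - 1) * delta ** (k - 1) * k0(mu * dist))

def F_at_zero(k, theta, n_quad=4000):
    """Midpoint-rule approximation of F_k(Omega, 0)(theta); the Omega-term vanishes at r = 0."""
    eta = (np.arange(n_quad) + 0.5) * 2.0 * np.pi / n_quad
    val = 0.0
    for j in (1, 2):
        dist = np.abs(radius[k - 1] * np.exp(1j * theta) - radius[j - 1] * np.exp(1j * eta))
        val += (radius[k - 1] * radius[j - 1]
                * np.sum(G(k, j, dist) * np.sin(eta - theta)) * (2.0 * np.pi / n_quad))
    return val

for k in (1, 2):
    print(k, F_at_zero(k, theta=0.7))   # both values are close to zero, as claimed in Lemma 4.2
```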
Now, we show that this mapping is \(C^{1}\) with respect to the variable \(r\); its \(C^{1}\) regularity with respect to the variable \(\Omega\) clearly holds true, as it follows from the observation that \[\partial_{\Omega}\mathcal{F}(\Omega,r)=r^{\prime}(\theta).\] To that end, allow us first to recall the expression (4.12) here for convenience \[d_{r}\mathcal{F}(\Omega,r)[h]=\Omega\begin{pmatrix}h^{\prime}_{1}&0\\ 0&h^{\prime}_{2}\end{pmatrix}+\partial_{\theta}\begin{pmatrix}V_{1}(r)h_{1}-L _{1,1}(r)[h_{1}]&-L_{1,2}(r)[h_{2}]\\ -L_{2,1}(r)[h_{1}]&V_{2}(r)h_{2}-L_{2,2}(r)[h_{2}]\end{pmatrix},\] which holds for any \(h=(h_{1},h_{2})\in\mathcal{X}^{\alpha}_{m}\), where \(V_{k}\) and \(L_{k,n}\) are respectively given by (4.13) and (4.14), which can also be recast as \[V_{k}(r)(\theta)=\frac{e^{\mathrm{i}\theta}}{R_{k}(\theta)}\cdot\sum_{j=1}^{2 }\int_{0}^{2\pi}G_{k,j}\big{(}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{j}(\eta )e^{\mathrm{i}\eta}\big{)}\partial_{\eta}\Big{(}R_{j}(\eta)e^{\mathrm{i}\eta} \Big{)}d\eta\] and \[L_{k,n}(r)[h_{n}](\theta)=\int_{0}^{2\pi}G_{k,n}(R_{k}(\theta)e^{\mathrm{i} \theta}-R_{n}(\eta)e^{\mathrm{i}\eta})h_{n}(\eta)d\eta.\] Now, we shall prove that \(d_{r}\mathcal{F}(\Omega,r)[\cdot]\) is a well defined linear mapping from \(\mathcal{X}^{\alpha}_{m}\) into \(\mathcal{Y}^{\alpha}_{m}\), and we emphasize that we can restrict our focus to the regularity property, for the symmetry follows by arguments similar to the proof of Lemma 4.3. Moreover, observe that the \(C^{\alpha}\) regularity of \(d_{r}\mathcal{F}(\Omega,r)[h]\), for any \(h\in C^{1+\alpha}\times C^{1+\alpha}\), directly follows from the \(C^{1+\alpha}\) regularity of \(V_{k}(r)h_{k}\) and \(L_{k,n}(r)[h_{n}]\), which we will now prove in detail. Thus, we now claim that \[\theta\mapsto\partial_{\theta}\int_{0}^{2\pi}G_{k,j}\big{(}R_{k}(\theta)e^{ \mathrm{i}\theta}-R_{j}(\eta)e^{\mathrm{i}\eta}\big{)}\partial_{\eta}\Big{(}R _{j}(\eta)e^{\mathrm{i}\eta}\Big{)}d\eta\] and \[\theta\mapsto\partial_{\theta}\int_{0}^{2\pi}G_{k,j}(R_{k}(\theta)e^{{\rm i}\theta }-R_{j}(\eta)e^{{\rm i}\eta})h_{j}(\eta)d\eta\] belong to \(C^{\alpha}({\mathbb{T}})\), and we split the proof of that into two parts: _Regularity of the anti-diagonal._ This is the case where the kernels \(G_{j,k}\) are regular, due to a crucial cancellation that only appears in the elements of the anti-diagonal of the gradient-matrix. This cancellation is a consequence of the specific combination of the kernels associated with \(\Delta^{-1}\) and \((\Delta-\lambda^{2}(1+\delta)\operatorname{Id})^{-1}\) through the coupling in (QS2L). In order to observe this phenomenon, we first emphasize that the expression of \(G_{j,k}\) in (2.9), for \(j\neq k\), can be recast as \[G_{1,2}(z)=\frac{1}{2\pi(1+\delta)}\Big{(}\log(|z|)+K_{0}(\mu|z|)\Big{)}\] and \[G_{2,1}(z)=\frac{\delta}{2\pi(1+\delta)}\Big{(}\log(|z|)+K_{0}(\mu|z|)\Big{)}.\] Therefore, we expand \(K_{0}\) by utilizing (2.3) to find, after performing minor simplifications, that \[K_{0}(\mu|z|)=-\log(|z|)-\left(\underbrace{\log\Big{(}\frac{\mu}{2}\Big{)}I_ {0}(\mu|z|)+\log\left(\frac{|z|}{2}\right)\sum_{m=1}^{\infty}\frac{\Big{(}\mu \frac{|z|}{2}\Big{)}^{2m}}{m!\Gamma(m+1)}}_{\stackrel{{\rm def}}{{= }}Q(|z|)}\right). \tag{4.20}\] Thus, we deduce that \[G_{1,2}(z)=\frac{1}{2\pi(1+\delta)}Q(|z|)\] and \[G_{2,1}(z)=\frac{\delta}{2\pi(1+\delta)}Q(|z|).\] The crucial observation here is that \(Q\) is more regular than the kernels \(\log\) and \(K_{0}\) themselves. 
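This cancellation can also be observed numerically: up to the constant prefactors, the anti-diagonal kernels are \(\log(|z|)+K_{0}(\mu|z|)\), and while both terms blow up at the origin, their sum stays bounded. A minimal illustrative sketch, with an arbitrary value of \(\mu\):

```python
import numpy as np
from scipy.special import k0

mu = 2.0                              # arbitrary illustrative value of lambda*sqrt(1+delta)
r = np.logspace(-8, -1, 8)            # radii approaching 0

log_part = np.log(r)                  # diverges to -infinity as r -> 0
bessel_part = k0(mu * r)              # diverges to +infinity as r -> 0
print(log_part + bessel_part)         # the sum stays bounded: the log singularities cancel
print(-np.log(mu / 2.0) - np.euler_gamma)  # its limit as r -> 0, since K_0(x) = -log(x/2) - gamma + o(1)
```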
More precisely, we emphasize that one can show by a direct computations that \[Q(|\cdot|)\in C^{1+\alpha}_{\rm loc}({\mathbb{R}}^{+}).\] Therefore, it is readily seen that \[\theta\mapsto\partial_{\theta}\int_{0}^{2\pi}G_{k,j}\big{(}R_{k}(\theta)e^{{ \rm i}\theta}-R_{j}(\eta)e^{{\rm i}\eta}\big{)}\partial_{\eta}\Big{(}R_{j}( \eta)e^{{\rm i}\eta}\Big{)}d\eta\] and \[\theta\mapsto\partial_{\theta}\int_{0}^{2\pi}G_{k,j}(R_{k}(\theta)e^{{\rm i} \theta}-R_{j}(\eta)e^{{\rm i}\eta})h_{j}(\eta)d\eta\] belong to \(C^{\alpha}({\mathbb{T}})\), for any \(i\neq j\in\{1,2\}\), as soon as \(r\in C^{1+\alpha}({\mathbb{T}})\), which holds by assumption. _Regularity of the diagonal._ Owing again to (2.9) and (4.20), we write that \[G_{1,1}(z)=\frac{1}{2\pi(1+\delta)}\Big{(}(\delta+1)\log(|z|)-Q(|z|)\Big{)}\] and \[G_{2,2}(z)=\frac{1}{2\pi(1+\delta)}\Big{(}(\delta+1)\log(|z|)-\delta Q(|z|) \Big{)},\] where the function \(Q\) is defined in (4.20) and belongs to \(C^{1+\alpha}({\mathbb{T}})\) as it is emphasized above. Accordingly, in order to show the regularity of the diagonal elements, i.e. the fact that \[\theta\mapsto\partial_{\theta}\int_{0}^{2\pi}G_{k,k}\big{(}R_{k}(\theta)e^{{ \rm i}\theta}-R_{k}(\eta)e^{{\rm i}\eta}\big{)}\partial_{\eta}\Big{(}R_{k}( \eta)e^{{\rm i}\eta}\Big{)}d\eta\] \[\theta\mapsto\partial_{\theta}\int_{0}^{2\pi}G_{k,k}(R_{k}(\theta)e^{\mathrm{i} \theta}-R_{k}(\eta)e^{\mathrm{i}\eta})h_{k}(\eta)d\eta\] belong to \(C^{\alpha}(\mathbb{T})\), for any \(k\in\{1,2\}\), it only remains to prove that \[\theta\mapsto\partial_{\theta}\int_{0}^{2\pi}\log\Big{|}R_{k}(\theta)e^{ \mathrm{i}\theta}-R_{k}(\eta)e^{\mathrm{i}\eta}\Big{|}\partial_{\eta}\Big{(}R_ {k}(\eta)e^{i\eta}\Big{)}d\eta\] and \[\theta\mapsto\partial_{\theta}\int_{0}^{2\pi}\log\Big{|}R_{k}(\theta)e^{ \mathrm{i}\theta}-R_{k}(\eta)e^{\mathrm{i}\eta}\Big{|}h_{k}(\eta)d\eta\] belong to \(C^{\alpha}(\mathbb{T})\). Note that this is equivalent to showing that the functions \[\theta\mapsto\mathcal{U}(\theta)\stackrel{{\mathrm{def}}}{{=}} \int_{0}^{2\pi}\frac{R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{\mathrm{ i}\eta}}{\Big{|}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{\mathrm{i}\eta} \Big{|}^{2}}\cdot\partial_{\theta}\Big{(}R_{k}(\theta)e^{\mathrm{i}\theta} \Big{)}\partial_{\eta}\Big{(}R_{k}(\eta)e^{i\eta}\Big{)}d\eta\] and \[\theta\mapsto\mathcal{W}(\theta)\stackrel{{\mathrm{def}}}{{=}} \int_{0}^{2\pi}\frac{R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{\mathrm{ i}\eta}}{\Big{|}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{\mathrm{i}\eta} \Big{|}^{2}}\cdot\partial_{\theta}\Big{(}R_{k}(\theta)e^{\mathrm{i}\theta} \Big{)}h_{k}(\eta)d\eta\] enjoy the \(C^{\alpha}\) regularity. 
The achievement of the preceding claims relies on the observation that \[\int_{0}^{2\pi}\frac{R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{\mathrm{ i}\eta}}{\Big{|}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{\mathrm{i}\eta} \Big{|}^{2}}\cdot\partial_{\eta}\Big{(}R_{k}(\eta)e^{\mathrm{i}\eta}\Big{)}d \eta=0, \tag{4.21}\] for any \(k\in\{1,2\}\), which, in particular, allows us to write, for all \(\theta\in\mathbb{T}\), that \[\mathcal{U}(\theta) =\partial_{\theta}\Big{(}R_{k}(\theta)e^{\mathrm{i}\theta}\Big{)} \cdot\int_{0}^{2\pi}\frac{R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{ \mathrm{i}\eta}}{\Big{|}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{ \mathrm{i}\eta}\Big{|}^{2}}\Bigg{(}\partial_{\eta}\Big{(}R_{k}(\eta)e^{i\eta} \Big{)}-\partial_{\theta}\Big{(}R_{k}(\theta)e^{\mathrm{i}\theta}\Big{)} \Bigg{)}d\eta\] \[\quad+\partial_{\theta}\Big{(}R_{k}(\theta)e^{\mathrm{i}\theta} \Big{)}\int_{0}^{2\pi}\frac{R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{ \mathrm{i}\eta}}{\Big{|}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{ \mathrm{i}\eta}\Big{|}^{2}}\cdot\Bigg{(}\partial_{\theta}\Big{(}R_{k}(\theta) e^{\mathrm{i}\theta}\Big{)}-\partial_{\eta}\Big{(}R_{k}(\eta)e^{i\eta} \Big{)}\Bigg{)}d\eta \tag{4.22}\] and that \[\mathcal{W}(\theta) =\partial_{\theta}\Big{(}R_{k}(\theta)e^{\mathrm{i}\theta}\Big{)} \cdot\int_{0}^{2\pi}\frac{R_{k}(\theta)e^{\mathrm{i}\theta}-R_{j}(\eta)e^{ \mathrm{i}\eta}}{\Big{|}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{ \mathrm{i}\eta}\Big{|}^{2}}\Big{(}h_{k}(\eta)-h_{k}(\theta)\Big{)}d\eta\] \[\quad+h_{k}(\theta)\int_{0}^{2\pi}\frac{R_{k}(\theta)e^{\mathrm{ i}\theta}-R_{k}(\eta)e^{\mathrm{i}\eta}}{\Big{|}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}( \eta)e^{\mathrm{i}\eta}\Big{|}^{2}}\cdot\Bigg{(}\partial_{\theta}\Big{(}R_{k}( \theta)e^{\mathrm{i}\theta}\Big{)}-\partial_{\eta}\Big{(}R_{k}(\eta)e^{i\eta} \Big{)}\Bigg{)}d\eta.\] On the other hand, due to the \(C^{1+\alpha}\) regularity of \(h_{j}\) and \(r\), one has that \[\Bigg{|}\partial_{\theta}\Big{(}R_{k}(\theta)e^{\mathrm{i}\theta}\Big{)}- \partial_{\eta}\Big{(}R_{k}(\eta)e^{\mathrm{i}\eta}\Big{)}\Bigg{|}\lesssim\big{|} \theta-\eta\big{|}^{\alpha}\] and that \[\Big{|}h_{k}(\eta)-h_{k}(\theta)\Big{|}\lesssim\big{|}\theta-\eta\big{|}^{\alpha},\] for any \(\theta,\eta\in\mathbb{T}\). Therefore, combining the preceding bounds with \[\Bigg{|}\frac{R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{\mathrm{i}\eta}}{ \Big{|}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{\mathrm{i}\eta}\Big{|}^{ 2}}\Bigg{|}\lesssim\frac{1}{\Big{|}R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta) e^{\mathrm{i}\eta}\Big{|}}\lesssim\Big{|}\sin\left(\frac{\theta-\eta}{2} \right)\Big{|}^{-1} \tag{4.23}\] and \[\left|\partial_{\theta}\left(\frac{R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^ {\mathrm{i}\eta}}{\left|R_{k}(\theta)e^{\mathrm{i}\theta}-R_{k}(\eta)e^{\mathrm{ i}\eta}\right|^{2}}\right)\right|\lesssim\Big{|}\sin\Big{(}\tfrac{\theta-\eta}{2} \Big{)}\Big{|}^{-2},\] which can be proved by means of a straightforward computations, the desired regularity of \(\mathcal{U}\) and \(\mathcal{W}\) follows then by a direct application of Lemma 2.5. This completes the proof of the fact that the operator \(d_{r}\mathcal{F}[\cdot]\) is well defined from \(\mathbb{R}\times B^{\alpha}_{m,\epsilon}\) to \(C^{\alpha}(\mathbb{T})\). 
As for its continuity with respect to the \(r\) variable, we emphasize that we can focus on establishing that for the functions \[\theta\mapsto\partial_{\theta}\int_{0}^{2\pi}G_{k,k}\big{(}R_{k}(\theta)e^{ \mathrm{i}\theta}-R_{k}(\eta)e^{\mathrm{i}\eta}\big{)}\partial_{\eta}\Big{(} R_{k}(\eta)e^{\mathrm{i}\eta}\Big{)}d\eta\] and \[\theta\mapsto\partial_{\theta}\int_{0}^{2\pi}G_{k,k}(R_{k}(\theta)e^{\mathrm{ i}\theta}-R_{k}(\eta)e^{\mathrm{i}\eta})h_{k}(\eta)d\eta,\] as the treatment of the remaining terms in the expression of \(d_{r}\mathcal{F}[h]\) is straitforward. Again, this reduces to proving the continuity, with respect to \(r\), of the functions \(\mathcal{U}\) and \(\mathcal{W}\) defined above. For simplicity, we only outline here the proof of that for \(\mathcal{U}\), for the same arguments apply for \(\mathcal{W}\), as well. To that end, we consider two deformations \(r=(r_{1},r_{2})\) and \(\widetilde{r}=(\widetilde{r}_{1},\widetilde{r}_{2})\) and we are going to use the notation \(\widetilde{f}\), for any given functional \(f\), to precise its dependence on \(\widetilde{r}\) instead of \(r\). Moreover, let us introduce the functions \[M(\theta)\stackrel{{\mathrm{def}}}{{=}}R_{k}(\theta)e^{ \mathrm{i}\theta},\qquad\mathcal{J}_{k}(\theta)\stackrel{{\mathrm{ def}}}{{=}}M^{\prime}(\theta),\] \[\mathcal{H}_{k}(\theta,\eta)\stackrel{{\mathrm{def}} }{{=}}\frac{M(\theta)-M(\eta)}{\Big{|}M(\theta)-M(\eta)\Big{|}^{2}},\] and we recall that \[R_{k}=\sqrt{b_{k}^{2}+2r_{k}}.\] In view of these notations, we recast (4.21) as \[\int_{0}^{2\pi}\mathcal{H}_{k}(\theta,\eta)\cdot\mathcal{J}_{k}(\eta)d\eta=0,\] for all \(\theta\in\mathbb{T}\). Therefore, we write, in view of (4.22), that \[\mathcal{U}(\theta)-\widetilde{\mathcal{U}}(\theta) =\mathcal{J}_{k}(\theta)\cdot\int_{0}^{2\pi}\mathcal{H}_{k}( \theta,\eta)\Big{(}\mathcal{J}_{k}(\eta)-\mathcal{J}_{k}(\theta)\Big{)}d\eta\] \[\quad+\mathcal{J}_{k}(\theta)\int_{0}^{2\pi}\mathcal{H}_{k}( \theta,\eta)\cdot\Big{(}\mathcal{J}_{k}(\theta)-\mathcal{J}_{k}(\eta)\Big{)}d\eta\] \[\quad-\widetilde{\mathcal{J}}_{k}(\theta)\cdot\int_{0}^{2\pi} \widetilde{\mathcal{H}}_{k}(\theta,\eta)\Big{(}\widetilde{\mathcal{J}}_{k}( \eta)-\widetilde{\mathcal{J}}_{k}(\theta)\Big{)}d\eta\] \[\quad-\widetilde{\mathcal{J}}_{k}(\theta)\int_{0}^{2\pi} \widetilde{\mathcal{H}}_{k}(\theta,\eta)\cdot\Big{(}\widetilde{\mathcal{J}}_{ k}(\theta)-\widetilde{\mathcal{J}}_{k}(\eta)\Big{)}d\eta\] \[\quad=\big{(}\mathcal{J}_{k}(\theta)-\widetilde{\mathcal{J}}_{k}( \theta)\big{)}\cdot\int_{0}^{2\pi}\mathcal{H}_{k}(\theta,\eta)\Big{(}\mathcal{ J}_{k}(\eta)-\mathcal{J}_{k}(\theta)\Big{)}d\eta\] \[\quad\quad+\widetilde{\mathcal{J}}_{k}(\theta)\cdot\int_{0}^{2 \pi}\big{(}\mathcal{H}_{k}(\theta,\eta)-\widetilde{\mathcal{H}}_{k}(\theta, \eta)\big{)}\Big{(}\mathcal{J}_{k}(\eta)-\mathcal{J}_{k}(\theta)\Big{)}d\eta\] \[\quad\quad+\widetilde{\mathcal{J}}_{k}(\theta)\cdot\int_{0}^{2\pi} \widetilde{\mathcal{H}}_{k}(\theta,\eta)\Big{(}\big{(}\mathcal{J}_{k}- \widetilde{\mathcal{J}}_{k}\big{)}(\eta)-\big{(}\mathcal{J}_{k}-\widetilde{ \mathcal{J}}_{k}\big{)}(\theta)\Big{)}d\eta\] \[\stackrel{{\mathrm{def}}}{{=}}\sum_{j=1}^{3} \mathcal{A}_{j}(\theta).\] Thus, by virtue of the same argument laid out in the regularity study of \(\mathcal{U}\) in the previous step of the proof, we observe that a direct application of Lemma 2.5 leads to the bound \[\|\mathcal{A}_{1}\|_{C^{\alpha}}+\|\mathcal{A}_{3}\|_{C^{\alpha}}\lesssim\| \mathcal{J}_{k}-\widetilde{\mathcal{J}}_{k}\|_{C^{\alpha}}\lesssim\|r_{k}- 
\widetilde{r}_{k}\|_{C^{1+\alpha}}\,.\] As for the estimate of \(A_{2}\), in order to apply Lemma 2.5, we first need to show that \[\big{|}\big{(}\mathcal{H}_{k}-\widetilde{\mathcal{H}}_{k}\big{)}(\theta,\eta) \big{|}\lesssim\|r-\widetilde{r}\|_{C^{1+\alpha}}\,\Big{|}\sin\left(\tfrac{ \theta-\eta}{2}\right)\Big{|}^{-1} \tag{4.24}\] and \[\big{|}\partial_{\theta}\big{(}\mathcal{H}_{k}-\widetilde{\mathcal{H}}_{k} \big{)}(\theta,\eta)\big{|}\lesssim\|r-\widetilde{r}\|_{C^{1+\alpha}}\,\Big{|} \sin\left(\tfrac{\theta-\eta}{2}\right)\Big{|}^{-2}, \tag{4.25}\] for all \(\theta\neq\eta\in\mathbb{T}\). To that end, we only need to write, by a direct computation, that \[\big{|}\mathcal{H}_{k}(\theta,\eta) -\widetilde{\mathcal{H}}_{k}(\theta,\eta)\big{|}\] \[\lesssim\big{|}\big{(}M-\widetilde{M}\big{)}(\theta)-\big{(}M- \widetilde{M}\big{)}(\eta)\big{|}\left(\frac{1}{\Big{|}M(\theta)-M(\eta)}+ \frac{1}{\Big{|}\widetilde{M}(\theta)-\widetilde{M}(\eta)\Big{|}}\right)^{2}.\] Therefore, by using the Lipschitz property of the functions \(M\) and \(\widetilde{M}\) combined with (4.23), one deduces that \[\big{|}\mathcal{H}_{k}(\theta,\eta)-\widetilde{\mathcal{H}}_{k}( \theta,\eta)\big{|} \lesssim\|M-\widetilde{M}\|_{C^{1}}\frac{|\theta-\eta|}{\Big{|} \sin\left(\tfrac{\theta-\eta}{2}\right)\Big{|}^{2}}\] \[\lesssim\|r-\widetilde{r}\|_{C^{1+\alpha}}\,\Big{|}\sin\left( \frac{\theta-\eta}{2}\right)\Big{|}^{-1},\] whereby showing (4.24). The justification of (4.25) can be done along the same lines, whence we skip its proof here. This completes the proof of the proposition. ### Spectral analysis of the linearized operator This section is devoted, first, to perform a refine analysis of the eigenvalues associated with the matrix \(M_{n}(\Omega)\), introduced in in Lemma 4.5 and defining the linearized operator \(d_{r}\mathcal{F}(\Omega,0)\), and, second, to establish the remaining prerequisite properties of that operator before we apply Crandall-Rabinowitz's theorem. **Proposition 4.7**.: _For a given \(n\in\mathbb{N}^{*},\) the matrix \(M_{n}(\Omega)\), introduced in Lemma 4.5 above, is not invertible if and only if \(\Omega=\Omega_{n}^{\pm},\) where_ \[\Omega_{n}^{\pm}\overset{\mathrm{def}}{=}\frac{1}{2(\delta+1)}\left(-(A_{n}+B _{n})\pm\sqrt{(A_{n}-B_{n})^{2}+4\delta\left(\frac{b^{n}}{2n}-I_{n}(b_{2}\mu)K _{n}(b_{1}\mu)\right)^{2}}\right), \tag{4.26}\] _where, we set_ \[\begin{split} A_{n}&\overset{\mathrm{def}}{=}( \delta+1)V_{b_{1},b_{2}}+\frac{\delta}{2n}+I_{n}(b_{1}\mu)K_{n}(b_{1}\mu)\\ B_{n}&\overset{\mathrm{def}}{=}(\delta+1)W_{b_{1},b_ {2}}+\frac{1}{2n}+\delta I_{n}(b_{2}\mu)K_{n}(b_{2}\mu),\end{split} \tag{4.27}\] _and we recall that \(V_{b_{1},b_{2}}\) and \(W_{b_{1},b_{2}}\) are defined in (4.1), above. Moreover, for any \(0<b_{2}\leq b_{1}\) and \(\delta>0\), the sequences \((\Omega_{n}^{\pm})_{n\in\mathbb{N}}\) are strictly increasing. 
In addition to that, there is \(p_{0}\in\mathbb{N}^{*}\) such that, for all \(m,n\geq p_{0}\), any \(0<b_{2}<b_{1}\) and \(\delta\geq\frac{b_{2}}{b_{1}}\), it holds that_ \[\Omega_{n}^{+}>\Omega_{m}^{-}.\] _Furthermore, regarding \(\Omega_{n}^{\pm}\) as functions of \(b_{2}\in(0,b_{1})\), for fixed \(b_{1}\in(0,\infty)\) and \(n\in\mathbb{N}^{*}\), and introducing the set_ \[S_{m,b_{1}}\overset{\mathrm{def}}{=}\left\{b_{2}\in(0,b_{1}):\exists n\in \mathbb{N}^{*},\ \Omega_{m}^{-}(b_{2})=\Omega_{n}^{+}(b_{2})\right\},\] _it then holds, for any \(m\in\mathbb{N}^{*}\), that the set \(S_{m,b_{1}}\) contains at most a finite number of elements._ _At last, in the particular case \(b_{1}=b_{2}\), there exists \(b_{1}\in(0,\infty)\) for which one has that_ \[\Omega_{n}^{+}=\Omega_{m}^{-},\quad\text{for some}\quad n,m\in\mathbb{N}^{*}.\] _Remark 4.4_.: It is readily seen that the set \(S_{m,b_{1}}\) is empty, for any \(m>p_{0}\), where \(p_{0}\) is as per given in the statement of Proposition 4.7, above. However, the arguments in our proof below are not enough to show that this set is empty for low \(m\)-fold symmetries. This remains then unclear for now and can probably be wrong in view to the example of spectral collisions given in the statement the same lemma above in the case \(b_{1}=b_{2}\). Proof.: We split the proof into four steps for a better readability. #### Computing the eigenvalues We first proceed by noticing that the matrix introduced in Lemma 4.5 can be recast as \[M_{n}(\Omega)=\begin{pmatrix}\Omega+\frac{A_{n}}{\delta+1}&\frac{1}{\delta+1} \Big{(}\frac{b^{n}}{2n}-I_{n}(b_{2}\mu)K_{n}(b_{1}\mu)\Big{)}\\ \\ \frac{\delta}{\delta+1}\Big{(}\frac{b^{n}}{2n}-I_{n}(b_{2}\mu)K_{n}(b_{1}\mu) \Big{)}&\Omega+\frac{B_{n}}{\delta+1},\end{pmatrix},\] for any \(n\in\mathbb{N}^{*}\), where \(A_{n}\) and \(B_{n}\) are defined in the statement of Proposition 4.7, above. Accordingly, it is readily seen that \[\det M_{n} =\left(\Omega+\frac{A_{n}}{\delta+1}\right)\left(\Omega+\frac{B_ {n}}{\delta+1}\right)-\frac{\delta}{(\delta+1)^{2}}\left(\frac{b^{n}}{2n}-I_{ n}(b_{2}\mu)K_{n}(b_{1}\mu)\right)^{2}\] \[=\Omega^{2}+\frac{A_{n}+B_{n}}{\delta+1}\Omega+\frac{A_{n}B_{n}}{ (\delta+1)^{2}}-\frac{\delta}{(\delta+1)^{2}}\left(\frac{b^{n}}{2n}-I_{n}(b_{ 2}\mu)K_{n}(b_{1}\mu)\right)^{2}.\] Therefore, noting that the discriminant of the preceding second order polynomial satisfies \[(\delta+1)^{2}\Delta_{n} =\big{(}A_{n}+B_{n}\big{)}^{2}-4A_{n}B_{n}+4\delta\left(\frac{b^{ n}}{2n}-I_{n}(b_{2}\mu)K_{n}(b_{1}\mu)\right)^{2}\] \[=\big{(}A_{n}-B_{n}\big{)}^{2}+4\delta\left(\frac{b^{n}}{2n}-I_{n }(b_{2}\mu)K_{n}(b_{1}\mu)\right)^{2}>0,\] it then follows, for all \(n\geq 1\), that there exist two angular velocities \(\Omega_{n}^{\pm}\) which are as given they are introduced in the lemma, and for which the matrix \(M_{n}(\Omega_{n}^{\pm})\) is singular. #### Monotony of the sequences \((\Omega_{n}^{\pm})_{n\in\mathbb{N}^{*}}\) Now, we provide a more precise analysis of the angular velocities \(\Omega_{n}^{\pm}\). 
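Before the analytic argument, we note that the closed formulas (4.26)-(4.27) and the matrix \(M_{n}(\Omega)\) of Lemma 4.5 can be evaluated directly with scipy's modified Bessel functions. The following sketch is purely illustrative (the parameter values are arbitrary and it is not part of the proof); it checks that \(\det M_{n}(\Omega_{n}^{\pm})\) vanishes up to rounding and displays the monotone behaviour of \(n\mapsto\Omega_{n}^{\pm}\) for the sampled parameters.

```python
import numpy as np
from scipy.special import iv, kv

# Arbitrary illustrative parameters (with delta >= b2/b1, as in Theorem 4.1).
lam, delta, b1, b2 = 1.0, 0.8, 1.0, 0.6
mu, b = lam * np.sqrt(1.0 + delta), b2 / b1

# V_{b1,b2} and W_{b1,b2} as in (4.1).
V = -(delta + b**2) / (2 * (1 + delta)) \
    - (iv(1, b1 * mu) * kv(1, b1 * mu) - b * iv(1, b2 * mu) * kv(1, b1 * mu)) / (1 + delta)
W = -0.5 - delta / (1 + delta) * (iv(1, b2 * mu) * kv(1, b2 * mu)
                                  - iv(1, b2 * mu) * kv(1, b1 * mu) / b)

def coefficients(n):
    """A_n, B_n from (4.27) and the coefficient gamma_{b1,b2,n}."""
    A = (delta + 1) * V + delta / (2 * n) + iv(n, b1 * mu) * kv(n, b1 * mu)
    B = (delta + 1) * W + 1 / (2 * n) + delta * iv(n, b2 * mu) * kv(n, b2 * mu)
    gam = b**n / (2 * n) - iv(n, b2 * mu) * kv(n, b1 * mu)
    return A, B, gam

def omega_pm(n):
    """Angular velocities Omega_n^+ and Omega_n^- from (4.26)."""
    A, B, gam = coefficients(n)
    disc = np.sqrt((A - B) ** 2 + 4 * delta * gam**2)
    return (-(A + B) + disc) / (2 * (delta + 1)), (-(A + B) - disc) / (2 * (delta + 1))

def det_M(n, Omega):
    """Determinant of the matrix M_n(Omega) from Lemma 4.5."""
    A, B, gam = coefficients(n)
    M = np.array([[Omega + A / (delta + 1), gam / (delta + 1)],
                  [delta * gam / (delta + 1), Omega + B / (delta + 1)]])
    return np.linalg.det(M)

for n in range(1, 8):
    om_p, om_m = omega_pm(n)
    # determinants should vanish up to rounding; Omega_n^± are expected to increase with n
    print(n, om_p, om_m, det_M(n, om_p), det_M(n, om_m))
```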
To that end, note first, due to the representations \[A_{n}=-\delta\left(\frac{1}{2}-\frac{1}{2n}\right)-b\left(\frac{b}{2}-I_{1}(b_ {2}\mu)K_{1}(b_{1}\mu)\right)-\Big{(}I_{1}(b_{1}\mu)K_{1}(b_{1}\mu)-I_{n}(b_{ 1}\mu)K_{n}(b_{1}\mu)\Big{)}\] and \[B_{n}=-\left(\frac{1}{2}-\frac{1}{2n}\right)-\frac{\delta}{b}\left(\frac{b}{2} -I_{1}(b_{2}\mu)K_{1}(b_{1}\mu)\right)-\delta\Big{(}I_{1}(b_{2}\mu)K_{1}(b_{2 }\mu)-I_{n}(b_{2}\mu)K_{n}(b_{2}\mu)\Big{)},\] together with Lemma 2.1, that the sequences \((A_{n})_{n\in\mathbb{N}^{*}}\) and \((B_{n})_{n\in\mathbb{N}^{*}}\) are non-positive and non-increasing. Now, we claim that \((\Omega_{n}^{\pm})_{n\in\mathbb{N}^{*}}\), defined in (4.26), are both non-negative, non-decreasing sequences. To see that, we regard these sequences as functions of the real variable \(x\in[1,\infty)\), i.e., we now consider the functions \(x\mapsto\Omega_{x}^{\pm}\) which extend the preceding sequences on \([1,\infty)\). Note that it is easy to check that these functions are differentiable. Therefore, denoting \[M_{x}\stackrel{{\rm def}}{{=}}\frac{A_{x}-B_{x}}{\delta+1}, \qquad\Gamma_{x}\stackrel{{\rm def}}{{=}}\frac{\frac{b^{x}}{2x}-I_ {x}(b_{2}\mu)K_{x}(b_{1}\mu)}{\delta+1},\] we compute, for any \(x\in[1,\infty)\), that \[2(\delta+1)\partial_{x}\Omega_{x}^{\pm}=\left(\pm\frac{M_{x}}{\sqrt{ M_{x}^{2}+4\delta\Gamma_{x}^{2}}}-1\right)\partial_{x}A_{x}-\left(\pm\frac{M_{x}}{ \sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}}}+1\right)\partial_{x}B_{x}\] \[\qquad\qquad\qquad\pm\frac{4\delta\Gamma_{x}}{\sqrt{M_{x}^{2}+4 \delta\Gamma_{x}^{2}}}\partial_{x}\left(\frac{b^{x}}{2x}-I_{x}(b_{2}\mu)K_{x}( b_{1}\mu)\right).\] Then, from the expression of \(A_{x}\) and \(B_{x}\), given by (4.27), we get \[2(\delta+1)\partial_{x}\Omega_{x}^{\pm}=- \left(\delta+1\mp\frac{(\delta-1)M_{x}}{\sqrt{M_{x}^{2}+4\delta \Gamma_{x}^{2}}}\right)\partial_{x}\left(\frac{1}{2x}\right)\] \[-\left(1\mp\frac{M_{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}}} \right)\partial_{x}\Big{(}I_{x}(b_{1}\mu)K_{x}(b_{1}\mu)\Big{)}\] \[-\delta\left(1\pm\frac{M_{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{ 2}}}\right)\partial_{x}\left(I_{x}(b_{2}\mu)K_{x}(b_{2}\mu)\right)\] \[\pm\frac{4\delta\Gamma_{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2} }}\partial_{x}\left(\frac{b^{x}}{2x}-I_{x}(b_{2}\mu)K_{x}(b_{1}\mu)\right).\] Hence, one deduces that \[2(\delta+1)\partial_{x}\Omega_{x}^{-}=- (\delta+1)\Bigg{(}\frac{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}}+M _{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}}}\Bigg{)}\partial_{x}\left(\frac{ 1}{2x}\right)\] \[-\Bigg{(}1+\frac{M_{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}}} \Bigg{)}\partial_{x}\Big{(}I_{x}(b_{1}\mu)K_{x}(b_{1}\mu)\Big{)}\] \[-\delta\Bigg{(}1-\frac{M_{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{ 2}}}\Bigg{)}\partial_{x}\Big{(}I_{x}(b_{2}\mu)K_{x}(b_{2}\mu)\Big{)}\] \[-\frac{4\delta\Gamma_{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}} }\partial_{x}\Big{(}\frac{b^{x}}{2x}-I_{x}(b_{2}\mu)K_{x}(b_{1}\mu)\Big{)}.\] Likewise for \(\Omega_{x}^{+}\), by further employing the straitforward computation \[\delta+1-\frac{(\delta-1)M_{x}+4\delta\Gamma_{x}}{\sqrt{M_{x}^{2} +4\delta\Gamma_{x}^{2}}} =\frac{(\delta+1)\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}}-(\delta- 1)M_{x}-4\delta\Gamma_{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}}}\] \[=\frac{(\delta+1)^{2}(M_{x}^{2}+4\delta\Gamma_{x}^{2})-\big{(}( \delta-1)M_{x}+4\delta\Gamma_{x}\big{)}^{2}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{ x}^{2}}\big{(}(\delta+1)\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}}+(\delta-1)M_{x}+4 \delta\Gamma_{x}\big{)}}\] 
\[=4\delta\left(\frac{M_{x}^{2}+(\delta-1)^{2}\Gamma_{x}^{2}-2( \delta-1)M_{x}\Gamma_{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}}\big{(}( \delta+1)\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}}+(\delta-1)M_{x}+4\delta\Gamma _{x}\big{)}}\right)\] \[=\frac{4\delta\big{(}M_{x}-(\delta-1)\Gamma_{x}\big{)}^{2}}{\sqrt{ M_{x}^{2}+4\delta\Gamma_{x}^{2}}\big{(}(\delta+1)\sqrt{M_{x}^{2}+4\delta \Gamma_{x}^{2}}+(\delta-1)M_{x}+4\delta\Gamma_{x}\big{)}},\] it is then readily seen that \[2(\delta+1)\partial_{x}\Omega_{x}^{+} =-\bigg{(}\frac{4\delta\big{(}M_{x}-(\delta-1)\Gamma_{x}\big{)}^{2}}{ \sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}}\big{(}(\delta+1)\sqrt{M_{x}^{2}+4\delta \Gamma_{x}^{2}}+(\delta-1)M_{x}+4\delta\Gamma_{x}\big{)}}\bigg{)}\partial_{x} \left(\frac{1}{2x}\right)\] \[\quad-\frac{4\delta\Gamma_{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^ {2}}}\partial_{x}\left(\frac{1}{2x}-\frac{b^{x}}{2x}\right)\] \[\quad-\bigg{(}1-\frac{M_{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{ 2}}}\bigg{)}\partial_{x}\Big{(}I_{x}(b_{1}\mu)K_{x}(b_{1}\mu)\Big{)}\] \[\quad-\delta\bigg{(}1+\frac{M_{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_ {x}^{2}}}\bigg{)}\partial_{x}\Big{(}I_{x}(b_{2}\mu)K_{x}(b_{2}\mu)\Big{)}\] \[\quad-\frac{4\delta\Gamma_{x}}{\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^ {2}}}\partial_{x}\Big{(}I_{x}(b_{2}\mu)K_{x}(b_{1}\mu)\Big{)}.\] At last, the conclusion of the proof of the monotonicity if the function \(x\mapsto\Omega_{x}^{\pm}\) follows from the fact that the functions \[x\mapsto\frac{1}{x},\qquad x\mapsto I_{x}(b_{k}\mu)K_{x}(b_{j}\mu),\qquad x \mapsto\frac{1}{2x}-\frac{b^{x}}{2x}\] are decreasing, for any \(k,j\in\{1,2\}\) as soon as \(b_{k}\leq b_{j}\), and \(b\in(0,1)\), altogether with the fact that \(\Gamma_{x}\geq 0\) and that \[-\sqrt{M_{x}^{2}+4\delta\Gamma_{x}^{2}}\leqslant M_{x}\leqslant\sqrt{M_{x}^{2 }+4\delta\Gamma_{x}^{2}},\] for any \(x\in[1,\infty)\). As a consequence of the preceding computations, we deduce that \[\Omega_{n}^{\pm}\neq\Omega_{m}^{\pm},\] for all \(m\neq n\in\mathbb{N}^{*}\). We will now precise sufficient conditions that will ensure that \[\Omega_{n}^{+}\neq\Omega_{m}^{-}.\] To that end, we first observe that \[\Omega_{n}^{-}<\Omega_{n}^{+},\] for all \(n\in\mathbb{N}^{*}\). Moreover, by denoting \[A_{\infty}\stackrel{{\rm def}}{{=}}\lim_{n\to\infty}A_{n}\quad \text{and}\quad B_{\infty}\stackrel{{\rm def}}{{=}}\lim_{n\to \infty}B_{n},\] it then follows that that \[\Omega_{\infty}^{\pm}\stackrel{{\rm def}}{{=}}-\frac{1}{2( \delta+1)}\Big{(}A_{\infty}+B_{\infty}\mp|A_{\infty}-B_{\infty}|\Big{)}.\] On the other hand, we observe that \[A_{\infty}=-\left(\frac{b^{2}+\delta}{2}+I_{1}(b_{1}\mu)K_{1}(b_{1}\mu)-bI_{1} (b_{2}\mu)K_{1}(b_{1}\mu)\right),\] whereas \[B_{\infty}=-\left(\frac{1+\delta}{2}+\delta I_{1}(b_{1}\mu)K_{1}(b_{1}\mu)- \frac{\delta}{b}I_{1}(b_{2}\mu)K_{1}(b_{1}\mu)\right).\] Therefore, one sees that \[|A_{\infty}-B_{\infty}|=\left|\big{(}\delta-b^{2}\big{)}\left(\frac{1}{2}- \frac{1}{b}I_{1}(b_{2}\mu)K_{1}(b_{1}\mu)\right)-(\delta-1)\left(\frac{1}{2}- I_{1}(b_{1}\mu)K_{1}(b_{1}\mu)\right)\right|.\] Observe that the value of preceding quantity is identically zero when \(b_{1}=b_{2}\). However, we are going to show now that it is strictly positive elsewhere. To see that, we notice, by using the strict monotonicity of the function \[x\mapsto\frac{1}{x}I_{1}(x)\] on \((0,\infty)\), which is proved in Lemma 2.1, that \[\frac{1}{b}I_{1}(b_{2}\mu)-I_{1}(b_{1}\mu)=b_{1}\mu\left(\frac{1}{b_{2}\mu}I_{1}( b_{2}\mu)-\frac{1}{b_{1}\mu}I_{1}(b_{1}\mu)\right)<0,\] as long as \(0<b_{1}<b_{2}\). 
In view of the preceding inequality, we now observe, thanks to the assumption \(\delta\geq b^{2}\), that \[\left(\delta-b^{2}\right)\left(\frac{1}{2}-\frac{1}{b}I_{1}(b_{2 }\mu)K_{1}(b_{1}\mu)\right)-\left(\delta-1\right)\left(\frac{1}{2}-I_{1}(b_{1} \mu)K_{1}(b_{1}\mu)\right)\\ >(1-b^{2})\left(\frac{1}{2}-I_{1}(b_{1}\mu)K_{1}(b_{1}\mu)\right) >0.\] Thus, we deduce, for any \(\delta>b^{2}\) and \(b_{1}\neq b_{2}\), that \(A_{\infty}\neq B_{\infty}\) and, therefore, that \[\Omega_{\infty}^{+}=-\frac{1}{\delta+1}\min\{A_{\infty},B_{\infty}\}>\Omega_{ \infty}^{-}=-\frac{1}{\delta+1}\max\{A_{\infty},B_{\infty}\}.\] Accordingly, due to the monotonicity of the sequences \((\Omega_{n}^{\pm})_{n\in\mathbb{N}^{*}}\), it is then readily seen that one can find \(p_{0}\in\mathbb{N}^{*}\) such that \[\Omega_{m}^{+}\geq\Omega_{p_{0}}^{+}>\Omega_{\infty}^{-}>\Omega_{n}^{-},\] for all \(m\geq p_{0}\) and \(n\in\mathbb{N}^{*}\), thereby concluding that \[\Omega_{m}^{+}>\Omega_{n}^{-},\] for all \(n,m\geq p_{0}\). _Employing analyticity to study the cardinal of spectral collisions._ Let us now assume that \(1\leq n<p_{0}\) and fix \(m\in\mathbb{N}^{*}\) with \(1\leq n<m\). Also, we now regard \(\Omega_{n}^{\pm}\) as analytic functions of the real variable \(b_{2}\in(0,b_{1})\). We emphasize that this is a consequence of the analytic property of the Bessel functions defining the eigenvalues \(\Omega_{n}^{\pm}\), see [43, Appendix B.2]. Thus, by virtue of the monotony of the eigenvalues \(n\mapsto\Omega_{n}^{\pm}\), we infer, for any \(1\leq n\leq p_{0}\), that there is at most one index \(m>n\) for which the equality \[\Omega_{m}^{-}=\Omega_{n}^{+}\] can probably hold for some \(b_{2}\in(0,b_{1})\). Accordingly, for \(1\leq n\leq p_{0}\), we introduce the (possibly empty) set \[S_{n,b_{1}}\stackrel{{\rm def}}{{=}}\left\{b_{2}\in(0,b_{1}): \exists m=m(n)>n,\quad\text{such that}\quad\Omega_{m(n)}^{-}(b_{2})=\Omega_{n}^{ +}(b_{2})\right\}.\] Therefore, we claim that there is \(q_{0}\in\mathbb{N}^{*}\) and a finite sequence of real numbers \((c_{j})_{1\leq j\leq q_{0}}\) such that \[\bigcup_{1\leq n\leq p_{0}}S_{n,b_{1}}\subset\left\{c_{j}\in(0,1):1\leq j\leq q _{0}\right\}. \tag{4.28}\] In other words, loosely speaking, we claim that the set of values \(b_{2}\in(0,b_{1})\) for which the sequence of eigenvalues of the matrix \(M_{n}(\Omega)\) can match is, in worse cases, negligible for Lebesgue's measure. Again, showing that relies on the analytic property of the function \[b_{2}\mapsto\Omega_{n}^{+}(b_{2})-\Omega_{m(n)}^{-}(b_{2})\] on \((0,b_{1})\), for all \(n\in\mathbb{N}^{*}\), which yields that this non constant function has at most isolated zeros on open connected sets. Consequently, we deduce that \[S_{n,b_{1}}\subset\left\{c_{j,n}\in(0,b_{1}):1\leq j\leq j_{*}\right\},\quad \text{for some}\quad j_{*}\in\mathbb{N}^{*},\] and some real numbers \(c_{j,n}\in(0,b_{1})\), whereby (4.28) follows. At last, we deduce that \[\Omega_{n}^{+}\neq\Omega_{m}^{-},\] for any \(b_{2}\in(0,b_{1})\backslash\{c_{j}\}_{1\leq j\leq q_{0}}\) and all \(n,m\in\mathbb{N}^{*}\). 
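The steps above repeatedly invoke the elementary Bessel-product properties recorded in Lemma 2.1 (monotonicity of \(n\mapsto I_{n}(x)K_{n}(x)\), of \(x\mapsto I_{1}(x)K_{1}(x)\), and of \(x\mapsto I_{1}(x)/x\)). The following short numerical check is not part of the proof; it is only a minimal sketch, assuming SciPy's `scipy.special` is available, and the sample values of \(x\) are arbitrary.

```python
# Minimal numerical sanity check (not part of the proof) of the Bessel-product
# facts from Lemma 2.1 that are invoked in the steps above; assumes SciPy.
import numpy as np
from scipy.special import iv, kv  # modified Bessel functions I_n and K_n

x = 2.0  # arbitrary sample point
prod_in_n = [iv(n, x) * kv(n, x) for n in range(1, 7)]
assert all(a > b for a, b in zip(prod_in_n, prod_in_n[1:])), \
    "n -> I_n(x) K_n(x) should be decreasing"

xs = np.linspace(0.05, 20.0, 400)
i1k1 = iv(1, xs) * kv(1, xs)
assert np.all(np.diff(i1k1) < 0), "x -> I_1(x) K_1(x) should be decreasing"
print(f"I_1 K_1 at x={xs[0]:.2f}: {i1k1[0]:.4f} (close to 1/2); "
      f"at x={xs[-1]:.1f}: {i1k1[-1]:.4f} (tending to 0)")

ratio = iv(1, xs) / xs
assert np.all(np.diff(ratio) > 0), "x -> I_1(x)/x should be increasing"
```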
_Example of spectral collisions._ Let us now consider the case of identical initial discs, i.e., \(b_{1}=b_{2}\) and show that, in this particular situation, there exist values of \(b_{1}\in(0,\infty)\) for which the sequence of matrix \(\big{(}M_{n}(\Omega)\big{)}_{n\in\mathbb{N}^{*}}\) possess a simple sequence of eigenvalues, i.e., that \[\Omega_{n}^{+}\neq\Omega_{n}^{-},\quad\text{for all}\quad n\in\mathbb{N}^{*},\] but with possible "counterpart collision", i.e, \[\Omega_{n}^{+}=\Omega_{m}^{-},\quad\text{for some}\quad n,m\in\mathbb{N}^{*}.\] To see that, we emphasize that a direct computation of the angular velocities \(\Omega_{n}^{\pm}\), by setting \(b_{1}=b_{2}\) in (4.26), yields that \[\Omega_{n}^{+}=\frac{1}{2}-I_{n}K_{n}(b_{1}\mu)\qquad\text{and}\qquad\Omega_{ n}^{-}=\frac{1}{2}-\frac{1}{2n}. \tag{4.29}\] Therefore, in view of Lemma 2.1 above, it is easy to see that both sequences \((\Omega_{n}^{\pm})_{n\in\mathbb{N}^{*}}\) are increasing and converge to the same limit \(\frac{1}{2}\). Also, it is readily seen, again by virtue of Lemma 2.1, that \[\Omega_{n}^{+}>\Omega_{n}^{-},\quad\text{for all}\quad n\in\mathbb{N}^{*},\] and, whence, by monotony of the sequence \((\Omega_{n}^{-})_{n\in\mathbb{N}^{*}}\), that \[\Omega_{n}^{+}\neq\Omega_{m}^{-},\quad\text{for all}\quad n\geq m\in\mathbb{N}^{*}.\] However, using the asymptotic properties of the function \(x\mapsto I_{n}K_{n}(x)\) to write that \[\lim_{x\to 0}I_{1}K_{1}(x)=\frac{1}{2}\qquad\text{and}\qquad\lim_{x\to \infty}I_{1}K_{1}(x)=0,\] meaning that \(x\mapsto I_{1}K_{1}(x)\) is a continuous function and takes values in \((0,\frac{1}{2})\), allows us to deduce the existence of \(x_{0}\in(0,\infty)\) and \(m\in\mathbb{N}^{*}\) such that \[I_{1}K_{1}(x_{0})=\frac{1}{2m},\] thereby deducing that there is \(b_{1}=b_{2}\in(0,\infty)\) such that \[\Omega_{1}^{+}=\Omega_{m}^{-},\quad\text{for some}\quad m\in\mathbb{N}^{*}.\] This shows the existence of spectral collisions in the case \(b_{1}=b_{2}\) and concludes the proof of the proposition. ### Applying Crandall-Rabinowitz's theorem and proof of Theorem 1.3 Now, we are in position to establish the last prerequisites before we apply Crandall-Rabinowitz's theorem. In particular, in the following proposition, we prove essential properties of the kernel and image of the linearized operator \(d_{r}\mathcal{F}\) along with the transversality condition. **Proposition 4.8**.: _Let \(m\in\mathbb{N}^{*}\) and \(\alpha\in(0,1)\) be fixed and assume that the radii of the initial discs are such that \(0<b_{2}<b_{1}\). Further assume that \(\delta\geq\frac{b_{2}}{b_{1}}\) and either \(m\geq p_{0}\) or \(b_{2}\notin S_{m,b_{1}}\), where \(p_{0}\) and \(S_{m,b_{1}}\) are introduced in Proposition 4.7 above. Then, the following holds true:_ 1. _The linearized operator_ \(d_{r}\mathcal{F}(\Omega,0)\) _has a non trivial kernel in_ \(\mathcal{X}_{m}^{\alpha}\) _if and only if_ \(\Omega=\Omega_{mn}^{\pm}\) _for some_ \(n\in\mathbb{N}^{*}\)_, where_ \(\Omega_{mn}^{\pm}\) _is given by (_4.26_). In this case, the kernel of_ \(d_{r}\mathcal{F}(\Omega_{m}^{\pm},0)\) _is a one-dimensional vector space in_ \(\mathcal{X}_{m}^{\alpha}\) _generated by_ \[\theta\mapsto\begin{pmatrix}\Omega_{m}^{\pm}+\frac{B_{m}}{\delta+1}\\ -\frac{\delta}{\delta+1}\gamma_{m}\end{pmatrix}\cos(m\theta),\] _where,_ \(B_{m}\) _is defined by (_4.27_) and we set_ (4.30) \[\gamma_{m}\stackrel{{\rm def}}{{=}}\frac{b^{m}}{2m}-I_{m}(b_{2}\mu )K_{m}(b_{1}\mu),\qquad b=\frac{b_{2}}{b_{1}}.\] 2. 
_The range of_ \(d_{r}\mathcal{F}(\Omega_{m}^{\pm},0)\) _is closed in_ \(\mathcal{Y}_{m}^{\alpha}\) _and is of co-dimension one._ _._ 3. _At last, the transversality condition holds, i.e., we have that_ \[\partial_{\Omega}d_{r}\mathcal{F}(\Omega_{m}^{\pm},0)\begin{pmatrix}1\\ \sigma_{m}\end{pmatrix}\cos(m\cdot)\notin R\big{(}\partial_{r}\mathcal{F}( \Omega_{m}^{\pm},0)\big{)}.\] Proof.: Let \(\rho=(\rho_{1},\rho_{2})\in\mathcal{X}_{m}^{\alpha}\) with Fourier series expansions \[\rho_{k}(\theta)=\sum_{n\geq 1}c_{nm,k}\cos(nm\theta),\] for some \(c_{n,k}\in\mathbb{R}\) and all \(k\in\{1,2\}\). According to Lemma 4.5 above, one has that \[d_{r}\mathcal{F}(\Omega,0)\begin{pmatrix}\rho_{1}\\ \rho_{2}\end{pmatrix}=-\sum_{n\geq 1}nmM_{nm}(\Omega)\begin{pmatrix}c_{nm,1} \\ c_{nm,2}\end{pmatrix}\sin(nm\theta). \tag{4.31}\] Hence, in view of Proposition 4.7, the determinant of the matrix \(M_{m}(\Omega)\) vanishes if and only if \(\Omega=\Omega_{m}^{\pm}\). Thus, the kernel of \(d_{r}\mathcal{F}(\Omega_{m}^{\pm},0)\) is non trivial and it is one-dimensional if and only if for all \(n\geq 2\), \[\det\bigl{(}M_{nm}(\Omega_{m}^{\pm})\bigr{)}\neq 0 \tag{4.32}\] which is equivalent to \[\Omega_{m}^{\pm}\neq\Omega_{nm}^{\pm}\quad\text{for all}\quad n\neq 1.\] The preceding condition is ensured by Proposition 4.7 whenever one the two following conditions are fulfilled: either \(m>p_{0}\) or that \(b_{2}\notin S_{m,b_{1}}\). Now, observe that \(\rho=(\rho_{1},\rho_{2})\) belongs to the kernel of \(d_{r}\mathcal{F}(\Omega_{m}^{\pm},0)\) if and only the Fourier coefficients in its Fourier expansion (4.31) vanish, i.e, if \[c_{nm,1}=c_{nm,2}=0,\] for all \(n\neq 1\), and \[(c_{m,1},c_{m,2})\in\ker\bigl{(}M_{m}(\Omega_{m}^{\pm})\bigr{)}.\] Hence, by noticing that \[\begin{pmatrix}\Omega_{m}^{\pm}+\frac{A_{m}}{\delta+1}&\frac{\gamma_{m}}{ \delta+1}\\ &\\ \frac{\delta}{\delta+1}\gamma_{m}&\Omega_{m}^{\pm}+\frac{B_{m}}{\delta+1}, \end{pmatrix}\cdot\begin{pmatrix}\Omega_{m}^{\pm}+\frac{B_{m}}{\delta+1}\\ -\frac{\delta}{\delta+1}\gamma_{m}\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix},\] one then deduces that the generator of \(\ker\bigl{(}d_{r}\mathcal{F}(\Omega_{m}^{\pm},0)\bigr{)}\) can be chosen as the pair of functions \[\theta\mapsto\begin{pmatrix}\Omega_{m}^{\pm}+\frac{B_{m}}{\delta+1}\\ -\frac{\delta}{\delta+1}\gamma_{m}\end{pmatrix}\cos(m\theta).\] This takes care of the first claim in the proposition. Now, we turn our attention to show that the range of \(d_{r}\mathcal{F}(\Omega_{m}^{\pm},0)\) coincides with the space of functions \((g_{1},g_{2})\in\mathcal{Y}_{m}^{\alpha}\) such that \[\begin{pmatrix}g_{1}\\ g_{2}\end{pmatrix}=\sum_{n\geq 1}\begin{pmatrix}g_{nm,1}\\ g_{nm,2}\end{pmatrix}\sin(nm\theta), \tag{4.33}\] where \(g_{nm,1},g_{nm,2}\in\mathbb{R}\), for any \(n\geq 2\), and there exists \(c_{m,1},c_{m,2}\in\mathbb{R}\) such that \[M_{m}(\Omega_{m}^{\pm})\begin{pmatrix}c_{m,1}\\ c_{m,2}\end{pmatrix}=\begin{pmatrix}g_{m,1}\\ g_{m,2}\end{pmatrix}. \tag{4.34}\] First, observe that the range of \(d_{r}\mathcal{F}(\Omega_{m}^{\pm},0)\) is obviously included in the space introduced above, which is clearly closed and of co-dimension one in \(\mathcal{Y}_{m}^{\alpha}\). Therefore, it only remains to check the converse inclusion. 
To see that, fixing \((g_{1},g_{2})\in\mathcal{Y}_{m}^{\alpha}\), we shall prove that the equation \[d_{r}\mathcal{F}(\Omega_{m}^{\pm},0)[(\rho_{1},\rho_{2})]=\begin{pmatrix}g_{1} \\ g_{2}\end{pmatrix} \tag{4.35}\] admits a solution in the space \(\mathcal{X}_{m}^{\alpha}\), as soon as \((g_{1},g_{2})\) satisfies (4.33) and (4.34) for some \(c_{nm,1},c_{nm,2}\in\mathbb{R}\). To that end, we first notice that (4.35) is equivalent to \[-nmM_{nm}(\Omega_{m}^{\pm})\begin{pmatrix}c_{nm,1}\\ c_{nm,2}\end{pmatrix}=\begin{pmatrix}g_{nm,1}\\ g_{nm,2}\end{pmatrix},\quad\text{for all}\quad n\geq 1.\] The existence of \(c_{m,1}\) and \(c_{m,2}\) is obviously ensured by (4.34). As for the remaining coefficients, it is readily seen that \[\begin{pmatrix}c_{nm,1}\\ c_{nm,2}\end{pmatrix}=-\frac{1}{nm}M_{nm}^{-1}(\Omega_{m}^{\pm})\begin{pmatrix} g_{nm,1}\\ g_{nm,2}\end{pmatrix},\quad\text{for all}\quad n\geq 2,\] where we have used the fact that the matrix \(M_{nm}(\Omega_{m}^{\pm})\) is invertible for all \(n\geq 2\), which is established at the beginning of this proof. This defines the coefficients \(c_{nm,1}\) and \(c_{nm,2}\). Next, for a given couple of functions \((g_{1},g_{2})\in\mathcal{Y}_{m}^{\alpha}\) as in (4.33), we need to show that the solution of (4.35) belongs to the space \(\mathcal{X}_{m}^{\alpha}\). More precisely, we claim that \[\begin{pmatrix}\rho_{1}\\ \rho_{2}\end{pmatrix}=\begin{pmatrix}c_{m,1}\\ c_{m,2}\end{pmatrix}\cos(m\theta)-\sum_{n\geq 2}\frac{1}{nm}M_{nm}^{-1}(\Omega_{m}^ {\pm})\begin{pmatrix}g_{nm,1}\\ g_{nm,2}\end{pmatrix}\cos(nm\theta)\in C^{1+\alpha}(\mathbb{T})\times C^{1+ \alpha}(\mathbb{T}),\] where, for \(c_{m,1}\) and \(c_{m,2}\) are defined in (4.34). It is clear that we can restrict our selves to showing that \[\sum_{n\geq 2}\frac{1}{nm}M_{nm}^{-1}(\Omega_{m}^{\pm})\begin{pmatrix}g_{nm,1} \\ g_{nm,2}\end{pmatrix}\cos(nm\theta)\in C^{1+\alpha}(\mathbb{T})\times C^{1+ \alpha}(\mathbb{T}),\] which is equivalent to proving that \[\begin{pmatrix}\widetilde{\rho}_{1}\\ \widetilde{\rho}_{2}\end{pmatrix}\stackrel{{\rm def}}{{=}}\sum_{ n\geq 2}M_{nm}^{-1}(\Omega_{m}^{\pm})\begin{pmatrix}g_{nm,1}\\ g_{nm,2}\end{pmatrix}\sin(nm\theta)\in C^{\alpha}(\mathbb{T})\times C^{\alpha }(\mathbb{T}). \tag{4.36}\] Now, by a direct computation, it is easy to check that \[M_{nm}^{-1}(\Omega_{m}^{\pm})=\frac{1}{\det(M_{nm}(\Omega_{m}^{\pm}))}\begin {pmatrix}\Omega_{m}^{\pm}+\frac{B_{mn}}{\delta+1}&-\frac{\gamma_{nm}}{\delta+1} \\ -\frac{\delta\gamma_{nm}}{\delta+1}&\Omega_{m}^{\pm}+\frac{A_{mn}}{\delta+1} \end{pmatrix},\quad\text{for all}\quad n\geq 2,\] where \(\gamma_{n}\) is defined in (4.30) whereas \(A_{n}\) and \(B_{n}\) are introduced in (4.27). Note that the preceding matrix is well defined thanks to (4.32) which ensures that \[\det\left(M_{nm}(\Omega_{m}^{\pm})\right)\neq 0,\quad\text{for all}\quad n\geq 2.\] We further introduce the matrix \[P_{n,m}\stackrel{{\rm def}}{{=}}M_{nm}^{-1}(\Omega_{m}^{\pm})-M_ {\infty}^{-1}(\Omega_{m}^{\pm}),\quad\text{for all}\quad n\geq 2,\] where \[M_{\infty}^{-1}(\Omega_{m}^{\pm})\stackrel{{\rm def}}{{=}}\lim_ {n\to\infty}M_{mn}^{-1}(\Omega_{m}^{\pm}),\] and we claim that \[|P_{n,m}|=O\left(\frac{1}{n}\right),\quad\text{for all}\quad n\geq 2. 
\tag{4.37}\] It is easy to see that the preceding claim follows as a consequence of the following asymptotics: \[\left|\frac{1}{\det(M_{nm}(\Omega_{m}^{\pm}))}-\frac{1}{\det(M_{\infty}(\Omega_ {m}^{\pm}))}\right|=O\left(\frac{1}{n}\right)\] and \[\left|\begin{pmatrix}\frac{B_{mn}}{\delta+1}&-\frac{\gamma_{nm}}{\delta+1}\\ -\frac{\delta\gamma_{nm}}{\delta+1}&\frac{A_{mn}}{\delta+1}\end{pmatrix}- \begin{pmatrix}\frac{B_{\infty}}{\delta+1}&0\\ 0&\frac{A_{\infty}}{\delta+1}\end{pmatrix}\right|=O\left(\frac{1}{n}\right),\] as \(n\to\infty\). These asymptotic identities are a direct consequence of the decay properties of Bessel functions, see in particular (2.6). This justifies our claim (4.37). Now, we are ready to conclude the proof of our main claim (4.36). To that end, we first write that \[\begin{pmatrix}\widetilde{\rho}_{1}\\ \widetilde{\rho}_{2}\end{pmatrix}=\sum_{n\geq 2}P_{n,m}(\Omega_{m}^{\pm}) \begin{pmatrix}g_{nm,1}\\ g_{nm,2}\end{pmatrix}\sin(nm\theta)-M_{\infty}^{-1}(\Omega_{m}^{\pm})\sum_{n \geq 2}\begin{pmatrix}g_{nm,1}\\ g_{nm,2}\end{pmatrix}\sin(nm\theta).\] On the one hand, due to the assumption that \((g_{1},g_{2})\in\mathcal{Y}_{m}^{\alpha}\), it is readily seen that \[M_{\infty}^{-1}(\Omega_{m}^{\pm})\sum_{n\geq 2}\begin{pmatrix}g_{nm,1}\\ g_{nm,2}\end{pmatrix}\sin(nm\theta)\in C^{\alpha}(\mathbb{T})\times C^{\alpha} (\mathbb{T}).\] On the other hand, since we have proved that \[P_{n,m}=O\left(\frac{1}{n}\right),\quad\text{for all}\quad n\geq 2,\] it then follows that \[\theta\mapsto\Theta(\theta)\stackrel{{\text{def}}}{{=}}\sum_{n \geq 2}P_{n,m}\sin(nm\theta)\in L^{2}(\mathbb{T})\subset L^{1}(\mathbb{T}).\] Accordingly, we deduce that \[\sum_{n\geq 2}P_{n,m}(\Omega_{m}^{\pm})\begin{pmatrix}g_{nm,1}\\ g_{nm,2}\end{pmatrix}\sin(nm\theta)=\Theta*\begin{pmatrix}g_{1}\\ g_{2}\end{pmatrix}(\theta)\in C^{\alpha}(\mathbb{T})\times C^{\alpha}(\mathbb{T }),\] where the convolution above is written in a vectorial format. This completes the proof of (4.36). Finally, we are left with the proof of the transversality condition. To that end, we first write, by differentiating (4.31) with respect to \(\Omega\), that \[\partial_{\Omega}d_{r}\mathcal{F}(\Omega,0)\begin{pmatrix}\rho_{1}\\ \rho_{2}\end{pmatrix}=-\sum_{n\geq 1}n\begin{pmatrix}c_{n,1}\\ c_{n,2}\end{pmatrix}\sin(n\theta),\] for any \(\Omega\in\mathbb{R}\). 
Therefore, we obtain that \[\partial_{\Omega}d_{r}\mathcal{F}(\Omega_{m}^{\pm},0)\begin{pmatrix}\Omega_{m}^{\pm}+\frac{B_{m}}{\delta+1}\\ -\frac{\delta}{\delta+1}\gamma_{m}\end{pmatrix}\cos(m\theta)=-m\begin{pmatrix}\Omega_{m}^{\pm}+\frac{B_{m}}{\delta+1}\\ -\frac{\delta}{\delta+1}\gamma_{m}\end{pmatrix}\sin(m\theta).\] Hence, in view of (4.34), it follows that \[\partial_{\Omega}d_{r}\mathcal{F}(\Omega_{m}^{\pm},0)\begin{pmatrix}\Omega_{m}^{\pm}+\frac{B_{m}}{\delta+1}\\ -\frac{\delta}{\delta+1}\gamma_{m}\end{pmatrix}\cos(m\theta)\in\mathrm{R}\big{(}d_{r}\mathcal{F}(\Omega_{m}^{\pm},0)\big{)}\] if and only if there exist two real numbers \(c_{m,1}\) and \(c_{m,2}\) such that \[M_{m}(\Omega_{m}^{\pm})\begin{pmatrix}c_{m,1}\\ c_{m,2}\end{pmatrix}=\begin{pmatrix}\Omega_{m}^{\pm}+\frac{B_{m}}{\delta+1}\\ -\frac{\delta}{\delta+1}\gamma_{m}\end{pmatrix}.\] Since \(\Omega_{m}^{\pm}+\frac{B_{m}}{\delta+1}\neq 0\), the latter identity is equivalent to finding two real numbers, still denoted by \(c_{m,1}\) and \(c_{m,2}\) for simplicity, such that \[M_{m}(\Omega_{m}^{\pm})\begin{pmatrix}c_{m,1}\\ c_{m,2}\end{pmatrix}=\begin{pmatrix}1\\ \sigma_{m}\end{pmatrix}, \tag{4.38}\] where we set \[\sigma_{m}\stackrel{{\text{def}}}{{=}}\frac{-\frac{\delta}{\delta+1}\gamma_{m}}{\Omega_{m}^{\pm}+\frac{B_{m}}{\delta+1}}\] and we emphasize that \[\begin{pmatrix}1\\ \sigma_{m}\end{pmatrix}\in\mathrm{Ker}\big{(}M_{m}(\Omega_{m}^{\pm})\big{)}.\] The identity (4.38), however, cannot hold, by basic linear algebra arguments, since \(M_{m}(\Omega_{m}^{\pm})\) is non-invertible and the vector \((1,\sigma_{m})\), with \(\sigma_{m}\neq 0\), is a non trivial element of its kernel. More precisely, this is a consequence of the following simple lemma. **Lemma 4.9**.: _Let \(M\) be a real two-dimensional matrix with a non trivial kernel. Assume further that there is \(s\in\mathbb{R}\setminus\{0\}\) such that_ \[\begin{pmatrix}1\\ s\end{pmatrix}\in\text{Ker}(M)\cap R(M).\] _Then, \(M\) is trace-free, i.e., it holds that_ \[\text{Tr}(M)=0.\] Let us admit Lemma 4.9 for a moment and first continue the proof of the transversality condition; we return to the proof of that lemma afterwards. According to Lemma 4.9, in order to prove that (4.38) cannot hold for any real numbers \(c_{m,1},c_{m,2}\), it is enough to check that \(\text{Tr}(M_{m}(\Omega_{m}^{\pm}))\neq 0\). To that end, we compute that \[\text{Tr}(M_{m}(\Omega_{m}^{\pm}))=2\Omega_{m}^{\pm}+\frac{A_{m}+B_{m}}{1+\delta}.\] Hence, in view of the definition of \(\Omega_{m}^{\pm}\), given in (4.26), we find that \[\text{Tr}(M_{m}(\Omega_{m}^{\pm}))=\pm(\Omega_{m}^{+}-\Omega_{m}^{-})\neq 0,\] thereby deducing that (4.38) cannot hold true for any real numbers \(c_{m,1}\) and \(c_{m,2}\). This justifies the transversality condition. Let us now prove Lemma 4.9. Generally speaking, a given two-dimensional matrix \(M\) is non-invertible if and only if \[M=\begin{pmatrix}a&\eta a\\ b&\eta b\end{pmatrix},\quad\text{for some}\quad a,b,\eta\in\mathbb{R}.\] Note that, without loss of generality, we can assume that \(a\) and \(b\) are both nonzero, since otherwise there is nothing left to prove.
Therefore, if \((1,s)\) is in the kernel of \(M\) and satisfies \[M\begin{pmatrix}c_{1}\\ c_{2}\end{pmatrix}=\begin{pmatrix}1\\ s\end{pmatrix},\] for some \(s\in\mathbb{R}^{*}\) and some two real numbers \(c_{1}\) and \(c_{2}\), then one can easily check that \(M\) should have the format \[M=\begin{pmatrix}a&-\frac{a^{2}}{b}\\ b&-a\end{pmatrix}.\] In particular, one deduces from all of this that \(Tr(M)=0.\) This proves Lemma 4.9 and concludes the proof of the proposition. Proof of Theorem 4.1.: The proof of Theorem 2.1 is now achieved as direct application of Crandall-Rabinowitz's Theorem 2.6, whose hypothesis are fully fulfilled due to Proposition 4.8, above. ## 5. Endnote In this paper, it is shown that the multy-layer quasi-geostrophic system (QS2L) admits a unique global weak--Lagrangian--solution as soon as the initial vorticities are Lebesgue-integrable and bounded. In light of that, any couple of initial vortex-patches is transported by the flow of the system and the vortex structure remains unchanged as time goes forward. Moreover, it is also shown here that there are time-periodic patches, close to stationary states (discs), characterized by their rotational movement around their centers of mass with the same angular velocity. Although the first result--existence of global weak solutions of (QS2L)--is shown to hold in the full range of parameters \(\lambda>0\) and \(\delta>0\), the elements of proof of the second result--existence of rotating solutions--does not cover the whole previous range of parameters. Specifically, technically speaking, it turns out that the spectral analysis of the contour dynamics can be affected by the choice of these parameters and this is shown to also depend on the choice of the radii \(b_{1}\) and \(b_{2}\) of the initial discs. Note that this phenomenon is not observable in the Euler equations by virtue of their invariance by dilatation; it is, however, a property that comes from the contribution of the kernel associated with the shallow-water equations. Our spectral analysis of the linearized operator around a steady state (two discs) shows that its kernel has a bi-dimensional structure. Moreover, the sequence of the associated eigenvalues \((\Omega_{m}^{\pm})_{m\in\mathbb{N}^{*}}\) for which this operator is not invertible is, in some sense, a combination of a semblable sequence of eigenvalues from Euler and shallow-water equations. This is well observed in the case \(\delta=1\), see (4.29). Although we are able to show that the linearized operator is not invertible at any value of angular velocities \(\Omega_{m}^{\pm}\), for any given symmetry \(m\in\mathbb{N}^{*}\), it turns out that there are cases (depending on the choice of parameters \(\lambda\), \(\delta\), \(b_{1}\) and \(b_{2}\)) where spectral collisions surely occur and the Crandall-Rabinowitz theorem does not directly apply to construct time-periodic solutions in such situations. Here, it is a remarkable observation that the non-invertibility of the linearized operator at \(\Omega_{m}^{\pm}\), for any given symmetry \(m\in\mathbb{N}^{*}\), stems from the coupling of Euler and shallow-water kernels. This description for small number of symmetries is satisfactory when compared to the doubly connected case of shallow-water equations [47], which, as previously emphasized, shares several aspects of similarity in terms of the two-dimensional structure of the V-states. 
In conclusion, the present work opens the door to further interesting questions about (QS2L), such as properties of the bifurcation branches, description of stationary solutions, and existence of doubly connected V-states and quasi-time-periodic solutions. ## Acknowledgement The authors would like to thank Taoufik Hmidi for introducing the multi-layer quasi-geostrophic system (QS2L) as well as for several helpful discussions and remarks. The work of Zineb Hassainia has been supported by Tamkeen under the NYU Abu Dhabi Research Institute grant of the center SITE.
2309.13064
InvestLM: A Large Language Model for Investment using Financial Domain Instruction Tuning
We present a new financial domain large language model, InvestLM, tuned on LLaMA-65B (Touvron et al., 2023), using a carefully curated instruction dataset related to financial investment. Inspired by less-is-more-for-alignment (Zhou et al., 2023), we manually curate a small yet diverse instruction dataset, covering a wide range of financial related topics, from Chartered Financial Analyst (CFA) exam questions to SEC filings to Stackexchange quantitative finance discussions. InvestLM shows strong capabilities in understanding financial text and provides helpful responses to investment related questions. Financial experts, including hedge fund managers and research analysts, rate InvestLM's response as comparable to those of state-of-the-art commercial models (GPT-3.5, GPT-4 and Claude-2). Zero-shot evaluation on a set of financial NLP benchmarks demonstrates strong generalizability. From a research perspective, this work suggests that a high-quality domain specific LLM can be tuned using a small set of carefully curated instructions on a well-trained foundation model, which is consistent with the Superficial Alignment Hypothesis (Zhou et al., 2023). From a practical perspective, this work develops a state-of-the-art financial domain LLM with superior capability in understanding financial texts and providing helpful investment advice, potentially enhancing the work efficiency of financial professionals. We release the model parameters to the research community.
Yi Yang, Yixuan Tang, Kar Yan Tam
2023-09-15T02:59:31Z
http://arxiv.org/abs/2309.13064v1
# InvestLM: A Large Language Model for Investment using Financial Domain Instruction Tuning ###### Abstract We present a new financial domain large language model, InvestLM, tuned on LLaMA-65B (Touvron et al., 2023), using a carefully curated instruction dataset related to financial investment. Inspired by less-is-more-for-alignment (Zhou et al., 2023), we manually curate a small yet diverse instruction dataset, covering a wide range of financial related topics, from Chartered Financial Analyst (CFA) exam questions to SEC filings to Stackexchange quantitative finance discussions. InvestLM shows strong capabilities in understanding financial text and provides helpful responses to investment related questions. Financial experts, including hedge fund managers and research analysts, rate InvestLM's response as comparable to those of state-of-the-art commercial models (GPT-3.5, GPT-4 and Claude-2). Zero-shot evaluation on a set of financial NLP benchmarks demonstrates strong generalizability. From a research perspective, this work suggests that a high-quality domain specific LLM can be tuned using a small set of carefully curated instructions on a well-trained foundation model, which is consistent with the Superficial Alignment Hypothesis (Zhou et al., 2023). From a practical perspective, this work develops a state-of-the-art financial domain LLM with superior capability in understanding financial texts and providing helpful investment advice, potentially enhancing the work efficiency of financial professionals. We release the model parameters to the research community1. Footnote 1: InvestLM adopts the same licensing terms as LLaMA (Touvron et al., 2023). Link: [https://github.com/AbaciNLP/InvestLM](https://github.com/AbaciNLP/InvestLM) ## 1 Introduction Large language models (LLMs) have significantly changed the paradigm of natural language processing (Brown et al., 2020; Touvron et al., 2023) and hold great potential for artificial general intelligence (Bubeck et al., 2023). Several financial domain LLMs have also been developed with the hope of processing massive financial texts and enhancing investment and financial decision-making for investors and financial professionals. However, three challenges may hinder the broad development and adoption of financial domain LLMs. First, BloombergGPT (Wu et al., 2023), a foundation model with 50 billion parameters trained on Bloomberg's proprietary data, is not publicly available. Thus, the community cannot study its capabilities in financial tasks. Second, while other commercialized LLMs such as Chat-GPT and Claude-2 are accessible via API, their model parameters are not publicly available either, making it expensive to investigate their financial task capability. Third, the research community has released several LLMs fine-tuned on financial NLP tasks, such as FinMA (Xie et al., 2023) and FinGPT (Yang et al., 2023). However, these models are not only smaller in size 2 but also exhibit poor performance when generalizing to financial NLP tasks beyond their instructed tasks. Footnote 2: At the time of writing, the largest FinMA model is 30B and FinGPT is 7B. To address the challenges, in this work, we develop a new financial domain LLM: InvestLM. InvestLM is trained on the LLaMA 65B, using a carefully curated instruction dataset related to finance and investment. We release the parameters of InvestLM and adopt the same licensing terms as LLaMA. 
We evaluate the InvestLM's utility in providing helpful investment advice by collaborating with financial experts, including hedge fund managers and research analysts. Experts rate InvestLM's responses as comparable to those of state-of-the-art commercial models (GPT-3.5, GPT-4 and Claude-2). An example of InvestLM and three commercial models' responses are presented in Table 1. According to expert f The railroad company saw earnings roll a bit off track when it reported July 20. The company slightly missed analysts' revenue expectations and reported earnings per share (EPS) that matched Wall Street's estimates. That EPS performance might be fine for many companies, but it rung warning bells for CSX investors because it was the first time in five years that CSX failed to surpass EPS estimates. The company saw declining volume in a number of key products it transports, including agricultural and food, chemicals, and forest. Volume growth in coal provided some locomotion. Intermodal, which means using two transport modes to move freight (which often means converting freight from highway to intermodal rail), continued to struggle in the latest quarter. Railroads have grappled with worker shortages and supply chain issues. The latest CSX earnings raises questions about overall U.S. economic health. Given the recent performance of the railroad company, is it advisable to buy, hold, or sell stocks in the sector? \begin{tabular}{l|p{284.5pt}} \hline **InvestLM** & Based on the information provided, the stock performance appears to be mixed. While revenue missed expectations, EPS met estimates, and coal volume provided some growth. On the other hand, declining volume in key product categories along with ongoing labor shortages and supply chain issues raise concerns. Overall, the outlook seems uncertain. As a result, a "hold" recommendation may be more appropriate than an outright "buy" or "sell" at this time. Investors should continue to monitor the economic indicators and company fundamentals before making an investment decision. \\ \hline **GPT-3.5** & I'm an AI language model, and I can provide you with some information to consider, but please note that investing decisions should be based on thorough research and analysis. The recent performance of the railroad company, CSX, seems to have raised concerns about its earnings and volume decline in key product categories. The struggles with intermodal transportation and overall supply chain issues may also impact the company's performance. When making investment decisions, it's important to consider factors such as the company's long-term prospects, industry trends, and the overall market conditions. Here are a few points to keep in mind: 1. Research the sector: Look into the railroad industry as a whole and analyze its growth potential, overall trends, and challenges. Understand how other companies within the sector are performing. 2. Evaluate the company's fundamentals: Assess CSX's financial statements, including revenue trends, profit margins, and debt levels. Consider the company's competitive position, management team, and their strategies for addressing the challenges they face. 3. Consider market conditions: Examine the wider economic environment and its impact on the railroad industry. Factors such as macroeconomic trends, government regulations, and consumer demand can influence railroad stocks. 4. Diversification: If you already have holdings in the sector, consider the diversification of your portfolio. 
Adding more railroad stocks may increase exposure to similar challenges. Diversifying across other sectors can help mitigate risk. Ultimately, it is advisable to consult with a financial advisor or conduct further research to make an informed decision about buying, holding, or selling stocks in the railroad sector. They can provide personalized advice based on your financial goals and risk tolerance. \begin{tabular}{l|p{284.5pt}} \hline **GPT-4** & As an AI language model, I can provide you with a logical analysis based on data provided, but can't provide personalized financial advice. Here are some considerations: 1. **Buy**: If you believe the shortcomings in the CSX earnings report are temporary and that the company can overcome the challenges it faces, you may choose to buy. A potential advantage is that often when companies or sectors are performing relatively poorly, their stocks could potentially be undervalued, offering a buying opportunity for long-term investors. Furthermore, any recovery in the volume of key products could provide a significant boost to the company's earnings in the future. 2. **Hold**: If you already own shares of CSX and believe that the company will weather the difficulties over time, holding could be a viable strategy. If you're optimistic about the long-term prospects of the rail industry or the U.S. economy as a whole, there could be value in waiting for improvements. 3. **Sell**: If you believe the challenges that CSX faces are indicative of persistent issues within the rail industry or the broader economy, it might be a signal to reduce your exposure to this sector. Overall, the decision should be based on your risk appetite, your faith in the company's management to turn things around, and your understanding of the overall industry and economic climate. It may be wise to consult with a financial advisor or conduct further stock analysis before making a decision. \begin{tabular}{l|p{284.5pt}} \hline **Claude-2** & Based on the information provided, here is my assessment of whether it would be advisable to buy, hold, or sell stocks in the railroad sector: - The miss on revenue expectations and meeting EPS expectations rather than beating them could indicate weaker demand for transportation services. Declining volume in key product categories also points to softer economic activity. This raises questions about the overall economic outlook. - Continued struggles in intermodal transportation suggests ongoing supply chain issues are weighing on the sector. Worker shortages are also a challenge railroads are facing. - The recent CSX earnings report as the first time in 5 years they did not beat EPS expectations, suggesting the company is facing more headwinds now compared to the past several years. - Taken together, the mixed results from CSX and the issues around demand, volumes, supply chains and labor point to near-term weakness in the railroad sector. While transportation volumes tend to recover as economies strengthen, the current environment raises uncertainties. Given this analysis, I would say it is not advisable to buy additional stocks in the railroad sector at this time. The recent performance and underlying issues suggest risk that future earnings and growth may disappoint. For current investors, holding positions makes sense if taking a long-term view, but consider trimming or selling portions of positions to reduce risk exposure until the outlook becomes clearer. 
In summary, a "hold" viewpoint seems most prudent, with a caveat to potentially reduce risk through selective selling. vague responses to investment-related questions, possibly because of its safety requirements. In scenarios where commercial models offer decisive answers, InvestLM typically arrives at a logical investment conclusion similar to these commercial models, but it is often more concise. Case studies also reveal that InvestLM effectively mitigates the hallucination problems often found in LLaMA's responses. In addition, InvestLM shows strong generalizability on several financial NLP tasks - those that are not used in the instruction tuning. In addition to contributing a new financial domain LLM, we are also interested in understanding the behaviors of domain instruction tuning. We find that a diverse set of domain instructions is very effective in "transforming" a high-quality foundation model (such as LLaMA-65B) into a high-quality domain-specific model - suggesting the consistency with the Superficial Alignment Hypothesis (Zhou et al., 2023). Second, we discover that generic instructions, like those used in Alpaca (Taori et al., 2023), can detrimentally impact the performance of instruction-tuned models on domain tasks. This underscores the importance of curating domain-specific instructions. Together, these findings provide insights into how to fine-tune a foundation model for a specific domain. The rest of the paper is organized as follows. First, we describe our manually curated financial domain instruction dataset. This is followed by an overview of the instruction tuning procedures for developing InvestLM. Subsequently, we present expert evaluation results and report the performance of InvestLM, comparing it with other LLMs on a set of financial NLP benchmarks.
Lastly, we conduct studies to better understand the behaviors of domain instruction tuning. ## 2 Instruction Dataset Our goal is to construct a financial LLM capable of understanding financial text and providing helpful responses to investment-related questions. The Superficial Alignment Hypothesis (Zhou et al., 2023) posits that _A model's knowledge and capabilities are learned almost entirely during pretraining, while alignment teaches it which subdistribution of formats should be used when interacting with users_. Thus, we hypothesize that a well-trained large language model, such as LLaMA-65B, has already acquired the ability to understand financial text, given that its training corpus might encompass financial content. So, to enable LLaMA to interact with investment-related questions, we should fine tune it with specifically crafted instructions. We manually curate an instruction dataset from the following resources: Stackexchange QFin.We select a set of questions along with their corresponding high-vote answers from the quantitative finance board. CFA questions.We choose a set of questions with detailed answers (including explanations) from the Chartered Financial Analyst (CFA) exams. Academic Journals.We choose articles from top financial economics journals (such as _Journal of Finance_) and manually create questions related to asset pricing and risk management, and solicit answers from the articles. Textbooks.We choose several classic finance textbooks (such as _Investments, 10th Edition_), and select the exercises from these textbooks as instruction/inputs and find their corresponding standard answers as outputs. SEC filings.We select earnings conference call transcripts and SEC filings, curate several questions based on the content of the filings, and then extract the corresponding answers from the text. Financial NLP tasks.We reformat several financial NLP tasks, such as financial sentiment analysis, and numerical reasoning as instructions. It's worth noting that, even for instructions derived from financial NLP tasks, we do not directly use the corresponding label as the output. Instead, we manually provide a response to augment the label. Investment questions.We brainstorm financial and investment-related questions, then use Chat-GPT and Claude-2 to generate answers. Subsequently, we manually verify and paraphrase the best answer to use as the output. For each instruction resource, we gather several hundred examples, with each example being a tuple comprising instruction, input, and output. The detailed breakdown of our financial domain instruction dataset is shown in Table 2. Compared to other general-purpose instruction datasets, such as that used in Alpaca (Taori et al., 2023), our finance-specific instruction dataset has much longer instruction/input and output lengths. Most importantly, our dataset is manually curated to encompass topics related to financial investment. ## 3 Instruction Tuning on LLaMA We train InvestLM on LLama-65B using our financial domain instruction dataset (Touvron et al., 2023). Specifically, we use the Low-rank adaptation (LoRa) method (Hu et al., 2021) to tune the model parameters in order to enhance the training efficiency. We set the rank to 16. We specifically target LoRa modules as "q_proj," "k_proj," "v_proj," "down_proj," "gate_proj," and "up_proj". Moreover, most financial texts such as SEC filings, corporate disclosures, and analyst reports are long text, usually consisting of thousands of tokens. 
Thus it is important that the financial domain LLM can handle long text. To this end, we utilize Linear Rope Scaling (Chen et al., 2023) on a scale of 4 to increase context size to **8,192**, which improves InvestLM's ability to handle long financial texts. The detailed training prompt format is shown in Appendix A. Following standard fine-tuning hyperparameters, we train the LLaMA-65B model for 15 epochs. The learning rate is set to 3e-4, and the batch size to 16 examples. In the subsequent analysis, we also instruction tune a LLaMA-7B model. For this model, we conduct training for 12 epochs with a learning rate of 3e-3 and a batch size of 32 samples. For simplicity, we denote InvestLM-65B as InvestLM, and LLaMA-65B as LLaMA. ## 4 Expert Evaluation The primary goal of this work is to build a financial LLM capable of understanding financial text and providing helpful responses to investment related questions. To test whether InvestLM can offer helpful responses, we collaborate and conduct interviews with a group of six financial experts, including hedge fund managers and financial analysts. **Baselines**. We compare InvestLM with three state-of-the-art commercial models, **GPT-3.5**, **GPT-4** and **Claude-2**. OpenAI's GPT-3.5 and GPT-4 are large language models tuned with reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022). Anthropic's Claude-2 is a large language model that can take up to 100K tokens in the user's prompt. 3 Responses from all baselines are sampled throughout August 2023. Footnote 3: [https://www.anthropic.com/index/claude-2](https://www.anthropic.com/index/claude-2) We manually write 30 test questions that are related to financial markets and investment. For each question, we generate a single response from InvestLM and the three commercial models. We then ask the financial experts to compare InvestLM responses to each of the baselines and label which response is better or whether neither response is significantly better than the other. In addition to the expert evaluation, we also conduct a GPT-4 evaluation, following the same protocol used in (Zhou et al., 2023). Specifically, we send GPT-4 with exactly the same instructions and data annotations, and ask GPT-4 which response is better or whether neither response is significantly better than the other. The expert evaluation interface and GPT-4 evaluation prompt are presented in Appendix B. The expert evaluation and GPT-4 evaluation results are presented in Figure 1 and Figure 2. These results indicate that financial experts rate InvestLM's responses as either comparable to or better than those of the GPT-3.5 and GPT-4 models. This expert assessment aligns with GPT-4's own evaluation, which also prefers InvestLM's responses. While financial experts tend to favor Claude-2's responses over InvestLM most of the time, GPT-4 shows a preference for InvestLM's responses. Overall, it is encouraging to observe that our domain-specific instruction tuning effectively generates helpful answers to investment-related questions, especially considering that its foundational model, LLaMA, frequently produces hallucinations, as shown in Section 4.1. **Inter-Annotator Agreement**. We compute the Inter-Annotator Agreement of evaluation by the following rules. We compute the tie-discounted accuracy for each example between the majority-vote label and each annotator's label. If they match, \begin{table} \begin{tabular}{l l l l} \hline & Size & Input len. & Resp. len. 
\\ \hline All & 1,335 & 152.9 & 145.5 \\ \hline Stackexchange & 205 & 19.4 & 296.2 \\ CFA & 329 & 125.6 & 157.4 \\ Academic Journals & 200 & 169.3 & 74.8 \\ Textbooks & 200 & 128.9 & 136.6 \\ SEC Filings & 80 & 316.2 & 88.2 \\ Financial NLP tasks & 200 & 325.9 & 74.5 \\ Investments & 119 & 72.7 & 144.3 \\ \hline \end{tabular} \end{table} Table 2: Detailed breakdown of our financial domain instruction dataset. Input len. and Resp. len. denotes the average number of tokens of instruction/input and output response respectively. it scores 1; if one labels it as 'Tie', it scores 0.5; for all other scenarios, the score is 0. Among all the annotations, we find the expert-expert score is 72.5%. Despite the subjectivity involved in classifying responses, a reasonable level of agreement is maintained among the annotators. ### InvestLM vs. LLaMA response In our expert evaluation, we do not compare InvestLM with LLaMA because LLaMA is a foundation model that is not instruction tuned, and thus we believe such a comparison would be unfair. As a case study, we explore the difference between InvestLM and LLaMA generated response. Take the following financial news and instructions as an example. **Input**: _U.S. investors now bake in about a 60% chance that the Fed will raise interest rates at least another quarter point by the end of the year, pricing in relatively high probabilities of a move at the November meeting. The likelihood of a rate pause in September, however, remains near 80%. The Fed last paused in June before raising rates in July.._**Instruction**: _Based on the news, should I buy some stocks today?_ **LLaMA's response**: _"No! It is not time to buy yet. We are still waiting for the market to go down more and then we can start buying again. The S&P50000 closed yesterday was -194.372 points or -0.37%, Dow Jones Industrial Average dropped 131058points-0.7%, Nasdaq Composite lost 10.8% and Russell 20000000 fell 1.2.1%."_ **InvestLM's response**: _Based on the news, the answer is not clear. While investors are anticipating more rate hikes, they are also expecting a high probability of a pause in September. The uncertainty around the Fed's next moves makes it difficult to predict how stocks will react in the near-term. There are a few factors that could influence stock prices in the coming months, including inflation levels, geopolitical events, and earnings reports. Without knowing more about your individual situation and risk tolerance, it is impossible to say if investing in stocks today is a good idea. However, it is important to do your research and understand the risks involved before making any investment decisions._ It is evident that LLaMA's response is riddled with hallucinations, introducing numbers that are not even mentioned in the news. In contrast, InvestLM's response is grounded in the information presented in the news and reflects logical reasoning with consideration of risks. This underscores the value of domain instruction tuning. ## 5 How does InvestLM generalize to financial NLP tasks? We further evaluate InvestLM's performance on financial NLP benchmarks, following the evaluation protocol used in Shah et al. (2022). For the financial sentiment analysis task, we consider Financial PhraseBank (FPB) dataset Malo et al. (2014), FOMC dataset Shah et al. (2023), FiQA dataset (FiQA), and FinSent Huang et al. (2023); Yang et al. (2020). For the financial document classification task, we consider ESG, FLS, and QA datasets. 
For the financial numerical reasoning task, we consider the FinQA dataset Chen et al. (2021). For the financial text summarization task, we consider the ECTSum dataset Mukherjee et al. (2022). Among the datasets, **FinSent**, **ESG**, **FLS** and **QA** are proprietary datasets that we developed for financial NLP model evaluation 4. Other datasets, including **Fpb**, **Fomc**, **FiQA**, **FinQA** and **ECTSum**, are publicly available. Figure 1: Expert evaluation. Figure 2: GPT-4 evaluation. Footnote 1: [https://github.com/franran/fran/f](https://github.com/franran/fran/f) the model's capability in financial NLP tasks. To answer this question, we incorporate the instruction-following data used in the fine-tuning of the Alpaca model5, comprising 52K instructions, into our domain instruction dataset. Using this augmented dataset, we train an InvestLM-7B+Alpaca-Instructions model. We then evaluate the utility of generic instructions on the financial NLP benchmarks, with the results detailed in Table 5. Footnote 5: [https://github.com/tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca) The results lead to an interesting finding that the inclusion of generic instructions appears to negatively impact the model's generalizability on domain-specific NLP tasks. When comparing InvestLM-7B+Alpaca-Instructions (trained on the combined instruction dataset) to InvestLM-7B (trained solely on the domain instruction dataset), it is evident that InvestLM-7B consistently outperforms InvestLM-7B+Alpaca-Instructions across all tasks. This underscores the value of our carefully curated domain instructions. This finding suggests that rather than generating a large volume of general-purpose instructions, creating a set of high-quality, domain-specific instructions can be more effective in tapping into a model's capabilities for domain tasks. ## 6 Related Work Financial Domain LLMs. Both industry and academia are closely examining the use of LLMs to analyze vast volumes of financial text, aiming to enhance investment decision-making and risk management capabilities. Before the release of Chat-GPT, several financial domain language models, such as FinBERT Huang et al. (2023), were developed based on BERT's architecture. Inspired by the remarkable performance of ChatGPT, efforts have been made to build financial domain large language models. Among these, BloombergGPT stands out as the first foundation model with 50B parameters, trained using Bloomberg's internal corpora Wu et al. (2023). However, BloombergGPT is not publicly accessible. FinMA tunes LLaMA using publicly available NLP benchmarks formatted as instructions Xie et al. (2023). InvestLM differs from BloombergGPT in that we use instruction tuning, whereas, to our knowledge, BloombergGPT serves as a foundation model. InvestLM also diverges from FinMA: while we instruction-tune LLaMA using a carefully curated dataset that spans a wide range of financial and investment topics, FinMA relies on publicly available NLP benchmarks formatted as instructions, resulting in more limited instruction coverage. Similarly, FinGPT is an LLM fine-tuned on the News and Tweets sentiment analysis dataset Yang et al. (2023), which likely limits its generalizability to other financial NLP tasks. To our knowledge, InvestLM is the first open-source financial domain LLM that can provide insightful responses to investment-related questions, as corroborated by financial professionals.
Instruction Tuning. Prior work has shown that instruction tuning -- finetuning large language models on a collection of NLP tasks formatted with instructions -- can enhance the language model's capability to perform an unseen task from an instruction Wei et al. (2021); Sanh et al. (2021). Several general-purpose instruction datasets have since been developed, including Natural Instructions Mishra et al. (2021), the Flan collection Longpre et al. (2023), and the OIG dataset (LAION.ai), among others. Recent studies have proposed automated methods for constructing instruction datasets Wang et al. (2022). Using various instruction datasets for training on LLaMA, a series of LLaMA-based LLMs have been developed, including Alpaca Taori et al. (2023) and Vicuna Chiang et al. (2023). However, most of these instruction datasets are tailored for training general-purpose LLMs. Motivated by the less-is-more-for-alignment (LIMA) Zhou et al. (2023), which posits that limited instruction tuning data is sufficient to guide models towards generating high-quality output, we carefully assemble a financial domain instruction dataset, specifically designed to provide helpful responses for investors and financial professionals. \begin{table} \begin{tabular}{l l l l l} \hline \hline Dataset & Metric & LLaMA-7B & InvestLM-7B & InvestLM-7B+Alpaca-Instructions \\ \hline FinSent & Micro-F1 & 0.53 & **0.69** & 0.64 \\ FPB & Micro-F1 & 0.12 & **0.74** & 0.42 \\ FOMC & Micro-F1 & 0.25 & **0.40** & 0.32 \\ FiQA ABSA & Micro-F1 & 0.31 & **0.76** & 0.40 \\ \hline ESG & Micro-F1 & 0.19 & **0.61** & 0.48 \\ FLS & Micro-F1 & 0.34 & **0.53** & 0.17 \\ QA & Micro-F1 & 0.72 & **0.84** & 0.40 \\ \hline FinQA & Acc & 0.07 & **0.07** & 0.03 \\ \hline & Rouge-1 & 0.06 & **0.24** & 0.14 \\ ECTSum & Rouge-2 & 0.06 & **0.10** & 0.05 \\ & Rouge-L & 0.06 & **0.15** & 0.09 \\ & Bert Score & 0.73 & **0.78** & 0.75 \\ & CHRF++ & 12.90 & **29.18** & 19.48 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance of InvestLM-7B trained using different instruction datasets. ## 7 Conclusion In this work, we present a new financial domain LLM, InvestLM, instruction tuned on the LLaMA-65B foundation model, using a set of manually crafted instruction datasets covering diverse financial and investment-related topics. InvestLM shows strong capabilities in understanding financial text and offers helpful insights in response to investment-related inquiries. Experts rate InvestLM's responses as comparable to those of state-of-the-art commercial LLMs. Moreover, our work sheds light on using a small yet high-quality instruction dataset to fine-tune a large foundational model, suggesting a promising approach for crafting domain-specific LLMs. ## Ethics Statement It is essential to recognize that the responses from InvestLM should not be construed as definitive investment advice. All investment strategies and decisions should be made after careful consideration of one's financial situation, risk tolerance, and investment goals. We strongly advocate for potential investors to consult with professional financial advisors and consider multiple information sources before making any investment decisions. Relying solely on InvestLM's outputs without thorough due diligence can lead to unintended financial consequences. Thorough analyses should be conducted to understand the strengths and weaknesses of InvestLM.
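As a concrete illustration of the expert-agreement protocol described in Section 4, the short sketch below scores a pair of annotators with the quoted rule (1 for the same verdict, 0.5 when exactly one annotator declares a tie, 0 otherwise) and averages over items; the label lists are made-up examples, and this is one plausible reading of the rule, not the authors' evaluation code.

```python
# Pairwise expert-agreement score: same verdict -> 1, exactly one "Tie" -> 0.5, else 0.
# The annotations below are invented examples used only to exercise the functions.
def pair_score(label_a: str, label_b: str) -> float:
    if label_a == label_b:
        return 1.0
    if "Tie" in (label_a, label_b):
        return 0.5
    return 0.0

def agreement(annotations_a, annotations_b) -> float:
    scores = [pair_score(a, b) for a, b in zip(annotations_a, annotations_b)]
    return sum(scores) / len(scores)

expert_1 = ["InvestLM", "GPT-4", "Tie", "InvestLM"]
expert_2 = ["InvestLM", "Tie", "GPT-4", "GPT-4"]
print(f"expert-expert agreement: {agreement(expert_1, expert_2):.1%}")
```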
2309.09014
Electronic and Topological Properties of a Topological Insulator Thin Film Sandwiched between Ferromagnetic Insulators
We consider a thin film of a topological insulator (TI) sandwiched between two ferromagnetic (FM) layers. The system is additionally under an external gate voltage. The surface electron states of TI are magnetized due to the magnetic proximity effect to the ferromagnetic layers. The magnetization of ferromagnetic layers can be changed by applying an external magnetic field or by varying thickness of the topological insulator (owing to the interlayer exchange coupling). The change in the magnetic configuration of the system affects the transport properties of the surface electronic states. Using the Green function formalism, we calculate spin polarization, anomalous Hall effect, and magnetoresistance of the system. We show, among others, that by tuning the gate voltage and magnetizations of the top and bottom FM layers, one can observe the topological transition to the anomalous quantum Hall state.
Piotr Pigoń, Anna Dyrdał
2023-09-16T14:53:44Z
http://arxiv.org/abs/2309.09014v1
Electronic and Topological Properties of a Topological Insulator Thin Film Sandwiched between Ferromagnetic Insulators ###### Abstract We consider a thin film of a topological insulator (TI) sandwiched between two ferromagnetic (FM) layers. The system is additionally under an external gate voltage. The surface electron states of TI are magnetized due to the magnetic proximity effect to the ferromagnetic layers. The magnetization of ferromagnetic layers can be changed by applying an external magnetic field or by varying thickness of the topological insulator (owing to the interlayer exchange coupling). The change in the magnetic configuration of the system affects the transport properties of the surface electronic states. Using the Green function formalism, we calculate spin polarization, anomalous Hall effect, and magnetoresistance of the system. We show, among others, that by tuning the gate voltage and magnetizations of the top and bottom FM layers, one can observe the topological transition to the anomalous quantum Hall state. ## I Introduction The first papers on topological insulators were published in the last century, with the observation of quantum Hall effect [1]. A few years later, Haldane proposed a mathematical model of two-dimensional topological insulator [2]. This model was then extended by Kane and Mele [3; 4], who included spin in the model, making a more complete picture of topological insulators. These materials act like trivial insulators in bulk, but they are conducting at the edges [5], where the topologically protected surface states appear [6]. The importance of topological insulators results from their potential application in spintronics devices or in quantum information processing - mainly because of dissipationless transport and/or possibility to control electron's spin. Nowadays, they are used in electronic and semiconducting devices, e.g., in photodetectors, magnetic devices, and FET transistors [7]. A special class of topological insulators is that of magnetic topological insulators, where the quantum anomalous Hall effect can be observed. [8]. There are four ways for a topological insulator to become a magnetic topological insulator [9]. The most popular method is doping the bulk of topological insulators with transition metal elements, e.g., Bi\({}_{2}\)Te\({}_{3}\), Bi\({}_{2}\)Se\({}_{3}\), and Sb\({}_{2}\)Te\({}_{3}\) doped with Cr or Fe [10]. But, controlling the distribution and magnetic order of dopants is rather difficult task [11]. These materials can have many applications, e.g. in information storage and in dissipationless spin and current transport for low-power-consuming electronics [8; 12]. In this work, we consider a thin film of a topological insulator sandwiched between two ferromagnetic insulators. In such a system, the topological surface states are affected by the magnetic proximity effect. The magnetic layers are magnetized parallel either along or opposite to the \(z\)-axis (which is normal to the plane of the heterostructure). However, this magnetization orientation can be changed by an external magnetic field. Here we present the topological and transport properties of such a hybrid structure. In principle we focus on the anomalous Hall effect and on current-induced spin polarization (a.k. Edelstein effect). The anomalous Hall conductivity will give us a direct information about the topological properties of electronic surface states of TI. 
In turn, the current-induced spin polarization is one of possible spin-to-charge interconversion effects, that can be responsible for various magnetotransport phenomena and for spin-orbit torque in the considered systems. ## II Model and Method We consider a thin film of the topological insulator sandwiched between two ferromagnetic insulators with easy axis magnetic anisotropy along the z-direction (Fig. 1). The magnetization of the bottom layer is fixed Figure 1: A thin film of a topological insulator sandwiched between two ferromagnets. A red color marks the bottom layer, and the top layer is in blue. The layers are in the \(x-y\) plane, while the axis \(z\) is normal to the layers. and oriented parallel to the z-axis, whereas the magnetization of the top layer is free to rotate (in an external magnetic field or due to interlayer exchange coupling) from antiparallel to parallel (or vice-versa) orientation to the z-axis. The Hamiltonian \(\hat{H}\) of the electronic surface states of 3D TI, written in the so-called top-bottom representation, takes the form [13] \[\hat{H}=v\hat{\tau}_{z}\otimes(k_{y}\sigma_{x}-k_{x}\sigma_{y})+V_ {AS}\hat{\tau}_{z}\otimes\hat{\sigma}_{0}\] \[+(m_{z}^{t}\hat{\tau}_{+}+m_{z}^{b}\hat{\tau}_{-})\otimes\hat{ \sigma}_{z}, \tag{1}\] where \(\hat{\sigma}_{i}\), \(\hat{\tau}_{i}\) (\(i=x,y,z\)) are Pauli matrices acting in the spin and layer subspace, respectively, \(\tau_{\pm}=\frac{1}{2}(\tau_{0}\pm\tau_{z})\), \(\mathbf{k}=\{k_{x},k_{y}\}\) is the two-dimensional wave vector, \(V_{AS}\) is the asymmetric part of the gate voltage, while \(m_{z}^{t/b}\) denote magnetizations of the top/bottom layer and \(\mathbf{n}_{z}=(0,0,1)\). Here we assume that the layer of TI is thick enough to neglect the effect of hybridization between top and bottom surface states (the effect of hybridization as well as other orientatations of the magnetizations will be published elsewhere). To characterise the topological properties of the system, we calculate the Berry curvature for each energy band based on the following equation: \[\Omega_{n}=i\nabla_{\mathbf{k}}\times\left\langle\psi_{n}\right|\nabla_{ \mathbf{k}}\left|\psi_{n}\right\rangle, \tag{2}\] where \(\left|\psi_{n}\right\rangle\) is the \(n\)-th eigenstate of the Hamiltonian \(\hat{H}\). Figure 2 presents the band structure and the corresponding Berry curvatures for all electronic bands for two specific cases, i.e., for parallel and antiparallel configurations of the magnetic moments of the two FM layers (in the following referred to shortly as parallel and antiparallel magnetic configurations). In both cases one can see symmetric valence and conduction bands formed by the surface states. Importantly, due to the magnetization and gate voltage, there is a gap in the surface states. For the fixed magnetization of the top and bottom layers, the width of the energy gap can become shrunken, closed, and then reopened depending on the value of \(V_{AS}\). Moreover, depending on the magnetic configuration, the sign of the integrated over the angle Berry curvature changes. In Fig. 2 one can see that the Berry curvatures (integrated over the angle) for bands, \(E_{1}\), and \(E_{3}\), are the same in antiparallel magnetic configuration and opposite in the parallel one. This, in turn, leads to the two different topological phases, with Chern number equal \(\pm 2\) or \(0\), respectively. 
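As an illustrative aside, the following Python/NumPy sketch builds the 4x4 Hamiltonian of Eq. (1) and estimates the Berry curvature of Eq. (2) for a chosen band on a discrete k-grid using a standard plaquette (link-variable) formula. The parameter values, velocity scale, and k-window are placeholder assumptions rather than values taken from the paper, so the integrated curvature only approximates the quoted Chern numbers.

```python
# Minimal numerical sketch of Eqs. (1)-(2): the 4x4 top/bottom-surface Hamiltonian and a
# plaquette (link-variable) estimate of the Berry curvature of one band.
# Parameters (v, V_AS, m_t, m_b, grid size, k-window) are illustrative assumptions.
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
tp, tm = (s0 + sz) / 2, (s0 - sz) / 2          # tau_+ and tau_- acting on the layer index

def hamiltonian(kx, ky, v=1.0, V_AS=0.05, m_t=0.1, m_b=0.1):
    """H(k) of Eq. (1); the first factor of each kron acts on the layer, the second on spin."""
    return (v * np.kron(sz, ky * sx - kx * sy)
            + V_AS * np.kron(sz, s0)
            + np.kron(m_t * tp + m_b * tm, sz))

def berry_curvature(band, kmax=1.0, N=100, **pars):
    """Berry curvature of one band on an N x N grid via U(1) link variables."""
    ks = np.linspace(-kmax, kmax, N)
    dk = ks[1] - ks[0]
    psi = np.empty((N, N, 4), complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vec = np.linalg.eigh(hamiltonian(kx, ky, **pars))
            psi[i, j] = vec[:, band]           # eigenvectors sorted by ascending energy
    link = lambda a, b: np.vdot(a, b) / abs(np.vdot(a, b))
    omega = np.zeros((N - 1, N - 1))
    for i in range(N - 1):
        for j in range(N - 1):
            u1 = link(psi[i, j], psi[i + 1, j])
            u2 = link(psi[i + 1, j], psi[i + 1, j + 1])
            u3 = link(psi[i + 1, j + 1], psi[i, j + 1])
            u4 = link(psi[i, j + 1], psi[i, j])
            omega[i, j] = np.angle(u1 * u2 * u3 * u4) / dk**2
    return ks, omega

# Chern-like number of the lowest band on the finite k-window (only approximately integer):
ks, om = berry_curvature(band=0)
print("integrated curvature / 2pi:", om.sum() * (ks[1] - ks[0])**2 / (2 * np.pi))
```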
The topological phase transition will be seen in the anomalous Hall conductivity, that is directly related to the Berry curvature [14; 15]: \[\sigma_{xy}=\frac{e^{2}}{\hbar}\sum_{n=1-4}\int\frac{d^{2}\mathbf{k}}{(2\pi)^{ 2}}\Omega_{n}f(E_{n}) \tag{3}\] Note that the above expression gives only the intrinsic contribution to the anomalous Hall effect. The scattering effects are neglected here, as we consider quasi-ballistic limit and focus on the topological properties of the system, i.e., when chemical potential is located inside the energy gap. In turn, for chemical potential located in valence or conduction bands, we focus on the current-induced spin polarization [16; 17] \[S_{y}=\frac{e\hbar}{2\pi}E_{x}\int\frac{\mathrm{d}^{2}\mathbf{k}}{(2\pi)^{2}} \mathrm{Tr}\left\{\hat{s}_{y}G_{k}^{R}\hat{v}_{z}G_{k}^{A}\right\}, \tag{4}\] where \(\hat{s}_{y}=\frac{\hbar}{2}\tau_{0}\otimes\sigma_{y}\) is the spin operator, \(\hat{v}_{i}=\frac{1}{\hbar}\frac{\partial\hat{H}}{\partial k_{i}}\) is the velocity operator, and \(G_{k}^{R/A}=\left[\left(\mu\pm i\Gamma\right)\hat{\sigma}_{0}\otimes\hat{\tau }_{0}-\hat{H}\right]^{-1}\) defines the retarded (R) and advanced (A) Green function, respectively, with \(\Gamma\) denoting the relaxation rate. Note that the surfaces of TI are spatially separated. Accordingly, apart from the spin operator \(\hat{s}_{y}\), that gives the information about the net current-induced spin polarization in the system, one can define the spin polarization induced at a specific surface. The spin polarization at the top/bottom surface is defined by the operator: \[\hat{s}_{y}^{t/b}=\frac{\hbar}{2}\tau_{\pm}\otimes\sigma_{y} \tag{5}\] and therefore \[S_{y}^{t/b}=\frac{e\hbar}{2\pi}E_{x}\int\frac{\mathrm{d}^{2}\mathbf{k}}{(2\pi) ^{2}}\mathrm{Tr}\left\{\hat{s}_{y}^{t/b}G_{k}^{R}\hat{v}_{x}G_{k}^{A}\right\}. \tag{6}\] Figure 2: Band structure and the corresponding Berry curvatures for a fixed magnetization of the bottom layer, \(m_{z}^{t}=0.1\) eV and asymmetric potential \(V_{AS}=0.05eV\). (a) and (b) correspond to the parallel magnetic configuration, i.e., \(m_{z}^{t}=0.1\) eV, while (c) and (d) to the antiparallel magnetic configuration, i.e., \(m_{z}^{t}=-0.1\) eV. ## III Results ### Anomalous Hall effect Figure 3, presents the anomalous Hall conductivity plotted as a function of the certain parameters defining Hamiltonian 1. The most intriguing phenomena happen when the chemical potential lies in the energy gap. The surface states of TI in our model are gapped due to the interplay of gate voltage, \(V_{AS}\), and magnetizations of the top and bottom layers, \(m_{z}^{t/b}\). Figure 3(a) shows the transverse conductivity as a function of the chemical potential \(\mu\) and the asymmetric potential \(V_{AS}\), for fixed values of the magnetization of both layers \(m_{z}^{t}=m_{z}^{b}=0.1\) eV (parallel magnetic configuration). In the middle of the plot, there is a region of the diamond shape, where the conductance is quantized, \(\sigma_{xy}=-2~{}e^{2}/h\). This region determines the range of the energy gap. With increasing absolute value of the asymmetric potential, \(|V_{AS}|\), this region becomes narrower. In addition, when the chemical potential \(\mu\) is far from the bandgap, the Hall conductance diminishes. Figure 3(b) shows the conductivity as a function of the chemical potential, \(\mu\), and the asymmetric potential, \(V_{AS}\). The magnitude and direction of the magnetization of the top layer are changed in comparison to those in Fig. 3(a), i.e., \(m_{z}^{t}=-0.05\) eV. 
This corresponds to antiparallel magnetic configuration, with asymmetric magnitudes of the magnetizations. Now, one can distinguish four particular regions. The dark areas, where the conductance reaches a maximum negative value (bands \(E_{1}\) and \(E_{3}\) are fully occupied). Thus, by changing the asymmetric potential, one can control the anomalous Hall conductance of the system. However, for this magnetic configuration, there is no area of quantized conductance. In turn, in Fig. 3(c) we show the conductivity of the system as a function of the chemical potential, \(\mu\), magnetization of the top layer, \(m_{z}^{t}\), and for the fixed values of the magnetization of the bottom layer \(m_{z}^{t}=0.1\) eV and for \(V_{AS}=5\) meV. When \(m_{z}^{t}\) changes from \(m_{z}^{t}=-0.1\) eV to \(m_{z}^{t}=0.1\) eV, the magnetic configuration of the system changes from the antiparallel magnetic configuration to the parallel one. Here the changes of amplitude and orientation of \(m_{z}^{t}\) determine the width of the energy band gap and the topological phase transition. For \(m_{z}^{t}<0\) (the antiparallel magnetic configuration) and chemical potential located in the gap, one observes the trivial phase with zero anomalous Hall conductivity (yellow triangle). In turn, for \(m_{z}^{t}>0\) (the parallel magnetic configuration) and chemical potential in the gap, the quantum anomalous Hall phase occurs (dark triangle). The possibility of tuning the gap width and topological phase behaviour externally seems to be promising for application in spintronics and low-energy consumption electronics. Figure 3(d) presents the variation of the transverse conductance with magnetizations of both layers, for fixed values of the chemical potential, i.e., \(\mu=0\), and \(V_{AS}=5\) meV. When the magnetic configuration is parallel, the system is in the quantum anomalous Hall state with \(\sigma_{xy}=\mp 2~{}e^{2}/h\) (black and yellow areas in Fig 3(d)). The negative sign of anomalous Hall conductance corresponds to the magnetizations along \(z\)-axis while positive sign of anomalous Hall conductivity refers to the magnetizations oriented antiparallel to the \(z\)-axis. For the antiparallel configuration of magnetic layers, the system is a trivial insulator state (the purple areas in Fig 3(d)), where the anomalous Hall conductance is zero. Based on the above results one can clearly see the characteristic areas where the conductance is quantized. These areas correspond to Chern number equal \(\pm 2\) depending on whether the magnetic configuration is parallel or antiparallel. Accordingly, when chemical potential lies in the energy gap (fully occupied valence bands) one can observe a topological phase transition to the quantum anomalous Hall insulator state - the state corresponding Figure 3: Density plots of the anomalous Hall conductivity as a function of asymmetric potential \(V_{AS}\) and chemical potential \(\mu\) (a) for parallel magnetization direction (\(m_{z}^{t}=m_{z}^{b}=0.1\) eV) and (b) for parallel magnetization direction (\(m_{z}^{t}=-0.1\) eV and \(m_{z}^{b}=0.1\) eV) and as a function of (c) magnetization of the top layer \(m_{z}^{t}=0.1\) eV and chemical potential \(\mu\) with the fixed magnetization of bottom layers \(m_{z}^{b}=0.1\) eV, \(V_{AS}=5\) meV (d) magnetization of the layers with fixed \(\mu=0\) and \(V_{AS}=5\) meV. to the zero longitudinal charge current and fully quantized Hall current. 
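For completeness, a similarly hedged sketch of Eq. (3) is given below: it evaluates the band-resolved Berry curvature through the equivalent Kubo-type sum over states and accumulates the zero-temperature anomalous Hall conductivity on a finite k-window. The parameters are again illustrative placeholders, so the result only approaches, and does not exactly reproduce, the quantized plateaus discussed above.

```python
# Sketch of Eq. (3): sigma_xy(mu) from the Kubo-type band formula
# Omega_n = -2 Im sum_{m != n} <n|dH/dkx|m><m|dH/dky|n> / (E_n - E_m)^2.
# Parameters (v, V_AS, m_t, m_b, k-window, grid) are illustrative assumptions.
import numpy as np

s0, sx = np.eye(2), np.array([[0, 1], [1, 0]], complex)
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0]).astype(complex)

def H(kx, ky, v=1.0, V_AS=0.005, m_t=0.1, m_b=0.1):
    layer_mass = np.diag([m_t, m_b]).astype(complex)     # m_t tau_+ + m_b tau_-
    return (v * np.kron(sz, ky * sx - kx * sy)
            + V_AS * np.kron(sz, s0) + np.kron(layer_mass, sz))

dHx = lambda v=1.0: v * np.kron(sz, -sy)   # dH/dkx
dHy = lambda v=1.0: v * np.kron(sz, sx)    # dH/dky

def sigma_xy(mu, kmax=1.5, N=150):
    ks = np.linspace(-kmax, kmax, N)
    dk2 = (ks[1] - ks[0])**2
    total = 0.0
    for kx in ks:
        for ky in ks:
            E, V = np.linalg.eigh(H(kx, ky))
            f = (E <= mu).astype(float)                  # zero-temperature occupation
            Ax = V.conj().T @ dHx() @ V
            Ay = V.conj().T @ dHy() @ V
            for n in range(4):
                for m in range(4):
                    if m == n:
                        continue
                    total += -2.0 * f[n] * (Ax[n, m] * Ay[m, n]).imag / (E[n] - E[m])**2
    return total * dk2 / (2 * np.pi)**2                  # in units of e^2/hbar, cf. Eq. (3)

print("sigma_xy at mu = 0 (units of e^2/hbar):", sigma_xy(0.0))
```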
### Current-induced spin polarization Now, we consider the spin-to-charge interconversion effects in the system under consideration, i.e., the spin polarization induced by a charge current. This phenomenon appears as a result of spin-orbit coupling in the system (in the TI layer). In Fig. 4, we present the current-induced spin polarization due to the electric current flowing along the \(x\)-axis. The nonequilibrium polarization is shown here as a function of \(\mu\) and \(V_{AS}\) for a fixed magnetization of the bottom layer, \(m_{z}^{b}=0.1\) eV. Here the orientation of the magnetization in the top layer does not affect the results (solid and dashed lines overlap in Fig. 4). Figure 4(a) shows the spin polarization at the top surface of the TI, \(S_{yx}^{t}\), while Fig. 4(b) presents the spin polarization at the bottom surface of the TI, \(S_{yx}^{b}\). The net spin polarization of the system, i.e., the sum of the polarizations of the top and bottom surfaces, is shown in Fig. 4(c). Since only states at the Fermi level contribute to the polarization, and there are no states at the Fermi level in the energy gap region, the spin polarization vanishes there, as clearly visible in the figures for chemical potentials \(|\mu|<0.2\) eV. As one can note, the spin polarizations of the topological electron states at the top and bottom layers have opposite orientations. Nevertheless, there is some asymmetry between the top and bottom polarizations, as can be seen in Figs. 4(a) and 4(b). As a result, the net spin polarization of the system is nonzero, as shown explicitly in Fig. 4(c). One should emphasize that the pairs of plots (\(S_{yx}^{b}\), \(S_{xy}^{t}\)), (\(S_{yx}^{t}\), \(S_{xy}^{b}\)) and (\(S_{yx}\), \(S_{xy}\)) are mirror reflections of each other with respect to the line passing through the point \(\mu=0\). ## IV Discussion In this work, we considered the electronic and topological properties of a thin film of a topological insulator sandwiched between two layers of a ferromagnetic insulator. Each layer can be characterized by a magnetization oriented along the \(z\)-axis, and we assume that an external magnetic field can control the magnetic configuration of the magnetic layers. The presented results show that the magnetization direction has a significant impact on the transport properties of the system. Firstly, for the parallel orientation of the layer magnetizations, two of the Berry curvatures are positive, and two are negative. If we switch the magnetization direction of the top layer, four of the Berry curvatures are negative. This results in a change of the Chern numbers of the bands and, consequently, in a change of the behaviour of the anomalous Hall conductivity. Accordingly, depending on the system parameters, \(m_{z}^{t/b}\) and \(V_{AS}\), one can switch our system between a trivial insulator state (zero Hall conductance in the energy gap) and a topological insulator state (quantized anomalous Hall conductance in the energy gap). Moreover, one can induce spin polarization in the system through an external electric field. We showed that although the chemical potential is in the gap, the current-induced spin polarization appears at each interface and is oriented in a particular direction. ## Acknowledgement This work has been supported by the National Science Centre in Poland under the project Sonata-14 no. 2018/31/D/ST3/02351. P. Pigoń acknowledges B. Spisak for fruitful discussions.
2305.00483
Sensitivity study of the charged lepton flavor violating process $τ \to γμ$ at STCF
A sensitivity study for the search for the charged lepton flavor violating process $\tau \to \gamma\mu$ at the Super $\tau$-Charm Facility is performed with a fast simulation. With the expected performance of the current detector design and an integrated luminosity of \SI{1}{ab^{-1}} corresponding to one-year of data taking, the sensitivity on the branching fraction (BF) of $\tau \to \gamma\mu$ is estimated to be at the level of \num{e-8}. The sensitivity under different detector performances are also studied. With ideal performance, the BF could be probed to be \num{2.8e-8} at \SI{90}{\percent} confidence level. The sensitivity is expected to scale with the square root of the luminosity, therefore with a total luminosity of \SI{10}{ab^{-1}} corresponding to ten-year of data taking, the sensitivity could reach \num{8.8e-9}, which is about one order of magnitude improvement upon the current best upper limit.
Teng Xiang, Xiao-Dong Shi, Da-Yong Wang, Xiao-Rong Zhou
2023-04-30T13:55:36Z
http://arxiv.org/abs/2305.00483v2
Sensitivity study of the charged lepton flavor violating process \(\tau\rightarrow\gamma\mu\) at STCF ###### Abstract A sensitivity study for the searching of the charged lepton flavor violating process \(\tau\rightarrow\gamma\mu\) at the Super \(\tau\)-Charm Facility is performed with a fast simulation. With the expected performance of current detector design and the integrated luminosity of \(1\,\mathrm{ab}^{-1}\) in one year, the sensitivity on the branching fraction (BF) of \(\tau\rightarrow\gamma\mu\) is estimated to be at the level of \(10^{-8}\). The sensitivity under different detector performances are also studied. With ideal performance, the BF could be probed to be \(1.8\times 10^{-8}\) at \(90\,\%\) confidence level. The sensitivity is expected to scale with the square root of the luminosity, therefore with a total luminosity of \(10\,\mathrm{ab}^{-1}\) corresponding to ten-year of data taking, the sensitivity could reach \(5.7\times 10^{-9}\), which is about one order of magnitude improvement upon the current best upper limit. + Footnote †: journal: Eur. Phys. J. C e1e2e-mail: [email protected] ## 1 Introduction In the Standard Model (SM), the charged lepton flavor violating (cLFV) processes can occur through neutrino oscillation, but are highly suppressed due to the small mass of neutrino [1], with the branching fraction (BF) to be, for example \[\mathcal{B}(\ell_{1}\rightarrow\ell_{2}\gamma)=\frac{3\alpha_{e}}{32\pi} \left|\sum_{i}U_{i1}^{*}U_{2i}\frac{m_{i}^{2}}{M_{W}^{2}}\right|^{2}\approx 10^{ -50}\sim 10^{-54}, \tag{1}\] where \(U\) is the PMNS matrix [2], \(i\) runs over the three neutrinos, \(m_{i}\) and \(m_{W}\) are the masses of neutrinos and \(W\) boson. The BF in the SM is well beyond the sensitivity of current experiments, thus the observation of cLFV processes would be an unambiguous signature of new physics. On the other hand, the lepton flavor conservation, differing from other conservation laws in the SM, is not associated with an underlying conserved current, therefore many theoretical models beyond the SM naturally introduce cLFV processes, such as the Minimal Supersymmetric SM [3], the Grand Unified Theories [4; 5], and seesaw mechanisms [6]. Some of them predict BFs that are close to the current experimental sensitivity. As the heaviest lepton, tau has many possible cLFV decay modes, amongst them \(\tau\rightarrow\gamma\mu\) is regarded as one of the best probes, and is predicted in a wide variety of models with rates enhanced to observable level. For example, the BF is predicted to be up to \(10^{-9}\) in seesaw models [7], \(10^{-10}\) in Higgs-mediated SUSY models [8; 9], \(10^{-8}\) in SO(10) SUSY models [10; 11], and \(10^{-9}\) in non-universal \(Z^{\prime}\) models [12]. Experimentally, the most stringent upper limit (UL) on the BF of this channel is given by BABAR to be \(4.4\times 10^{-8}\) at \(90\,\%\) confidence level (C.L.) [13] and Belle to be \(4.2\times 10^{-8}\) at \(90\,\%\) C.L. [14]. Next generation of experiments are aiming at pushing the sensitivity down for another one order of magnitude or even further [15]. The proposed Super \(\tau\)-Charm Facility (STCF) [16] in China, which is an electron-positron collider at \(\tau\)-charm region, is one of such next generation of experiments. In this paper, the sensitivity of searching for \(\tau\rightarrow\gamma\mu\) at STCF is studied to explore the physics potential of STCF and guide the design of the experiment. STCF has several advantages on searching for \(\tau\rightarrow\gamma\mu\). 
As an electron-positron collider, the total four-momentum is known and the final state is fully reconstructed, leading to higher efficiency and lower background. The energy of STCF can be adjusted to be just above the threshold of tau pair production \(e^{+}e^{-}\rightarrow\tau^{+}\tau^{-}\), where the cross-section reaches the maximum [17], and the energy of radiative photons which is one of the main sources of fake signal photons in background is low thus can be well separated from real signal. ## 2 Detector design and Monte Carlo simulation The proposed STCF is a symmetric electron-positron collider operating at center-of-mass frame energy \(\sqrt{s}\) from 2.0 GeV to 7.0 GeV with a designed peaking luminosity over \(0.5\times 10^{35}\) cm\({}^{-2}\) s\({}^{-1}\)[16]. Such an environment will serve as an important high statistics and low background platform to test the SM and probe possible new physics beyond the SM, such as cLFV decays of tau. Assuming that it runs at \(\sqrt{s}=4.26\) GeV, STCF will accumulate \(3.5\times 10^{9}\) tau pairs [18; 19] per year with an expected integrated luminosity of 1 ab\({}^{-1}\). As a general purpose detector designed for the electron-positron collider, the STCF detector consists a tracking system composed of the inner and outer trackers, a particle identification (PID) system with charged kaon/pion misidentification rate less than 2 % up to 2 GeV/\(c\), an electromagnetic calorimeter (EMC) with an excellent energy resolution and a good time resolution, a super-conducting solenoid, and a muon detector that provides good charged pion/muon separation. The design and the expected performances for each sub-detector are detailed in the conceptual design report [16]. At present, the STCF detector and the corresponding offline software system are under research and development. A fast simulation software is therefore developed to access the physics study [20], which takes the most common event generators as input to perform a fast and realistic simulation. The simulation includes resolution and efficiency responses for tracking of final state charged particles and photons, PID system and kinematic fit related variables. Besides, the fast simulation also provides flexible interface for adjusting performance of each sub-system, which can be used to optimize the detector design according to the physics requirements. This sensitivity study is performed based on Monte Carlo (MC) samples with 1 ab\({}^{-1}\) integrated luminosity at 4.26 GeV. Samples with all the possible processes are generated, including \(e^{+}e^{-}\rightarrow\ell^{+}\ell^{-}(\ell=e,\mu)\) (Bhabha and dimu) and \(e^{+}e^{-}\rightarrow\gamma\gamma\) (digamma) generated with Babayaga[21; 22], hadronic processes \(e^{+}e^{-}\to q\bar{q}(q=u,d,s,c)\) generated with LundArlw[23] and \(e^{+}e^{-}\rightarrow\tau^{+}\tau^{-}\) (ditau) where the production of tau pair is generated with KKMC [18; 19] and the decay of tau is generated with Tauola[24]. Considering the computing power requirements and potential background levels, the background MC samples are generated with different statistics and then scaled to 1 ab\({}^{-1}\). The statistics for Bhabha, dimu, digamma, ditau and hadronic MC samples correspond to the effective luminosity of 0.01 ab\({}^{-1}\), 1 ab\({}^{-1}\), 0.1 ab\({}^{-1}\), 5 ab\({}^{-1}\) and 0.1 ab\({}^{-1}\), respectively. 
The signal MC sample is simulated with process \(e^{+}e^{-}\rightarrow\tau^{+}\tau^{-}\) with one tau goes to SM decay modes and the other decays to \(\gamma\mu\). The decay \(\tau\rightarrow\gamma\mu\) is generated with pure phase space model since the dynamics is unknown. ## 3 Analysis procedure At STCF, \(\tau^{+}\) and \(\tau^{-}\) are produced in pairs, so we can tag \(\tau^{+}\) (denoted as tag side) by its SM decay modes and search for cLFV decay of its partner \(\tau^{-}\) (denoted as signal side). The charge-conjugated channels are always implied throughout the paper. For the tag side, amongst the five main 1-prong decay modes of \(\tau^{+}\), \(e^{+}\nu_{e}\nu_{\tau}\), \(\pi^{+}\nu_{\tau}\) and \(\pi^{+}\pi^{0}\nu_{\tau}\) are selected as tag modes, which account for 54 % of the total BF of tau decays [25]. \(\tau^{+}\rightarrow\mu\nu_{\mu}\bar{\nu}_{\tau}\) mode is not used due to high \(e^{+}e^{-}\rightarrow(\gamma)\mu^{+}\mu^{-}\) background. As for \(\tau^{+}\rightarrow\pi^{+}\pi^{0}\pi^{0}\bar{\nu}_{\tau}\) mode, due to the high photon multiplicity, the reconstruction efficiency is low and combinatorial background is high, and the tag photons can be easily misidentified as signal photon. The signal side consists of a signal photon and a signal muon and is featured by a peak around the beam energy on the total energy distribution and a peak around the tau mass on the invariant mass distribution of signal photon and muon, as shown in Fig. 1. Events with two reconstructed charged tracks and zero net-charge are selected. Charged tracks are selected after passing the criteria in fast simulation. PID is also performed for charged tracks by the fast simulation and one of the tracks is required to be identified as electron or pion, which is tag charged particle, and the other is required to be identified as muon, which is signal muon. \(E/p\) information is then used for further identification, where \(E\) is the deposited energy in EMC and \(p\) is the momentum of the track. \(E/p>0.8\) is required for electron and \(E/p<0.5\) for muon and pion. Neutral pions are reconstructed with two-photon combinations with invariant masses within \(\pi^{0}\) mass window of around 0.12 GeV/\(c^{2}\) to 0.14 GeV/\(c^{2}\) determined by fitting. It is required that the number of remaining photons after neutral pions reconstruction is equal to one and the photon is denoted as signal photon. To further suppress background, the momentum of signal muon and energy of signal photon are both required to be in \([0.4,\,1.7]\) GeV, and the angle between them is required to satisfy \(\cos\theta_{\gamma\mu}<-0.35\), all of which are constrained by the kinematics (shown in Fig. 2). Finally, a two-dimensional signal region is chosen on the total energy \(E(\gamma\mu)\) and invariant mass \(M(\gamma\mu)\) distributions of signal photon and signal muon. Since the two distributions are asymmetric and correlated, the signal region is an asymmetric oblique ellipse, as shown in Fig. 1. The tag mode for each event is then assigned based on the event selection result. The event is classified into \(e^{+}\nu_{e}\nu_{\tau}\) mode if one electron is identified, and it is required that there are no neutral pions reconstructed. If one charged pion is identified, the event is further classified into \(\pi^{+}\nu_{\tau}\) or \(\pi^{+}\pi^{0}\bar{\nu}_{\tau}\) mode based on whether the number of neutral pions is 0 or 1. Events that do not fit into the classifications are discarded. 
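The kinematic requirements above can be summarized in a few lines of code. The sketch below applies the box cuts on the signal-muon momentum, signal-photon energy, and opening angle, followed by a rotated-ellipse test in the \(M(\gamma\mu)\) vs. \(E(\gamma\mu)\) plane; the ellipse centre, semi-axes, and tilt are placeholder numbers, not the optimized values of the analysis.

```python
# Illustrative sketch of the kinematic selection described above: box cuts on the
# signal-muon momentum / signal-photon energy and cos(theta), plus an oblique elliptical
# signal region in the (M(gamma mu), E(gamma mu)) plane.  The ellipse centre, semi-axes
# and tilt below are placeholder numbers, not those used in the analysis.
import numpy as np

M_TAU, E_BEAM = 1.777, 2.13        # GeV; tau mass and half of sqrt(s) = 4.26 GeV

def passes_box_cuts(p_mu, e_gamma, cos_theta_gm):
    return (0.4 <= p_mu <= 1.7) and (0.4 <= e_gamma <= 1.7) and (cos_theta_gm < -0.35)

def in_signal_ellipse(m_gm, e_gm, centre=(M_TAU, E_BEAM), axes=(0.03, 0.08), tilt=0.6):
    """Rotated-ellipse membership test; the tilt (rad) models the M-E correlation."""
    dm, de = m_gm - centre[0], e_gm - centre[1]
    u = np.cos(tilt) * dm + np.sin(tilt) * de
    v = -np.sin(tilt) * dm + np.cos(tilt) * de
    return (u / axes[0])**2 + (v / axes[1])**2 <= 1.0

# toy usage on one pseudo-event
print(passes_box_cuts(1.1, 0.9, -0.7), in_signal_ellipse(1.78, 2.10))
```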
After above initial selections, there are mainly four kinds of background. For \(e^{+}\nu_{e}\bar{\nu}_{\tau}\) tag mode, the main background is ditau process where tag tau radiative decays to electron, signal tau SM decays to mu, and signal photon is misidentified from radiative photon of tag side. For \(\pi^{+}\bar{\nu}_{\tau}\) tag mode, the main background is ditmu and ditau processes. In dimu process, one muon is misidentified as tag pion, the other muon is regarded as signal muon, and signal photon is misidentified from radiative photon. For ditau process, signal tau SM decays to muon, tag tau decays to \(\pi\pi^{0}\) with \(\pi^{0}\) not successfully reconstructed and the daughter photons are regarded as signal photon or not detected. For \(\pi^{+}\pi^{0}\bar{\nu}_{\tau}\) tag mode, the main background is ditau process which has similar reason as the same background process in \(\pi^{+}\bar{\nu}_{\tau}\) tag mode, and hadronic process, mainly \(\pi^{+}\pi^{-}+(n)\pi^{0}\), where signal muon is misidentified from charged pion and signal photon is from \(\pi^{0}\)(s) that are not successfully reconstructed. Further event selection criteria are determined based on the characteristics of background. For ditau background in \(e^{+}\nu_{e}\bar{\nu}_{\tau}\) tag mode, the signal photon is from radiative leptonic decay of tag tau, thus collinear with the tag charged track, as shown in Fig. 3(a). Moreover, the momentum of tag charged track is lowered due to the existence of radiative photon (Fig. 3(b)). Since neutrinos in the final states are not detected, there will be missing four-momentum defined as the total initial four-momentum subtracted by the four-momenta of all detected final state particles. There are more neutrinos in background than in signal, so the missing energy \(E_{\rm miss}\) in background is higher (Fig. 3(c)), where \(E_{\rm miss}\) is defined as energy component of missing four-momentum. For radiative dimu background in \(\pi^{+}\bar{\nu}_{\tau}\) tag mode, \(E_{\rm miss}\) is lower since there are no neutrinos (Fig. 4(a)), and the direction of missing momentum accumulates at beam direction since missing momentum is mainly due to radiative photons which are collinear with beam and can escape in the beam direction which is beyond detector coverage (Fig. 4(b)). Furthermore, the energy of signal photon is lower since it is from radiation (Fig. 4(c)). For ditau background in \(\pi^{+}\bar{\nu}_{\tau}\) tag mode, in contrast to the signal which has only one neutrino, there are more neutrinos, so the missing mass squared \(M^{2}_{\rm miss}\) of signal is zero while this background not (Fig. 4(d)). The \(M^{2}_{\rm miss}\) is defined as the square of invariant mass of the missing four-momentum. For ditau background in \(\pi^{+}\pi^{0}\bar{\nu}_{\tau}\) tag mode, the \(M^{2}_{\rm miss}\) distribution has similar characteristic with \(\pi^{+}\bar{\nu}_{\tau}\) tag mode (Fig. 5(a)). The distribution of helicity angle of signal muon is also different in signal and background due to different decay dynamics of signal tau (Fig. 5(b)), where the helicity angle is defined as the angle between the direction of signal muon in signal tau rest frame and the direction of signal tau in center-of-mass frame. For hadronic background in \(\pi^{+}\pi^{0}\bar{\nu}_{\tau}\) tag mode which is mainly \(\pi^{+}\pi^{-}+(n)\pi^{0}\), the missing momentum is due to photons escaping in beam direction (Fig. 5(c)). 
To determine the concrete selection criteria, Punzi significance \(\epsilon/(1.5+\sqrt{N_{\rm bkg}})\)[26] is used as the figure of merit, where \(\epsilon\) is signal efficiency and \(N_{\rm bkg}\) is the number of background events. A multidimensional optimization is performed for all the criteria simultaneously. Table 1 summarizes the further selection criteria and background levels and signal efficiencies before and after further selection. The final background level is suppressed to be only a few with signal efficiency of several percents. A Bayesian-based maximum likelihood estimator, extended from the profile likelihood approach [27], is used to determine the UL on BF of \(\tau\rightarrow\gamma\mu\) with statistical fluctuations taking info account. The likeli Figure 1: The box plot is the two-dimensional distribution of \(E(\gamma\mu)\) and \(M(\gamma\mu)\) where larger box indicates higher density, and the red line shows the signal region. Figure 2: The kinematic distributions of (a) momentum of muon (the distribution of energy of photon is similar since they are both light compared to tau) and (b) cosine of angle between photon and muon for \(\tau\rightarrow\gamma\mu\) in \(e^{+}e^{-}\rightarrow\tau^{+}\tau^{-}\) at \(\sqrt{s}=4.26\,\mathrm{GeV}\). \[\begin{split}\mathcal{L}&=\text{Poisson}(N_{\text{obs}},2N _{\tau^{+}\tau^{-}}\times\mathcal{B}\times\varepsilon+\sum_{i}N_{\text{bkg},i })\\ &\times\prod_{i}\text{Poisson}(N_{\text{bkg},i}^{\text{obs}},N_{ \text{bkg},i}/f_{i}),\end{split} \tag{2}\] where \(N_{\text{obs}}\) is the observed number of events, \(N_{\tau^{+}\tau^{-}}\) is the number of tau pairs, \(\mathcal{B}\) is BF of \(\tau\to\gamma\mu\) and is the parameter of interest, \(\varepsilon\) is signal efficiency, \(N_{\text{bkg},i}\) with \(i\) runs over all the background samples are the _true_ values of number of background events which are nuisance parameters, \(N_{\text{bkg},i}^{\text{obs}}\) and \(f_{i}\) are the observed number of events and scale factors \begin{table} \begin{tabular}{c|c|c|c} \hline \hline tag mode & selection criteria & \(N_{\text{bkg}}\) (\(\varepsilon\)) before & \(N_{\text{bkg}}\) (\(\varepsilon\)) after \\ \hline \multirow{3}{*}{\(e^{+}\nu_{\tau}\nu_{\tau}\)} & \(\cos\theta_{\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t}}<-0.2\) & \multirow{3}{*}{\(1.5\times 10^{2}\) (2.6 \%)} & \multirow{3}{*}{\(0\) (1.1 \%)} \\ & \(P_{\text{sig\_t},\text{sig\_t}}>0.5\,\text{GeV}/\) & & \\ & \(E_{\text{miss}}<1.7\,\text{GeV}/\) & & \\ \hline \multirow{3}{*}{\(\pi^{+}\nu_{\tau}\)} & \(E_{\text{miss}}=0.7\,\text{GeV}/\) & \multirow{3}{*}{\(1.4\times 10^{4}\) (4.0\%)} & \multirow{3}{*}{\(0\) (3.0 \%)} \\ & \(N_{\text{bkg},\gamma}^{\text{miss}}<0.05\,\text{GeV}^{2}/\)c & & \\ \hline \multirow{3}{*}{\(\pi^{+}\pi^{0}\nu_{\tau}\)} & \(M_{\text{miss}}^{\text{obs}}<0.075\,\text{GeV}^{2}/\)c & \multirow{3}{*}{\(1.2\times 10^{3}\) (2.6 \%)} & \multirow{3}{*}{\(1.2\) (1.5 \%)} \\ & \(|\cos\theta_{\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t}, \text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{ sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t}, \text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t}, \text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t}, \text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t}, \text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t},\text{sig\_t}, 
\) & & \\ \hline \hline \end{tabular} \end{table} Table 1: Further selection criteria for each tag mode, together with the background levels \(N_{\rm bkg}\) and signal efficiencies \(\varepsilon\) before and after the further selection.
Since this is a sensitivity study based on pure MC samples without real data, the MC samples are taken as fake data, namely assuming that the observed number of events is equal to the number of background events estimated with MC samples. With total luminosity of 1 ab\({}^{-1}\), the sensitivity of UL of BF is estimated to be at the level of \(10^{-8}\) at 90 % C.L. A full systematic uncertainty evaluation which requires both experimental data and full MC simulation is not possible at this stage, so it will be qualitatively discussed. Referring to Eq. 2, the possible sources of systematic uncertainties include the number of tau pairs, the event selection efficiency, and the estimation of background. The uncertainty of number of tau pairs comes from the determination of luminosity and cross-section of \(e^{+}e^{-}\rightarrow\tau^{+}\tau^{-}\). For efficiency, the statistical uncertainty can be negligible with large MC samples; the uncertainty of tracking and PID of tracks and reconstruction of neutral pions, which is evaluated with difference between data and MC, can be studied with pure and high statistical control samples; uncertainties related with other selection criteria can be evaluated by control samples or varying the criteria and performing the Barlow test [28]; for the modeling of \(\tau\rightarrow\gamma\mu\) decay, although the LFV interaction structure in unknown, the uncertainty can be estimated by assuming extreme cases such as pure \(V-A\) and \(V+A\) forms. The uncertainty of background estimation can be evaluated with side band of data or control samples and verifying that the estimated background is consistent with that in real data. The total systematic uncertainties at STCF are expected to be at the level of several percents or less, which only have minor impact on the sensitivity. ## 4 Optimization of detector performance The performance of STCF detector is tunable in the fast simulation, and the sensitivity of \(\tau\rightarrow\gamma\mu\) under different performances are studied to guide the design of the detector. The detector performance properties that are crucial to this analysis are determined based on the main background channels where photons and muons are misidentified as signals. One of the main origins of the signal muon is misidentification of pion, while the signal photon is misidentified from photon with other origins. So, both pion/muon separation capability and photon detection resolution are relevant to this analysis. Better pion/muon separation capability can efficiently suppress background caused by pion/muon misidentification, and better photon detection resolution will improve the resolution of signal region thus exclude more background. With fast simulation, three kinds of detector responses can be studied: pion/muon separation, photon energy resolution and photon position resolution. Considering the feasibility of detector, a set of different values is assumed for each of the performance properties, from conservative to aggressive. The best detector performance is taken as benchmark, and the dependence of the sensitivity upon each performance property is checked by fixing the other properties and only varying the one under study. The benchmark result is sensitivity of \(1.8\times 10^{-8}\) with 1 ab\({}^{-1}\) luminosity under the detector performance of 1 % in pion/muon misidentification rate and 3 mm and 2 % in photon position and energy resolutions. 
pion/muon separationOn the one hand, better pion/muon separation can suppress background caused by pion misidentified as signal muon. On the other hand, better separation means more strict selection of muon, which will cause lower signal efficiency. Three levels of pion/muon separation capability is assumed with overall misidentification rates of 3 %, 1.7 % and 1 %, which corresponding to the muon identification efficiency of 85 %, 92 % and 97 % at a momentum of 1 GeV\(/c\), respectively. Table 2 summarizes the efficiency of noun identification and the sensitivity of \(\tau\rightarrow\gamma\mu\) with respect to pion/muon misidentification rate. The sensitivity is also shown in Fig. 9. The result shows that the sensitivity improves with better pion/muon separation. photon position resolution is 6 mm and an improvement of 30 % and 50 % is assumed for optimization. The signal resolution under different photon position resolutions is shown in Fig. 7, and the sensitivity is summarized in Table 3 and Fig. 9. It can be seen that better photon position resolution will result in better sensitivity, but the influence is rather small, this is because the baseline resolution is already quite good. photon position and energy resolutions, the best sensitivity of \(1.8\times 10^{-8}\) at 90 % confidence level is achieved. Since background-free can not be achieved, the sensitivity is expected to scale with the square root of the luminosity, and could reach \(5.7\times 10^{-9}\) with ten-year of data taking, which is about one order of magnitude improvement upon the current best result. ###### Acknowledgements. We thank the Hefei Comprehensive National Science Center for their strong support. This work is supported by the Joint Large-Scale Scientific Facility Funds of the National Natural Science Foundation of China and Chinese Academy of Sciences (CAS) under Contract No. U1832207, the National Key R&D Program of China under Contracts No. 2020YFA0406400, and the international partnership program of the CAS Grant No. 211134KYSB20200057.
2309.06327
Toward Consistent High-fidelity Quantum Learning on Unstable Devices via Efficient In-situ Calibration
In the near-term noisy intermediate-scale quantum (NISQ) era, high noise will significantly reduce the fidelity of quantum computing. Besides, the noise on quantum devices is not stable. This leads to a challenging problem: At run-time, is there a way to efficiently achieve a consistent high-fidelity quantum system on unstable devices? To study this problem, we take quantum learning (a.k.a., variational quantum algorithm) as a vehicle, such as combinatorial optimization and machine learning. A straightforward approach is to optimize a Circuit with a parameter-shift approach on the target quantum device before using it; however, the optimization has an extremely high time cost, which is not practical at run-time. To address the pressing issue, in this paper, we proposed a novel quantum pulse-based noise adaptation framework, namely QuPAD. In the proposed framework, first, we identify that the CNOT gate is the fidelity bottleneck of the conventional VQC, and we employ a more robust parameterized multi-quit gate (i.e., Rzx gate) to replace the CNOT gate. Second, by benchmarking the Rzx gate with different parameters, we build a fitting function for each coupling qubit pair, such that the deviation between the theoretic output of the Rzx gate and its on-device output under a given pulse amplitude and duration can be efficiently predicted. On top of this, an evolutionary algorithm is devised to identify the pulse amplitude and duration of each Rzx gate (i.e., calibration) and find the quantum circuits with high fidelity. Experiments show that the runtime on quantum devices of QuPAD with 8-10 qubits is less than 15 minutes, which is up to 270x faster than the parameter-shift approach. In addition, compared to the vanilla VQC as a baseline, QuPAD can achieve 59.33% accuracy gain on a classification task, and average 66.34% closer to ground state energy for molecular simulation.
Zhirui Hu, Robert Wolle, Mingzhen Tian, Qiang Guan, Travis Humble, Weiwen Jiang
2023-09-12T15:39:06Z
http://arxiv.org/abs/2309.06327v1
Toward Consistent High-fidelity Quantum Learning on Unstable Devices via Efficient In-situ Calibration ###### Abstract In the near-term noisy intermediate-scale quantum (NISO) era, high noise will significantly reduce the fidelity of quantum computing. What's worse, recent works reveal that the noise on quantum devices is not stable, that is, the noise is dynamically changing over time. This leads to an imminent challenging problem: At run-time, is there a way to efficiently achieve a consistent high-fidelity quantum system on unstable devices? To study this problem, we take quantum learning (a.k.a., variational quantum algorithm) as a vehicle, which has a wide range of applications, such as combinatorial optimization and machine learning. A straightforward approach is to optimize a variational quantum circuit (VQC) with a parameter-shift approach on the target quantum device before using it; however, the optimization has an extremely high time cost, which is not practical at run-time. To address the pressing issue, in this paper, we proposed a novel quantum pulse-based noise adaptation framework, namely QuPAD. In the proposed framework, first, we identify that the CNOT gate is the fidelity bottleneck of the conventional VQC, and we employ a more robust parameterized multi-qubit gate (i.e., Rzx gate) to replace CNOT gate. Second, by benchmarking Rzx gate with different parameters, we build a fitting function for each coupling qubit pair, such that the deviation between the theoretic output of Rzx gate and its on-device output under a given pulse amplitude and duration can be efficiently predicted. On top of this, an evolutionary algorithm is devised to identify the pulse amplitude and duration of each Rzx gate (i.e., calibration) and find the quantum circuits with high fidelity. Experiments show that the runtime on quantum devices of QuPAD with 8-10 qubits is less than 15 minutes, which is up to 270\(\times\) faster than the parameter-shift approach. In addition, compared to the vanilla VQC as a baseline, QuPAD can achieve 59.33% accuracy gain on a classification task, and average 66.34% closer to ground state energy for molecular simulation. Quantum Learning, Quantum Noise, Unstable Noise, Noise Adaptation, Pulse Calibration. ## I Introduction Quantum learning has a wide range of applications with theoretic proof of quantum speedup over classical computing [1, 2, 3, 4], such as molecular simulation [5, 6, 7], combinatorial optimization [8, 9, 10, 11], and machine learning [12, 13, 14, 15, 16]. However, such quantum speedup can dismiss when deploying to actual quantum devices. Specifically, as We are currently in the Noisy Intermediate-Scale Quantum (NISQ) era, the high level of noise in quantum computing is a roadblock to unleashing the power of quantum learning for real-world applications. One promising solution is quantum error correction (QEC) [17, 18] for fault-tolerant quantum computing, which however requires thousands or even more physical qubits for one perfect qubit; this is not practical in the near-term NISQ systems. Another possible solution is Quantum Error Mitigation (QEM) [19, 20, 21]; however, a recent work [22] reveals the scalability issue of QEM, which has exponentially growing overhead with circuit depth. The scalability issue is magnified in near-term quantum devices, where the noise is not stable [23]. 
It, therefore, calls for a more lightweight noise suppression technique to deal with temporal varied quantum noise, called **Quantum Error Adaptation (QEA)** in this paper. Recently, there are growing research works [23, 24, 25, 26, 27, 28] noticed that the quantum noise exhibits considerable variability over time, which can result in the system fidelity being inconsistent at different times. As an example, we trace the error rate on IBM's actual quantum processors for one entire year and tested the performance of a quantum circuit for a classification task. Figure 1(a) show the error rate of a pair of qubits, which varies from 0.005 to 0.13 in a year. As a result, the accuracy of the classification task changed from 0.82 to 0.2, as shown in Figure 1(b). In realistic applications, users are usually blind to the application's performance caused by noise, since quantum vendors only provide noise data. Therefore, it is critical to have a systematic approach to transparently ensure a stable performance of a quantum learning system. Figure 1: Illustration of unstable noise, its influence, and solutions: (a) unstable noise leads to fluctuating error rates of the two-qubit gate over one year; (b) influence of unstable noise on quantum learning-based classification; (c) straightforward applying the existing technique to address unstable noise; (d) the proposed lightweight adaptation using in-situ calibration.
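To make the calibration idea concrete, the sketch below writes down the ideal \(R_{zx}(\theta)\) unitary that replaces the CNOT gate and fits a simple quadratic surrogate mapping (pulse amplitude, duration) to the ideal-versus-measured deviation. The quadratic ansatz and the synthetic benchmark points are assumptions made only for illustration; they are not the fitting function or the measurements used in QuPAD.

```python
# Illustrative sketch only: the ideal RZX(theta) unitary used in place of CNOT, plus a
# polynomial surrogate fitted to (amplitude, duration) -> deviation benchmark points.
# The quadratic form and the synthetic data are assumptions, not the paper's model or data.
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
ZX = np.kron(Z, X)

def rzx(theta):
    """Ideal two-qubit RZX gate: exp(-i * theta/2 * Z (x) X)."""
    return np.cos(theta / 2) * np.eye(4) - 1j * np.sin(theta / 2) * ZX

def fit_deviation_surface(amps, durs, deviations):
    """Least-squares fit of deviation(amp, dur) with a quadratic ansatz (assumed form)."""
    A = np.column_stack([np.ones_like(amps), amps, durs,
                         amps**2, durs**2, amps * durs])
    coeff, *_ = np.linalg.lstsq(A, deviations, rcond=None)
    return lambda a, d: np.array([1, a, d, a * a, d * d, a * d]) @ coeff

# synthetic benchmark points standing in for on-device calibration data
rng = np.random.default_rng(0)
amps = rng.uniform(0.1, 0.9, 50)
durs = rng.uniform(64, 512, 50)
devs = 0.02 + 0.1 * (amps - 0.5) ** 2 + 1e-4 * durs + 0.005 * rng.standard_normal(50)
predict = fit_deviation_surface(amps, durs, devs)
print("predicted deviation at amp=0.4, dur=256:", predict(0.4, 256))
```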
2302.14451
Hierarchical Reinforcement Learning in Complex 3D Environments
Hierarchical Reinforcement Learning (HRL) agents have the potential to demonstrate appealing capabilities such as planning and exploration with abstraction, transfer, and skill reuse. Recent successes with HRL across different domains provide evidence that practical, effective HRL agents are possible, even if existing agents do not yet fully realize the potential of HRL. Despite these successes, visually complex partially observable 3D environments remained a challenge for HRL agents. We address this issue with Hierarchical Hybrid Offline-Online (H2O2), a hierarchical deep reinforcement learning agent that discovers and learns to use options from scratch using its own experience. We show that H2O2 is competitive with a strong non-hierarchical Muesli baseline in the DeepMind Hard Eight tasks and we shed new light on the problem of learning hierarchical agents in complex environments. Our empirical study of H2O2 reveals previously unnoticed practical challenges and brings new perspective to the current understanding of hierarchical agents in complex domains.
Bernardo Avila Pires, Feryal Behbahani, Hubert Soyer, Kyriacos Nikiforou, Thomas Keck, Satinder Singh
2023-02-28T09:56:36Z
http://arxiv.org/abs/2302.14451v1
# Hierarchical Reinforcement Learning in Complex 3D Environments ###### Abstract Hierarchical Reinforcement Learning (HRL) agents have the potential to demonstrate appealing capabilities such as planning and exploration with abstraction, transfer, and skill reuse. Recent successes with HRL across different domains provide evidence that practical, effective HRL agents are possible, even if existing agents do not yet fully realize the potential of HRL. Despite these successes, visually complex partially observable 3D environments remained a challenge for HRL agents. We address this issue with Hierarchical Hybrid Offline-Online (H2O2), a hierarchical deep reinforcement learning agent that discovers and learns to use options from scratch using its own experience. We show that H2O2 is competitive with a strong non-hierarchical Muesli baseline in the DeepMind Hard Eight tasks and we shed new light on the problem of learning hierarchical agents in complex environments. Our empirical study of H2O2 reveals previously unnoticed practical challenges and brings new perspective to the current understanding of hierarchical agents in complex domains. Hierarchical Reinforcement Learning, Partially Observable Markov Decision Processes, Deep Reinforcement Learning ## 1 Introduction Hierarchical Reinforcement Learning (HRL; Barto and Mahadevan, 2003; Hutsebaut-Buysse et al., 2022; Pateria et al., 2021; Sutton and Barto, 2018) is a framework that could provide us with general and reusable agent representations and behaviors that can exhibit improved exploration and temporal abstraction (Nachum et al., 2019). The inspiration comes from humans' ability to break down novel tasks into a sequence of simpler sub-tasks they know how to solve (Solway et al., 2014). This hierarchical approach enables us to transfer our knowledge and reuse our skills to solve new problems. Contributions.In this work we introduce Hierarchical Hybrid Offline-Online (H2O2), a hierarchical deep reinforcement learning agent that discovers and learns to use options from scratch using its own experience. We show that H2O2 is competitive with a strong non-hierarchical Muesli baseline (Hessel et al., 2021) in the Hard Eight task suite Gulcehre et al. (2019); Ward et al. (2020). These are challenging sparse-reward tasks in a complex partially observable, first-person 3D environment. H2O2 employs a combination of primitive actions and temporally-extended options selected from a continuous option space1. To the best of our knowledge, this is the first hierarchical agent that can be competitive with a strong flat baseline in tasks as complex as the Hard Eight suite, while demonstrably using options to solve tasks. Footnote 1: We also provide videos of the agent (see Appendix C.4 for details) Our work also sheds new light on the problem of learning hierarchical agents and learning options in complex environments. We use H2O2 to test a number of hypotheses about its learning and performance in response to changes in its hierarchical design, and our results reveal previously undetected practical challenges. While some of our experiments support conclusions in line with the conventional understanding of HRL, others challenged our understanding of hierarchical agents. For instance, we observed that seemingly beneficial actions such as increasing the agent's option space or allowing it to learn longer options, can actually hurt its performance. 
## 2 Background and Related Work A common HRL approach is to add options to the MDP, turning it into a semi-MDP (**SMDP**; Sutton et al., 1999), and then use a general-purpose RL algorithm to solve the SMDP (Barto and Mahadevan, 2003; Dayan and Hinton, 1992). _SMDP agents_ decompose into a low-level controller (**LLC**, which executes the options in the original MDP) and a high-level controller (**HLC**, which learns to solve the SMDP). The options may include the actions of the MDP (the _primitive actions_) in addition to temporally extended behaviors _per se_, so that no generality is lost when solving the SMDP instead of solving the MDP "directly". The SMDP strategy effectively changes the problem for the general-purpose RL algorithm. It is possible to add various capabilities to the general-purpose RL algorithm by augmenting the SMDP with expressive, diverse options (Barreto et al., 2019), additional/alternative state representations (Dayan and Hinton, 1992; Shah et al., 2022), and even actions to exert fine-grained control over the options (Barto and Mahadevan, 2003; Precup, 2000). Options can be learned from experience, which can be generated by the hierarchical agent itself ("learned from scratch"; Ajay et al., 2020; Bacon et al., 2017; Eysenbach et al., 2018; Hafner et al., 2022; Harutyunyan et al., 2019; Wulfmeier et al., 2021), or by another agent (for example, an expert; Lynch and Sermanet, 2020; Lynch et al., 2020; Merel et al., 2019). The agent can be learned as one unit (Ajay et al., 2020; Bacon et al., 2017; Merel et al., 2019; Wulfmeier et al., 2021), but one can also decouple the LLC's option-learning and the HLC's general-purpose RL algorithm (Dayan and Hinton, 1992; Hafner et al., 2022; Vezhnevets et al., 2017). There have been recent successes in learning _goal-conditioned_ options (Ajay et al., 2020; Khazatsky et al., 2021; Lynch and Sermanet, 2020; Lynch et al., 2020; Machado and Bowling, 2016; Mendonca et al., 2021). These behaviors are trained to produce specific outcomes (observations or states) in the environment, and they are commonly learned in hindsight from trajectories of agent (Andrychowicz et al., 2017). The idea is to identify goals achieved in each trajectory, and use trajectories as demonstrations of behavior that achieves the goal. The policy can be trained, for example, using behavior cloning (BC; Pomerleau, 1989) or offline RL (Fu et al., 2020; Fujimoto et al., 2019; Gulcehre et al., 2020; Lange et al., 2012; Levine et al., 2020; Nachum et al., 2018). The effectiveness of the learned options is largely affected by the choice of policy-learning algorithm, the data available, and how goals are discovered. For example, BC has been shown to yield effective goal-conditioned policies when used on data generated by experts (Lynch et al., 2020), but not on non-expert data (Ajay et al., 2020), whereas offline RL has shown more promise in the latter case. Discovering both which sub-behaviors to learn and how to combine them can be tackled by pre-learning skills/behaviors with a variety of signals such as expert demonstrations (Gupta et al., 2020), pseudo-rewards (Barreto et al., 2019), state-space coverage (Eysenbach et al., 2018; Islam et al., 2019; Lee et al., 2019; Pong et al., 2020), empowerment (Gregor et al., 2017), among many others. Alternatively, the agent can learn its sub-behaviors from its own data, that is, "from scratch" (Hafner et al., 2022; Wulfmeier et al., 2021). 
This approach has the appeal of being end-to-end, and is philosophically aligned with mainstream deep RL, where agents learn on data that is relevant to the task they are expected to perform (Fu et al., 2020; Gulcehre et al., 2020; Mnih et al., 2015; Silver et al., 2018). It is justified on the grounds that a learning agent will ultimately have to collect novel experience and learn novel sub-behaviors on that experience. The set of options added to the SMDP can also vary. If there are only a few options, they can be learnt as separate entities (Bacon et al., 2017; Wulfmeier et al., 2021). A much larger set of options (and, more specifically, goal conditioned policies), on the other hand, can be learned implicitly by encoding the options (goals) in latent space, and treating any element of the latent space as a valid option (goal) (Ajay et al., 2020; Hafner et al., 2022; Lynch and Sermanet, 2020; Lynch et al., 2020; Merel et al., 2019). In this case the whole latent space is part of the action space for the SMDP, and the HLC needs to learn to select elements of this latent space. The complexity of the set of options can be, to a certain extent, limited, by regularizing or constraining the latent output of the option encoders (Ajay et al., 2020; Hafner et al., 2022; Lynch and Sermanet, 2020; Lynch et al., 2020; Merel et al., 2019). We are not aware of any successful HRL approaches that encode goals in latent space but do not constrain the latent output of the option encoders in some way. This suggests that some manner of latent space regularization is essential for deep RL SMDP agents, and this hypothesis is consistent with the empirical findings we present in this work. We will see in our experiments that not constraining the latent output of the encoder is detrimental H2O2's performance. We are primarily interested in partially observable environments, and we adopt the typical deep RL setup for this type of domain: The agent, at each timestep \(t\), must select the action \(a_{t}\) according to a stochastic policy that can depend on the _history_ (the sequence of past observations \(o_{1},\ldots,o_{t}\) and actions \(a_{1},\ldots,a_{t-1}\)). This reduces the POMDP to an MDP whose states are the histories of past observations and actions (Cassandra et al., 1994). This design choice burdens the deep RL agent with learning to represent histories effectively, but it allows us to use general-purpose deep RL algorithms on both MDPs and POMDPs. For SMDP agents, both the MDP and the SMDP are treated as above: The LLC acts in a partially observable environment reduced to an MDP over histories, and the HLC acts in a partially observable environment reduced to an SMDP over histories. ## 3 Agent Design H2O2 is an SMDP agent with options learned from scratch, in offline fashion, that is _decoupled_ from the HLC. The options are goal-conditioned: The goals are selected in hindsight from experience and encoded into a latent space; then we use offline RL to train an LLC to attain these goals. The general-purpose deep RL algorithm used for the HLC is Muesli (Hessel et al., 2021), which has state-of-the-art performance in the Atari benchmark (Bellemare et al., 2013). We train the HLC online, through interaction with the environment, as usual for deep RL agents. Due to our agent's hierarchical design and how its components are trained, we call it Hierarchical Hybrid **O**ffline-**O**nline (**H2O2**). Figure 1 gives an overview of H2O2, and how its components interact with each other and the environment. 
We outline H2O2's main components in the rest of this section, and we give details in Appendix B. Figure 1: H2O2 component diagram. The dotted boxes indicate how components (in blue) are trained. The arrows indicate information (inputs, outputs and gradients) passed between components. ### Low-Level Controller Design **Training data.** The experience generated by H2O2 is inserted into a replay buffer (Cassirer et al., 2021; Horgan et al., 2018) as the agent interacts with the environment. The LLC learner processes minibatches of trajectories sampled (uniformly at random) from the replay (Horgan et al., 2018), where each trajectory has the form \((o_{1},a_{1},\dots,o_{n},a_{n})\). We sample _start_ and _end_ timesteps \(t_{s},t_{e}\) from the set \(\{(i,j):1\leq i<j\leq n\}\), and encode \(o_{t_{e}}\) with the Goal Encoder (see Fig. 1) to obtain a _latent goal_ \(g\in[-1,1]^{d}\). The sampled goal is fixed for the sampled subtrajectory (\(g_{t_{s}}=g_{t_{s}+1}=\dots=g_{t_{e}}=g\)) and each subtrajectory is treated as a separate episodic task terminating on timestep \(t_{e}\) (when the goal \(g\) is attained). The reward is \(r_{t}\doteq\mathbb{I}\{t=t_{e}\}\) (\(\mathbb{I}\) denotes the indicator function), that is, one for attaining the goal, and zero otherwise. During training we sample multiple pairs \(t_{s},t_{e}\) per trajectory in the minibatch, so we train the LLC on multiple tasks (goals) at once. The LLC's policy and value function are conditioned on the latent goal \(g_{t}\) and on the agent's _representation_ \(b_{t}\) (the "agent state"; Sutton and Barto, 2018). To compute \(b_{t}\), we process the observations and actions using a recurrent IMPALA-like network (Espeholt et al., 2018) (the Observation Encoder and the RNN Core in Fig. 1). For each \(t\), the representation depends only on the previous observations and actions, and on the recurrent state of the neural network before \(o_{1}\). **Goal Sampling during Training.** During goal sampling, we reject subtrajectories that are too short or too long (determined by hyperparameters), as well as goals that are too "similar" to other goals (details in Appendix B.1). We also increase the relative frequency of pairs \((t_{s},t_{e})\) where the reward from the environment is non-zero for some timestep \(t_{s}\leq t\leq t_{e}\). Increasing the frequency of reward-related goals allowed us to direct the behavior of the LLC to be meaningful for the RL problem without introducing too much of a dependency on task rewards, in contrast to learning options that directly optimize the environment reward (Hafner et al., 2022). Controlling the goal sampling distribution also provides a direct way to study the option discovery problem using H2O2. **Offline RL.** To learn the LLC policy \(\pi_{\text{LLC}}\), we introduce a _regularized_ offline V-Trace (Espeholt et al., 2018; Mathieu et al., 2021). \(\pi_{\text{LLC}}\) is a distribution over primitive actions that we optimize by following a V-Trace policy gradient. We also regularize \(\pi_{\text{LLC}}\) to stay close to an estimate of the behavior policy \(\widehat{\mu}\), trained with behavior cloning (Pomerleau, 1989). Similar to other offline RL works (Fujimoto et al., 2019; Gulcehre et al., 2021), we found this regularizer to be essential for training an effective LLC, as removing it or annealing it out made the offline RL ineffective. 
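Before the gradient step given below, the hindsight relabeling described under **Training data** and **Goal Sampling during Training** can be summarized in a short sketch. This is a minimal illustration under assumed names and thresholds (`goal_encoder`, `min_len`, `max_len`, and `reward_bias` are placeholders), not H2O2's actual data pipeline.

```python
import numpy as np

def sample_goal_relabeled_task(traj_obs, traj_act, env_rewards, goal_encoder,
                               min_len=2, max_len=32, reward_bias=0.5, rng=None):
    """Hindsight relabeling sketch: pick (t_s, t_e), encode o_{t_e} as the latent
    goal, and emit an episodic task whose reward is 1 only on the goal step."""
    rng = rng or np.random.default_rng()
    n = len(traj_obs)
    for _ in range(100):                         # rejection sampling with a retry budget
        t_s = rng.integers(0, n - 1)
        t_e = rng.integers(t_s + 1, n)
        length = t_e - t_s
        if not (min_len <= length <= max_len):
            continue                             # reject too-short / too-long subtrajectories
        has_reward = np.any(np.asarray(env_rewards[t_s:t_e + 1]) != 0)
        if not has_reward and rng.random() < reward_bias:
            continue                             # re-sample to up-weight reward-related goals
        g = goal_encoder(traj_obs[t_e])          # latent goal in [-1, 1]^d
        rewards = np.zeros(length, dtype=np.float32)
        rewards[-1] = 1.0                        # r_t = indicator{t == t_e}
        # alignment of obs/act slices with rewards is illustrative only
        return dict(obs=traj_obs[t_s:t_e], act=traj_act[t_s:t_e],
                    goal=g, rewards=rewards)
    return None
```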
The gradient step of regularized offline V-Trace is: \[\begin{split}&\frac{\pi_{\text{LLC}}(a_{t}|b_{t},g)}{\widehat{\mu}(a_{t}|b_{t},g)}\cdot\text{Adv}_{t}\cdot\nabla\log\pi_{\text{LLC}}(a_{t}|b_{t},g)\\ &-\alpha\nabla\,\mathrm{KL}\big(\pi_{\text{LLC}}(\cdot|b_{t},g)\,\|\,\widehat{\mu}(\cdot|b_{t},g)\big),\end{split} \tag{1}\] where \(\text{Adv}_{t}\) is the advantage estimate at time \(t\) (computed using V-Trace returns and the value estimate \(\widehat{V}^{\pi_{\text{LLC}}}\), Espeholt et al., 2018), \(\alpha\) is a fixed hyperparameter for the KL regularizer, and the gradient is taken only with respect to the parameters of \(\pi_{\text{LLC}}\) (without differentiating through \(\widehat{\mu}\)). Differently from the original V-Trace, Eq. (1) uses an estimate of the behavior policy instead of the behavior policy itself. Following Espeholt et al. (2018), we add a weighted neg-entropy regularizer (\(-H(\pi_{\text{LLC}})\)) to the objective for \(\pi_{\text{LLC}}\). We train the value function estimate \(\widehat{V}^{\pi_{\text{LLC}}}\) through regression (as done by Espeholt et al., 2018, and akin to Fitted Q Iteration, Ernst et al., 2005). The representation \(b_{t}\) and the latent goal \(g_{t}\) are shared between \(\pi_{\text{LLC}}\), \(\widehat{V}^{\pi_{\text{LLC}}}\) and \(\widehat{\mu}\), and all three learning tasks (regularized policy gradient, value learning and behavior cloning) flow gradients into the representation and the latent goal. **Variational Goal Encoder.** Our Goal Encoder is inspired by variational autoencoders (Kingma and Welling, 2013; Rezende et al., 2014) and it outputs a distribution over the goal space from which we can sample goals \(g\). Concretely, the Goal Encoder outputs the parameters of a multivariate normal distribution with diagonal covariance that is used to sample the latent goal \(g\). Differently from VAEs, we do not attempt to autoencode the input of the Goal Encoder from the sampled latent goals, but instead use the sampled latent goals for the Offline RL and auxiliary tasks. We use a KL regularizer term with weight \(\beta\) to encourage the multivariate normal to stay close to a standard normal distribution. A weight \(\beta=0\) will allow the goal space to be as expressive as afforded by the Goal Encoder, whereas a large enough \(\beta\) will effectively cause the goals \(g\) to be sampled from a standard normal distribution (ignoring the observation input to the encoder). The KL regularization is primarily for the benefit of the HLC, as our empirical results will show. We believe that this regularization makes the goal space smoother and makes it easier for the HLC to explore and choose goals. **Auxiliary Tasks.** In addition to the learning objectives outlined above, we employed three auxiliary prediction tasks to improve the quality of our LLC. The first one is training the Goal Model in Fig. 1 via maximum likelihood estimation of latent goals \(g_{t}\) conditioned on \(c_{t}\) (the output of the agent's Observation Encoder). This auxiliary task only flows gradients into the Observation Encoder (not \(g\)). We observed that, without this auxiliary task, the LLC would frequently ignore the goal conditioning. The second auxiliary task is to estimate the state value of the behavior policy, with respect to the environment rewards. The value estimate is a function of \(b_{t}\) and \(g\) and flows gradients into both. We found this auxiliary task to be beneficial, and we believe it helps by shaping \(b_{t}\) and \(g\) to encode information about rewarding states in the environment. 
The third auxiliary task is to predict, from each timestep \(t\), how far in the future the goal is. We frame the prediction task as a multiclass logistic classification task. During training, if the goal is on step \(t_{e}\), then the classification label for each step \(t\) is \(t_{e}-t\), out of \(n\) possible classes, where \(n\) is an upper-bound on how far into the future goals can be. ### SMDP and High-Level Controller Design The HLC can instruct the LLC to execute either primitive actions or goals (and the decision of which one to choose at each step is part of the HLC's policy), and the LLC executes them in the call-and-return model (Dayan and Hinton, 1992). **Option Termination and Initiation.** We improved our agent's sample efficiency by composing simple fixed rules and learned termination/initiation functions. We used a hard limit on option duration (timeout; Sutton et al., 1999) as the fixed rule, and an "attainment classifier" as the learned termination function. We built the attainment classifier from the LLC's auxiliary task that predicts (via classification) how far in the future the goal is. The option terminates when class 0 ("the goal is attained now") is the most likely class predicted by the time-to-goal classifier. The fixed initiation criterion is to allow any goal in any situation (Bacon et al., 2017). However, this is problematic with learned goal-conditioned behavior because it is possible to task the LLC with attaining goals that cannot be attained--either because the goals are implausible, or because the LLC is incapable. When the HLC requests an unattainable goal, the LLC will likely run until timeout, which has a significant cost in environment interactions. We observed this to be very problematic in early training, as the HLC can frequently select unattainable goals, but is oblivious to the sample cost of doing so. We addressed this issue by terminating options after one step if the goal was unattainable. A goal was deemed unattainable if the value estimate \(\widehat{V}^{\pi_{\text{LLC}}}(b_{t},g)\) was below a certain threshold. For a high enough threshold, this is a conservative criterion because the value estimates \(\widehat{V}^{\pi_{\text{LLC}}}(b_{t},g)\) will often only be high for goals that the LLC can achieve. **HLC Observations.** The LLC is responsible for what the HLC observes--it may forward environment observations, and it may also process and combine what has been observed during the execution of an option. In this work, the LLC simply forwards the environment observation to the HLC on the steps where the HLC gets to observe the SMDP and take action. **High-Level Controller.** We used Muesli (Hessel et al., 2021) as the general-purpose RL algorithm for the HLC. Muesli is a strong RL agent, and it is among the strongest RL agents in the Atari benchmark (Bellemare et al., 2013). Moreover, it admits policies over continuous and discrete actions, and this allows us to parameterize the policies we need for interacting with the LLC. For additional implementation details see Appendix B. ## 4 Experiments ### H2O2 is competitive with a strong flat baseline We evaluated H2O2 in the DeepMind Hard Eight suite (Gulcehre et al., 2019). These tasks are 3D, partially observable, procedurally generated, and they require exploration and complex behavior to solve (see Gulcehre et al. (2019) for detailed task descriptions). The agents are trained in a multi-task setting in all eight tasks. 
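For concreteness, the update in Eq. (1) can be written as a short sketch. This is a minimal NumPy illustration, assuming the per-timestep \(\nabla\log\pi_{\text{LLC}}\) and \(\nabla\,\mathrm{KL}\) terms are supplied by an autodiff framework and that the advantages already use V-Trace returns; it is not the agent's actual training code.

```python
import numpy as np

def regularized_vtrace_update(grad_logp_pi, grad_kl, logp_pi, logp_mu_hat, adv, alpha):
    """Parameter-space ascent direction implementing Eq. (1), summed over a batch.
    grad_logp_pi[t] is grad_theta log pi_LLC(a_t | b_t, g); grad_kl[t] is
    grad_theta KL(pi_LLC(. | b_t, g) || mu_hat(. | b_t, g)); all inputs are
    illustrative arrays that an autodiff framework would normally provide."""
    rho = np.exp(logp_pi - logp_mu_hat)              # importance ratio pi / mu_hat
    pg = (rho * adv)[:, None] * grad_logp_pi         # rho_t * Adv_t * grad log pi
    reg = -alpha * grad_kl                           # KL regularizer toward mu_hat
    return (pg + reg).sum(axis=0)
```

Likewise, the composed option-termination rules described above (timeout, attainment classifier, and the one-step cut for unattainable goals) amount to a few lines; the timeout and threshold values below are illustrative rather than the tuned settings.

```python
def should_terminate(step_in_option, time_to_goal_logits, value_llc,
                     timeout=7, unattainable_threshold=0.1):
    """Sketch of the composed termination rules used when executing an option."""
    if step_in_option == 1 and value_llc < unattainable_threshold:
        return True                      # unattainable goal: cut the option after one step
    if int(np.argmax(time_to_goal_logits)) == 0:
        return True                      # attainment classifier: "goal attained now"
    return step_in_option >= timeout     # fixed rule: hard duration limit
```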
The flat baseline is a Muesli agent, and it has only the minimal, necessary differences from the HLC's Muesli agent--for example, the action spaces differ between H2O2 and the flat baseline, so the policies need to be changed accordingly. Unless otherwise stated, all quantities reported in this work are binned over number of frames, and averaged across all tasks and over five independent runs. Bands show standard error over independent runs. Figure 2 shows the average return per episode of H2O2 and the flat baseline as a function of the number of frames generated by interacting with the environment (i.e., _throughout training_). Figure 2: Average episode return for H2O2 and the Muesli baseline. The plot shows that the two agents are competitive, with H2O2 attaining slightly higher performance more frequently. We report per-task performance in Fig. 9 in Appendix C.1, where we can see different variations of sample efficiency and final performance between the two agents across tasks. H2O2's improved performance is a demonstration of the effectiveness of our hierarchical agent, but the variations between H2O2 and the flat baseline performances in each task suggest that H2O2 is indeed learning differently from the flat baseline. How is the hierarchical design influencing H2O2's learning and final performance? What HRL capabilities is H2O2 demonstrating? ### Is H2O2 using temporally extended behaviors? Yes, but we did not observe that "the more temporal extension, the better". We found that parameters that control temporal extension have to be carefully selected in order to obtain better performance and even, "paradoxically", temporally extended behavior. The apparent paradox stems from considering the benefits of increasing temporal extension without accounting for how it impacts the learning problem. That is, an effective hierarchical agent with more temporally extended behavior is expected to perform at least as well as one with less temporally extended behavior, but giving an untrained agent access to more temporally extended behavior may make the learning problem harder. The problem may be so hard that even after significant training the learning agent may have subpar performance and it may fail to display any meaningful hierarchical behavior. To substantiate our claim, we measured the average number of environment (LLC) steps per SMDP (HLC) step, as well as task performance, of different variants of H2O2. Higher values of "average LLC steps" (per HLC step) mean H2O2 spent more timesteps in temporally extended behavior. An agent that exclusively executed primitive actions would have a ratio of 1. This ratio allows us to infer the fraction of steps spent executing options excluding the first step, which for the purpose of our discussion is how much temporally extended behavior an agent displays. The typical range for the average LLC steps is between 1.0 and 2.5. An average LLC steps ratio of 1.5 means the agent is in temporally extended behavior for about \(\frac{1}{3}\) of its interaction with the base environment. A ratio of 1.25 corresponds to temporal extension in at least 20% of the interaction, and a ratio of 2.5 corresponds to at least 60%. ### H2O2 with different option timeouts. We considered variants of H2O2 with different timeouts: 7, 16 and 32 steps (the timeout used for H2O2 in Fig. 2 was 7). Options terminate when either the goal is attained (according to the LLC's classifier) or at timeout. 
Because the options have a termination function, we would expect that increasing the timeout should increase the effectiveness and frequency of the agent's temporally extended behavior. Figure 2(a) shows, however, that this is not the case. Surprisingly, attempting to increase the amount of temporal abstraction by increasing timeouts eventually _harms_ H2O2's ability to employ temporally extended behavior. Moreover, H2O2's performance is surprisingly sensitive to the amount of temporal abstraction: even with a timeout of 16 (which has roughly the same amount of temporally extended behavior (Fig. 2(a)) as the timeout of 7), H2O2's performance is worse than with a timeout of 7 (see Fig. 2(b)). The performance with the timeout of 32 is worst, so this setting leads to poor behavior both in terms of temporal abstraction and task performance. Our data suggests that H2O2 with a timeout of 32 breaks down because the learning problem is too hard. We measured why and how often options terminate throughout training (see Fig. 10), and we saw that in this setting our agent spends about half of the first \(2\cdot 10^{9}\) frames issuing invalid goals (and thus generating transitions with no-ops). So we suspect that the HLC failed to generate "good" data for the LLC to learn effective goal-following behavior, which in turn led to an unnecessarily challenging SMDP for the HLC to solve, that is, one filled with useless options that waste several frames of environment interaction. ### H2O2 with different discounts. We also considered three variants of H2O2 with different discounts \(\gamma\), in \(\{0.9,0.99,0.997\}\) (the value used for H2O2 in Fig. 2 was 0.997). Since the discounting is only applied at HLC timesteps, the rewards \(n\) timesteps in the future will only be discounted by \(\gamma^{n}\), even if the number of primitive actions required to get to that state is significantly larger. For example, if options take an average of 1.5 steps, \(\gamma=0.9\) and \(n=10\), the reward for the HLC would be discounted by 0.59, whereas a flat agent executing the same actions would see the reward discounted by 0.21. This "horizon shortening" is expected to encourage the agent to use options, so we expect to see the variants with smaller \(\gamma\) using more options. Figure 3(a) shows that this is indeed the case. Figure 3(b) shows that as we decrease \(\gamma\) the agent spends more time in temporally extended behavior. HRL folklore suggests that the agent with more temporally extended behavior will perform better, because it will be able to assign credit at longer timescales. The results in Fig. 3(b) are evidence against this claim: The plot shows that H2O2 with the largest \(\gamma\) performs best, and that decreasing \(\gamma\) worsens performance (even though it increases option use, as shown in Fig. 3(a)). The issue is that changing \(\gamma\) also affects the objective of the HLC, and that changing the objective can change both the final solution and how the agent explores. 
Our hypothesis is that H2O2 with lower \(\gamma\) explores worse, possibly in three ways: 1) The HLC fails to generate behavior with higher rewards because it is optimizing for short-term cumulative reward; 2) The options learned from the poor-performing HLC are also poor (with respect to the task), and the agent is incentivized to choose poor-performing options over exploring further with primitive actions; 3) The longer options also reduce the amount of training data the HLC generates for itself; that is, the HLC is incentivized to generate less data for itself by using options. This third point makes the agent sample inefficient! Appendix C.3 shows that H2O2 outperforms a simpler idea of adding temporal abstraction to a flat baseline by increasing the number of repeated actions. ### Does H2O2 benefit from more options? Sometimes, and we argue that it depends on whether the options _simplify the problem_. HRL folklore suggests that making more skills available to the HLC empowers the agent and leads to better solutions. We claim that this is not necessarily the case, and that in H2O2, for learned options to be beneficial, they must _simplify_ the problem for the HLC. That is, it does not make sense to solve an SMDP that is harder to solve than the original MDP--in that case we are better off using the flat agent. We considered two ways to offer more options to H2O2: Increasing the dimension of the latent goals, and reducing the amount of regularization on the goal space. Figure 4(a) shows the performance of H2O2 where we varied the dimension of the latent goals. We considered dimensions in \(\{16,32,48\}\), and H2O2 from Fig. 2 uses 32. We see in the figure that this dimension behaves like a usual hyper-parameter: There is a sweet spot for highest performance at 32, but going lower or higher leads to worse performance. It is surprising, though, that a dimension of 32 works best, as we were initially expecting more expressive goals to be more effective. We also evaluated the effect of the Goal Encoder regularizer on the performance of H2O2. We considered \(\beta\in\{10^{-9},10^{-3},10^{-2},10^{-1},1\}\). When \(\beta=10^{-9}\) there is no compression of the goal space, but when \(\beta=1\) the regularization is so strong that the posterior is also a standard normal. We used \(\beta=1\) for H2O2 in Fig. 2. Figure 4(b) shows the performance of H2O2 for the different values of \(\beta\), and we see that H2O2 with the least diverse set of options (\(\beta=1\)) performs best, along with the larger values of \(\beta\). The data is consistent with the hypothesis that too much flexibility in the goal space makes the learning problem harder, so adding more options eventually damages the performance of the agent. Figure 5: Effect of constraining the space of options on overall performance. ### How does H2O2 perform in similar domains from related work? The work of Hafner et al. (2022) is the closest to ours: They introduced Director, a hierarchical deep RL agent for complex partially observable, first-person 3D environments. Director was evaluated in Goals Small and Objects Small in the DeepMind Lab environment (Beattie et al., 2016). These are first-person maze tasks that require navigation, localization and recalling reward locations within an episode. Director is an SMDP agent with call-and-return options, but without access to primitive actions, and all options terminate after a fixed number of steps. 
The options are "state"-directed, in the sense that the LLC ("Worker") is conditioned on latent vectors from the latent state space (the analogue of our \(b_{t}\) in Fig. 1), and is trained in hindsight for achieving the corresponding latent states. Hafner et al. (2022) use a VQ-VAE (Razavi et al., 2019; Van Den Oord et al., 2017) to discretize the latent state space, which gives the HLC ("Manager") a discrete action space. Moreover, they use a World Model (Hafner et al., 2019) to help shape Director's state representation. During training, the LLC in Director is rewarded proportionally to the similarity of its latent state and the conditioning input (the goal), at each timestep. The version of Director that is competitive with their baseline (Dreamer; Hafner et al., 2019) adds extrinsic rewards to the reward provided to the LLC. We evaluated H2O2 and our flat Muesli baseline in DeepMind Lab's Goals Small and Objects Small. Figure 6 shows the average return of different agents on the two tasks. The "Flat Baseline" is a Muesli agent like the one used in the Hard Eight tasks, but uses a replay-to-online ratio of 0.92 We present variants of H2O2 with the replay-to-online ratio used for Hard Eight tasks (0.5) as well as 0.9. The figure also shows the final performance of Director and Dreamer (as the dotted line, both methods have the same final performance). This variant of Director adds extrinsic rewards to the LLC objective. Figure 6 shows that with an appropriate replay-to-online ratio both H2O2 and Muesli baseline can match the data efficiency of Director and Dreamer, though it's unclear what the latter's final performance would be if trained longer. Footnote 2: This ratio means that in each minibatch 90% of the data is sampled from a replay, and the other 10% from online experience. This increases the data efficiency of the agents and make them competitive in early training (the 180-thousand-frame regime). ## 5 Conclusion Our work introduces H2O2, the first demonstration of a complete hierarchical agent that can use non-trivial options to attain strong performance in visually complex partially observable tasks. H2O2 is competitive with a state-of-the-art flat baseline, it discovers and learns to use its options, and it does so from its own generated experience. **Relevance.** HRL has received much interest due to its potential to deliver powerful agents with appealing capabilities--for example, transfer, skill reuse, planning and exploration with abstraction. Recent successes with HRL in different domains Figure 6: The average return across two levels of DMLab Beattie et al. (2016). We also indicate the final performance of the Dreamer and Director baselines Hafner et al. (2022) with the dotted line, after 50M frames. (Hafner et al., 2022; Merel et al., 2019; Wulfmeier et al., 2021) provide evidence that practical, effective HRL agents are possible, even if existing agents do not yet fully realize the potential of HRL. Therefore, it is important to expand the coverage of effective hierarchical agents across domains, and to identify and tackle practical challenges that can bring us closer to a full-fledged hierarchical agent. #### Significance. Our work is an important contribution to the HRL research for two reasons: H2O2 is a proof of concept, complete and effective HRL agent, and our work highlights critical challenges for HRL in complex domains that are vastly overlooked in the literature. 
It was only by going through the process of designing and training an HRL agent for complex domains that we exposed some of these issues. #### Successes. We built on existing work to tackle some of the practical challenges of HRL in complex domains, such as learning goal-conditioned behaviors offline from any experience generated by an agent (not just expert behavior). To achieve this, we introduced a regularized offline V-Trace algorithm and demonstrated how to integrate the policy that executes these goal-conditioned behaviors (the LLC) with a general-purpose RL algorithm that learns to select these behaviors as options in order to solve tasks (the HLC). #### Lessons learned. We believe that our empirical findings apply to domains where a very large number of options is conceivable, but for any one task a much smaller set of behaviors is relevant and useful. Visually complex domains tend to naturally have this property, and this is arguably the kind of rich domain we want intelligent agents to be effective in. However, we think that many of the challenges we observed would go away if we were to limit the learning to a small set of options (Merel et al., 2019; Wulfmeier et al., 2021), or choose them sensibly beforehand. Within the scope of these "rich domains", however, the lessons we can draw from our experimental results can apply to various HRL agents beyond H2O2. The lessons apply most closely to SMDP agents. The SMDP framework has been backed with theoretical justification (Barto and Mahadevan, 2003; Precup, 2000), and our work complements existing knowledge with empirical findings. We noticed a strong contrast between how HRL is typically motivated in the literature (e.g., Barto and Mahadevan, 2003; Hutsebaut-Buysse et al., 2022; Pateria et al., 2021), and the practical challenges we encountered. It is often claimed that hierarchical agents can demonstrate very appealing capabilities and algorithmic strengths, such as sample efficiency, structured exploration, temporal abstraction, improved credit assignment, state abstraction, jumpy planning, transfer and generalization. These "HRL promises" can easily be misconstrued as properties of hierarchical agents, which may lead to misconceptions about how hierarchical agents will learn and perform. Our empirical findings exposed some of these HRL misconceptions. For example, the SMDP approach promises to simplify the problem for the general-purpose RL algorithm. So one might expect that adding capabilities that are perceived as strengths of HRL (for example, more expressive options) to the SMDP will cause the general-purpose RL algorithm to solve the SMDP with less effort than if it had simpler options, or only primitive actions. However, in some experiments we showed the opposite. In practice, the design of the LLC effectively changes the SMDP, and the hierarchical agent can only be competitive with a flat agent if the SMDP is easier to solve than the original MDP (besides admitting a better solution). Therefore both solution quality and learning dynamics are essential factors to consider when designing the hierarchical agent. #### Open challenges. We also identified questions that remain open: How can we structure the goal space to accelerate HLC learning? Is it possible to learn effective HLCs with general-purpose RL algorithms? How can the HLC agent learn with a very large number of complex options, but remain competitive with a flat baseline? Are image goals good enough? What other goal modalities can we use? 
Which goals should we train the LLC to achieve? Some of these questions can be investigated in simple domains, as long as the domains are designed to pose challenges that we observe in practice. For example, a simple grid-world where there is an option to reach any cell from any other cell can be a fruitful domain to explore. However, it may be challenging to outperform strong flat deep RL baselines in such simple domains if the options are not prescribed but learned end to end. We presented simple approaches for some of the challenges above--e.g. the goal sampling distribution for the LLC. We expect that the performance of H2O2 will improve with goal sampling distributions that incorporate principled techniques for option discovery (Machado and Bowling, 2016). H2O2 can be a starting point for research that aims to investigate specific HRL sub-problems without losing sight of the performance of the whole agent in complex tasks.
2309.11883
On-the-Fly SfM: What you capture is What you get
Over the last decades, ample achievements have been made on Structure from motion (SfM). However, the vast majority of them basically work in an offline manner, i.e., images are firstly captured and then fed together into a SfM pipeline for obtaining poses and sparse point cloud. In this work, on the contrary, we present an on-the-fly SfM: running online SfM while image capturing, the newly taken On-the-Fly image is online estimated with the corresponding pose and points, i.e., what you capture is what you get. Specifically, our approach firstly employs a vocabulary tree that is unsupervised trained using learning-based global features for fast image retrieval of newly fly-in image. Then, a robust feature matching mechanism with least squares (LSM) is presented to improve image registration performance. Finally, via investigating the influence of newly fly-in image's connected neighboring images, an efficient hierarchical weighted local bundle adjustment (BA) is used for optimization. Extensive experimental results demonstrate that on-the-fly SfM can meet the goal of robustly registering the images while capturing in an online way.
Zongqian Zhan, Rui Xia, Yifei Yu, Yibo Xu, Xin Wang
2023-09-21T08:34:01Z
http://arxiv.org/abs/2309.11883v2
# On-the-Fly SfM: What you capture is What you get ###### Abstract Over the last decades, ample achievements have been made on Structure from motion (SfM). However, the vast majority of them basically work in an offline manner, i.e., images are firstly captured and then fed together into a SfM pipeline for obtaining poses and sparse point cloud. In this work, on the contrary, we present an on-the-fly SfM: running online SfM while image capturing, the newly taken On-the-Fly image is online estimated with the corresponding pose and points, i.e., _what you capture is what you get_. More specifically, our approach firstly employs a vocabulary tree that is unsupervised trained using learning-based global features for fast image retrieval of newly fly-in image. Then, a robust feature matching mechanism with least squares (LSM) is presented to improve image registration performance. Finally, via investigating the influence of newly fly-in image's connected neighboring images, an efficient hierarchical weighted local bundle adjustment (BA) is used for optimization. Extensive experimental results demonstrate that our on-the-fly SfM can meet the goal of robustly registering the images while capturing in an online way. ## I Introduction Structure from Motion (SfM) has been a pivotal topic in the field of computer vision, robotics, photogrammetry, which are widely applied in augmented reality [1], autonomous driving [2, 3, 4], and 3D reconstruction [5]. Heretofore, many impressive SfM approaches have been extensively studied, mainly including Incremental SfM [5-9], Hierarchical SfM [10-13] and Global SfM [14-20], depending on the procedure of how images are registered. However, these SfM methods predominantly operate in an offline manner, i.e., images are firstly captured, feature extracting\(\backslash\)matching and epipolar geometry validation are then performed using all images, one specific SfM method is selected to estimate poses of all images and the corresponding sparse point cloud. This conventional offline SfM typically limits the possibility for online measurement, rapid quality evaluation, etc. In response to real-time performance, there exists another related hot research topic of VSLAM (Visual Simultaneous Localization and Mapping) worth referring to, it can deal with video data in real time. Given sequential frames, VSLAM can compute real-time trajectory of cameras and 3D object points. Generally, with various embedded sensors, VSLAM can be mainly categorized into mono-VSLAM, stereo-VSLAM and Inertial-VSLAM [21-25], they all contain several common modules: _tracking_, inputting frames and outputting the corresponding pose; _local mapping_, generating 3D points and optimizing local maps; _loop closure_, detecting loop and refining loop correction. The inherent assumption of VSLAM requires that the input frames must be spatiotemporally continuous [21], which means two adjacent frames must be contiguous in time and space or auxiliary information from GPS/IMU [23,24] is available, this consequentially hinders the way that the data can be collected. In this paper, as Fig. 1 exemplifies, we present a novel on-the-fly SfM: running online SfM while image capturing. Similar to conventional SfM, on-the-fly SfM yields image poses and 3D sparse points, but we do this while the image capture. More specifically, the current image's pose and corresponding 3D points can be estimated before next image is captured and on-the-fly to be processed, i.e., _what you capture is what you get_. 
Also, analogous to VSLAM that can ensure real-time performance, on-the-fly SfM is further designed to be able to deal with images captured in an arbitrary way, whereby spatiotemporal continuity is no longer necessary. The proposed SfM is mainly composed of three steps: an online image collecting module, fast image matching and efficient geometric processing. The first one is established with a camera and a Wifi transmitter, which immediately sends the captured image for processing via Wifi signal transmission. The second step is to efficiently and robustly generate the matching results between the already registered images and the new fly-in image, in which fast image retrieval is the most important component for real-time performance. The last step is to estimate camera poses and 3D points robustly and fast: besides the canonical image registration and triangulation, an efficient hierarchical weighted local bundle adjustment is adopted. For each new fly-in image, we just iterate these three steps. More details can be found in Section III-A. To approach the goal of _what you capture is what you get_, along with the presented on-the-fly SfM using a new online working mode, we also make three technical contributions: * Fast image retrieval based on learning-based global feature and vocabulary tree. In this work, we extract the global feature using the pre-trained model [26] and train a vocabulary tree in an unsupervised manner. For each new fly-in image, the global feature is computed and traversed along the vocabulary tree for fast image retrieval. * Refinement of correspondences using Least Squares Matching [27]. Based on the original matching mechanism (e.g., SIFT [28]), considering the geometric and photometric consistency around the local windows of matched points, a least squares system is applied to refine the 2D position of correspondences according to the grey values within the relevant local windows on two images. * Hierarchical weighted local BA for efficient optimization of poses and 3D points. For each new fly-in image, only its neighboring connected images (already registered) are enrolled in BA. In addition, based on our image retrieval result, the influence of various connected images on the newly captured image is implied by hierarchical weights, which are employed as priors for improving BA. Fig. 1: The proposed on-the-fly SfM. ## II Related Works In this section, two related topics (SfM and SLAM) are briefly reviewed, mainly covering some popular works. In addition, some state-of-the-art studies regarding image retrieval and efficient bundle adjustment are introduced. ### _SfM & VSLAM_ So far, there are a lot of open public SfM packages, e.g., VisualSFM [29], OpenMVG [30], Theia [31], Colmap [5], etc. However, all these packages basically concentrate on the offline processing mode. For example, Colmap, one of the most widely-used packages, furnishes an end-to-end 3D reconstruction pipeline for large-scale unordered images and it unfolds via a structured pipeline that is mainly comprised of three key stages: image matching, pose estimation and sparse reconstruction, and dense reconstruction. To achieve the goal of real-time SfM, inspired by monocular VSLAM, Song et al. [32] presented a monocular SfM that concentrated on eliminating scale drift using the information of the ground plane. They yielded comparable performance to the stereo setting on long-time sequences. Zhao et al.
[33] proposed a so-called real-time SfM (RTSM), in which feature matching was improved by a hierarchical feature matching strategy based on BoW (Bag-of-Word) [34] and multi-view homography, and a graph-based optimization was employed for efficiency. However, both the reviewed online SfM methods still rely on the spatiotemporal continuity between images or require GPS. Furthermore, there already exist quite a few mature VSLAM methods that are capable of real-time performance in specific scenarios or tasks. For example, the very popular ORB-SLAM series [21, 24, 25] was continuously published and support a wide range of camera models and sensors, allowing them to achieve high-precision localization while capturing frames, and the VINS-Fusion [35] combined input of images and GPS/IMU which are widely adopted in autonomous driving. However, the robustness of these VSLAM methods is limited in certain scenarios, such as weak textures and motion blur. Yue et al. [27] integrated least squares into the feature matching of ORB-SLAM2 and provided more precise observations. Please note that while there are ample VSLAM methods worth reviewing, this review section only lists a few popular and relevant works. Contrary to conventional SfM, our proposed on-the-fly SfM is deployed with real-time online processing while image capturing in an arbitrary way. Comparing to VSLAM, the major advantage of our method is that the requirement for input images' spatiotemporal continuity is not necessary any more, nor is the independence of GPS/IMU. ### _Image retrieval_ Image retrieval technique has been widely deployed in SfM and VSLAM for accelerating feature matching and loop closure detection. One typical idea is to build an efficient indexing structure using local features (e.g., SIFT, ORB), in which the BoW is one of the most representative methods to fast identify similar image pairs and loop closure, such as [36]. Similar to BoW, Havlena and Schindler [37] trained a two-layer vocabulary tree for speeding up image matching. Wang et al. [38] introduced random KD-forest consisted of several independent KD-trees, and matchable image pairs can be efficiently determined via traversing on the KD-forest. In the last few years, learning-based methods have greatly improved image retrieval regarding both time efficiency and precision. Arandjelovic et al. [39] proposed a trainable pooling layer via a soft assignment for VLAD, which boosted the place recognition. Radenovic et al. [40] exploited the SfM result and automatically generated similar and non-similar image pairs, which is used to fine tune pre-trained CNNs for better global image features. Based on [40], Shen et al. [41] adjusted CNN by considering the local overlapping regions. Recently, Hou et al. [26] proposed a CNN fine-tuning method with multiple NetVLADs to aggregate feature maps of various channels and published an benchmarks _LOIP_ that consists of both crowdsourced and photogrammetric images. ### _Efficient optimization of bundle adjustment_ Nowadays, bundle adjustment (BA) has become a mature technique for optimizing image poses and 3D point positions. However, as image number increases, a lot of works for solving BA in a fast and reliable way emerged. For example, preconditioned conjugate gradients were explored to solve BA in [42], Wu et al. [43] and Zheng et al. [44] further improved the efficiency for solving large-scale linear equation system by means of GPU. 
To cope with large-scale problem, distributed approaches that split a large BA problem into several overlapping small subset BA problems attract researchers' attentions [45, 46, 47, 48]. [45] parallelly solved each subset BA and proposed global camera consensus constraint to merge all subsets, [46] employed 3D points as global consensus constraints and the corresponding covariance information was applied for better convergence behavior. MegBA [47] parallelly solved subsets via multiple GPUs, which provide a more time efficient solution. All the above BA methods aims to efficiently optimize all unknowns globally, which are inherently not feasible for incremental or sequential mode (it is not efficient to run global BA when each and every new image comes in, see section IV-C). ## III On-the-Fly SfM In this section, we introduce our on-the-fly SfM in more detail. First, we overview the general pipeline of our SfM that can perform online SfM while capturing image in arbitrary manner. Then, three key enrolled methodologies are explained: 1) Fast image retrieval based on learning-based global feature and vocabulary tree; 2) Correspondence refinement using least squares matching; 3) Efficient BA optimization via weighted hierarchical tree. ### _Overview of on-the-fly SfM_ Fig.2 illustrates the general workflow of our on-the-fly SfM, which constitutes five parts: image capturing and transmitter, online image matching, two-view geometry, LSM correspondence refinement, online reconstruction. Next, we explain each part. **Image capturing and transmitter**. To achieve the goal of what you capture is what you get, in this work, a consumer digital camera is used to collect images, which is integrated with a wireless Wifi transmitter to transfer images for processing in real time (see section IV for more details). After receiving a new fly-in image, the other four parts start to work. **Online image matching**. Fast identifying matchable images for new fly-in image is one of the most important procedures, as the first step for a new image is to find the relationship with already registered images, i.e., running image matching. In this paper, we applied the learning-based global feature [26] and its corresponding vocabulary tree to fast determine new image's matchable candidate images, among which correspondences are estimated. **Two-view geometry**. Similar to [5], a multi-model two-view geometric verification method is applied. In general, fundamental matrix is estimated and two images are geometrically reliable if at least \(N_{f}\) inlier matches exist, then the homography is computed with \(N_{h}\) inliers. For calibrated case, essential matrix is estimated as well. And the final two-view geometric model is selected according to GRIC [48], and initial stereo reconstruction is selected as the verified image pair with most triangulated 3D points and the median triangulation angle being closed to 90 degrees (e.g., 60\(\sim\)120). **LSM correspondence refinement**. Despite the employed robust estimator in two-view geometry and online reconstruction, a further improvement can be expected by refining the generated correspondences based on least squares matching. **Online reconstruction**. This part mainly addresses on image pose and 3D point estimation, among which the image registration and triangulation are solved by EPnP [49] and RANSAC-based multi-view triangulation [5]. 
To approach online reconstruction, we solve the most time-consuming bundle adjustment by presenting a hierarchical weighted local bundle adjustment, which is based on the fact that a newly fly-in image only affects its connected overlapping images to some degree (more details can be found in Section III-D). ### _Fast image retrieval based on learning-based global feature and vocabulary tree_ In this part, a fast image retrieval pipeline integrated with learning-based global feature and vocabulary tree is employed to guarantee online image matching for on-the-fly SfM. Fig. 3 illustrates the key idea: 1. Pre-train models. A CNN model is applied as the global feature extractor [26,39,40], and a vocabulary tree is built using global features of all training images; 2. Image retrieval for fly-in image. Each new image's global feature is firstly extracted using the selected CNN model, and input into the built vocabulary tree to fast identify matchable images. #### Iii-B1 Learning-based Global Feature Extractor CNNs have been successfully applied in retrieving visually similar images as feature extractors [50]. In this work, to determine matchable image pairs that often have partial overlapping area, the fine-tuned CNN model of [26] is selected as our global feature extractor, as we find that [26] is tailored for seeking overlapping image pairs to speed up offline SfM and is supposed to be also feasible for our on-the-fly SfM. In particular, [26] yields a new training dataset (_LOIP_) with ground-truth matchable pairs, and a novel architecture composed of a CNN and multiple NetVLADs is fine-tuned by a region triplet loss. Note that their off-the-shelf model is accessible and employed. #### Iii-B2 Vocabulary tree training To the best of our knowledge, for global features, similar images are typically retrieved by comparing the Euclidean distance of two images' feature vectors, which is not efficient for large-scale problems. Motivated by BoW, it can be expected that a vocabulary tree for global features is able to further improve retrieval time efficiency. Given the global features extracted by [26], we can train a corresponding vocabulary tree in an unsupervised manner, i.e., the canonical K-means algorithm is hierarchically repeated to split the feature space until a certain depth is reached. To ensure the generality and even splitting of the feature space, _LOIP_, containing various crowdsourced and photogrammetric images, is used. As a consequence, a vocabulary tree with the information of each cluster center is generated for fast image retrieval. Figure 2: Workflow of the proposed on-the-fly SfM. Figure 3: Fast image retrieval workflow based on learning-based global feature and vocabulary tree. Figure 4: Toy example for fast image retrieval of new fly-in image. Similar images are clustered into the same node. #### Iii-B3 Fast Image Retrieval for new fly-in image Based on the pre-trained models of the global feature extractor and vocabulary tree, matchable images of a new fly-in image can be quickly found. Instead of estimating the Euclidean distance of all possible image pairs, only the cluster centers need to be compared, and the assumption is that similar images should fall into the same node, as Fig. 4 shows. More specifically, as a new image flies in, its global feature is extracted and fed into the vocabulary tree, and the already registered images that are matchable candidates can be quickly identified via traversing the nodes of the vocabulary tree, i.e., similar images should always be in the same node. 
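As an illustration of the vocabulary-tree idea above, the following is a minimal Python sketch of hierarchical K-means over global descriptors and of the node-traversal query for a new fly-in image. The branching factor, depth, and class and function names are illustrative assumptions rather than the paper's implementation, and the insertion of newly registered images into the tree is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

class VocabTree:
    """Hierarchical k-means over global descriptors (feats: NumPy array N x D)."""
    def __init__(self, feats, ids, branch=8, depth=3):
        self.ids = list(ids)                      # image ids stored in this node
        self.children, self.kmeans = None, None
        if depth > 0 and len(feats) > branch:
            self.kmeans = KMeans(n_clusters=branch, n_init=4).fit(feats)
            labels = self.kmeans.labels_
            self.children = [
                VocabTree(feats[labels == c], np.asarray(self.ids)[labels == c],
                          branch, depth - 1)
                for c in range(branch)
            ]

    def query(self, feat):
        """Descend toward the closest cluster center at each level and return the
        image ids stored in the reached leaf as matchable candidates."""
        node = self
        while node.children is not None:
            c = int(node.kmeans.predict(feat.reshape(1, -1))[0])
            node = node.children[c]
        return node.ids
```

The design point this captures is that a query only compares against a handful of cluster centers per level instead of against every registered image, which is what makes the retrieval cost compatible with an online pipeline.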
### _Correspondence Refinement using Least Squares Matching_

Based on the original feature matching mechanism (e.g., SIFT), we present a correspondence refinement solution that integrates least squares matching (LSM), which is intended to mitigate error accumulation. In general, as Fig. 5 shows, LSM is first applied to improve correspondences in terms of 2D position and outlier removal, and then to generate new observations for improving the PnP estimation.

1) _Basic Principle of LSM._ The general idea of LSM is to optimize the 2D position of matches based on the consistency of pixel grey values in corresponding local windows on two images [27]. Typically, radiometric and geometric inconsistencies are modelled in LSM: the first often results from illumination, varying photographic conditions, digitization errors, etc., while the second is normally due to depth changes, image distortion, etc. The basic assumptions of LSM are that the radiometric inconsistency between matched points is not complicated and can be approximated by a linear transformation (see Equation (1)), and that the geometric inconsistency between two corresponding small local windows can be modelled by an affine transformation (as the left part of Fig. 5 shows, see Equation (2)). LSM is formulated by Equation (3), which combines Equations (1) and (2).

\[I_{1}(x_{1},y_{1})=h_{0}+h_{1}I_{2}(x^{\prime}_{2},y^{\prime}_{2}) \tag{1}\]

\[\left\{\begin{matrix}x^{\prime}_{2}=a_{0}+a_{1}x_{2}+a_{2}y_{2}\\ y^{\prime}_{2}=b_{0}+b_{1}x_{2}+b_{2}y_{2}\end{matrix}\right. \tag{2}\]

\[I_{1}(x_{1},y_{1})=h_{0}+h_{1}I_{2}(a_{0}+a_{1}x_{2}+a_{2}y_{2},b_{0}+b_{1}x_{2}+b_{2}y_{2}) \tag{3}\]

where \(I(.)\) denotes the grey value, and \((x_{1},y_{1})\) and \((x_{2},y_{2})\) are a correspondence from the original matching results. \(a_{0\sim 2}\) and \(b_{0\sim 2}\) are the unknown affine parameters, and \(h_{0}\) and \(h_{1}\) are the unknown linear parameters of the radiometric constraint. Equation (3) can be solved by least squares in an iterative way [27]. If the refinement converges successfully, the refined 2D position is obtained from Equation (2); otherwise, the correspondence is deleted as an outlier.

2) _2D position refinement and outlier detection._ According to the basic principle of LSM, given a pairwise correspondence, i.e., \((x_{1},y_{1})\) and \((x_{2},y_{2})\), we first try to solve Equation (3) using least squares: if it converges, the corresponding 2D position is refined; if it fails, the correspondence is flagged as an outlier.

3) _Densifying matches._ For the new fly-in image, one of the main goals is to compute the corresponding pose via EPnP. To ensure a robust and reliable pose estimation, new reliable extra 2D-3D matches are produced using LSM. For 3D points that are visible in a specific image but have no corresponding 2D observations, LSM is run to generate these new 2D-3D matches. More specifically, an initial pose is first estimated, the 3D points are reprojected onto the image to obtain coarse 2D positions, and LSM is then used to optimize these into more accurate 2D positions as densified matches. Finally, all the 2D-3D matches, including both the original and the densified ones, are employed for pose estimation.
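As an illustration of the iterative solution of Equation (3), the following Python sketch refines a single correspondence by Gauss-Newton least squares over the eight unknowns \(a_{0\sim 2}\), \(b_{0\sim 2}\), \(h_{0}\), \(h_{1}\), using bilinearly sampled patches and numerical image gradients. It is a simplified stand-in for the procedure of [27] (fixed window size, simple convergence test on the window shift, no robust weighting), and the function name `lsm_refine` is ours.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def _sample(img, xs, ys):
    """Bilinear sampling of img at float coordinates (x = column, y = row)."""
    return map_coordinates(img, np.vstack([ys.ravel(), xs.ravel()]),
                           order=1, mode="nearest").reshape(xs.shape)

def lsm_refine(img1, img2, pt1, pt2, half_win=7, max_iter=20, tol=1e-3):
    """Refine one correspondence (pt1 in img1, pt2 in img2) with LSM.

    Unknowns: affine warp a0..a2, b0..b2 and radiometric offset/gain h0, h1
    (Equations (1)-(3)). Returns (refined_pt2, True) on convergence or
    (pt2, False) otherwise, in which case the match is treated as an outlier."""
    u, v = np.meshgrid(np.arange(-half_win, half_win + 1),
                       np.arange(-half_win, half_win + 1))
    x1, y1 = pt1
    tmpl = _sample(img1.astype(float), x1 + u, y1 + v)      # fixed template
    gy_full, gx_full = np.gradient(img2.astype(float))       # image gradients
    # parameter vector p = [a0, a1, a2, b0, b1, b2, h0, h1]
    p = np.array([0., 1., 0., 0., 0., 1., 0., 1.])
    x2, y2 = pt2
    for _ in range(max_iter):
        a0, a1, a2, b0, b1, b2, h0, h1 = p
        xw = x2 + a0 + a1 * u + a2 * v                       # warped window
        yw = y2 + b0 + b1 * u + b2 * v
        warped = _sample(img2.astype(float), xw, yw)
        gx = _sample(gx_full, xw, yw)
        gy = _sample(gy_full, xw, yw)
        r = (tmpl - (h0 + h1 * warped)).ravel()              # grey-value residuals
        J = np.stack([h1 * gx, h1 * gx * u, h1 * gx * v,     # d/d a0, a1, a2
                      h1 * gy, h1 * gy * u, h1 * gy * v,     # d/d b0, b1, b2
                      np.ones_like(u, dtype=float), warped   # d/d h0, h1
                      ], axis=-1).reshape(-1, 8)
        dp = np.linalg.lstsq(J, r, rcond=None)[0]
        p += dp
        if np.linalg.norm(dp[[0, 3]]) < tol:                 # window shift converged
            return (x2 + p[0], y2 + p[3]), True
    return pt2, False
```

The same routine serves the densifying step: the reprojected coarse 2D position is simply used as `pt2`, and a converged solution yields the new 2D observation for the 2D-3D match.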
### _Hierarchical weighted local bundle adjustment for efficient optimization_

To achieve real-time performance for our on-the-fly SfM, an efficient bundle adjustment is heavily required. Inspired by the natural phenomenon that the closer a ripple is to the centre, the larger its amplitude (as the lower-left corner of Fig. 6 shows), analogously, the uncertainty of a new fly-in image has a higher influence on closely associated images than on images that are farther away. As Fig. 6 implies, this work presents a new efficient local bundle adjustment with hierarchical weights. Based on the image retrieval results (section III-B), a hierarchical association tree is built, which encodes the association relationship between the new image and the registered images. A hierarchical weight for every locally associated image is then estimated and used for robust bundle adjustment.

#### Iii-D1 Hierarchical association tree building and weighting

With the presented fast image retrieval method, for every fly-in image it is efficient to determine the top-N similar images. As images on different ripples (or hierarchical layers) are affected by the new image to different degrees, a _hierarchical association tree_ is built. The images in the first ripple are the top-N similar images of the current new fly-in image, the second-ripple images are the top-N similar images of the first-ripple images, and so on until a pre-set depth \(\mathbf{L}\) is reached. All the images enrolled in the hierarchical tree are denoted as \(I^{hat}\). Fig. 7 illustrates a toy 4-layer hierarchical association tree, in which each lower layer contains the retrieved top-N images of the layer above, and the first layer contributes the strongest effect on the new image (indicated by the thick red line). According to the ripple phenomenon, this work introduces a simple yet efficient hierarchical weighting scheme for the images on the various ripples, as shown in Equation (4):

\[p_{i}=\begin{cases}1,&\textit{if }i=*\\ k^{\,i-1},&\textit{if }i\neq\mathbf{L}\\ \infty,&\textit{if }i=\mathbf{L}\end{cases} \tag{4}\]

where \(i\) is the index of the layer, \(*\) denotes the current new fly-in image and \(k\) is a constant (\(k>1\)) encoding the basic inverse influence between the new fly-in image and the already registered images. The larger \(i\) is, the higher the corresponding \(p_{i}\), which means that images on farther ripples are much more stable and should receive smaller updates.

#### Iii-D2 Local bundle adjustment with hierarchical weights

Based on the local block consisting of \(I^{hat}\) and the weights \(p_{i}\), we establish a new efficient and robust local BA with hierarchical weights. Equation (5) denotes the original reduced normal equation involving only the camera parameters (see [6] for more details).

\[(J^{T}J+\lambda D^{T}D)\delta=-J^{T}f \tag{5}\]

To run bundle adjustment in a fast and robust way for the new fly-in image, this study modifies Equation (5) as shown in Equation (6)

\[(J^{T}J+\lambda D^{T}D)P^{hat}\delta^{hat}=-J^{T}P^{hat}f \tag{6}\]

where only the local block parameters \(\delta^{hat}\) of the images \(I^{hat}\) are refined, and the weight matrix \(P^{hat}\), composed of the corresponding \(p_{i}\), is employed for robust optimization.

Fig. 5: Least square matching refinement.

Fig. 6: Hierarchical weighted local bundle adjustment.

Fig. 7: Example of hierarchical association tree building and weighting.
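For concreteness, the Python sketch below builds the set \(I^{hat}\) layer by layer from the retrieval results, assigns the hierarchical weights of Equation (4), and applies them in a damped Gauss-Newton step on the reduced camera system following one possible reading of Equation (6). The dense-matrix formulation and the helper names are illustrative simplifications, not the actual Schur-complement solver used in our system.

```python
import numpy as np

def build_hat_set(new_img, top_n_similar, depth_L):
    """Build the hierarchical association tree layer by layer.

    `top_n_similar(img)` is assumed to return the retrieved top-N similar
    registered images of `img` (section III-B). Returns a dict mapping each
    enrolled image of I^hat to its ripple/layer index (0 = the new image)."""
    layer_of = {new_img: 0}
    frontier = [new_img]
    for layer in range(1, depth_L + 1):
        nxt = []
        for img in frontier:
            for cand in top_n_similar(img):
                if cand not in layer_of:          # keep the shallowest layer
                    layer_of[cand] = layer
                    nxt.append(cand)
        frontier = nxt
    return layer_of

def hierarchical_weight(layer, depth_L, k=2.0):
    """Equation (4): 1 for the new image, k**(layer-1) for inner ripples,
    and an effectively infinite weight (frozen camera) on the last ripple."""
    if layer == 0:
        return 1.0
    if layer == depth_L:
        return np.inf
    return k ** (layer - 1)

def weighted_camera_step(J, f, w, lam=1e-3):
    """One damped step on the reduced camera system under one possible
    reading of Equation (6): (J^T J + lam*D^T D) P delta = -J^T P f.

    `w` holds one weight per camera-parameter column of J (the per-image
    p_i repeated over that image's parameters); infinite weights are
    treated as fixed cameras and removed from the solve."""
    w = np.asarray(w, dtype=float)
    free = np.isfinite(w)
    Jf = J[:, free]
    H = Jf.T @ Jf
    P = np.diag(w[free])
    A = (H + lam * np.diag(np.diag(H))) @ P       # D^T D taken as diag(J^T J)
    rhs = -(P @ (Jf.T @ f))
    delta = np.zeros_like(w)
    delta[free] = np.linalg.solve(A, rhs)
    return delta
```

In this reading, cameras on deeper ripples receive larger weights and hence smaller corrections, while the outermost ripple is kept fixed, which matches the intended behaviour that distant, well-constrained images remain stable.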
## IV Experiments

In this section, we report extensive experimental results on various datasets to demonstrate the capability of "_what you capture is what you get_" for our on-the-fly SfM.

### _Implementation details_

The learning-based global features are extracted by [26] and the vocabulary tree is trained with all images of _LOIP_ [26]. As shown in Fig. 8, our online image transmission is integrated with the CAMFI 3.0 wireless image transmission equipment, whose working range is around 50 meters and whose transmission speed can be up to 10 Mb/s. Typically, 3-5 s are needed to receive one image after it is captured in our tests. All experiments are run on a machine with 16 CPU processors and an RTX3080 GPU.

**Experimental datasets**. As Fig. 8 shows, two self-collected datasets (_SX_ - 221 images, _YX_ - 349 images) are used to evaluate the on-the-fly performance of our SfM; they were taken in an arbitrary way and transferred online to our system. Three visual sequences (_fr1_desk_, _fr3_st_far_, _fr1_xyz_) from the TUM RGB-D datasets [50] are additionally employed as simulated input.

**Running parameters**. In this work, some free parameters are set empirically. For online image matching, the vocabulary tree has a depth of 5 layers with 5 sub-clusters per node. Each new fly-in image selects the top-30 similar images for subsequent matching. The small local window in LSM is set to 15 \(\times\) 15 pixels. For efficient BA, as each image in a ripple has top-N candidate images, which might yield a large BA block, only the top-8 similar images are considered. The constant weighting parameter is \(k=2\) in all experiments.

### _Performance of fast image retrieval_

To validate the real-time performance of our online image matching, based on _SX_ and _fr3_st_far_, we investigate three different image matching strategies: exhaustive matching using Colmap with default settings (EM), exhaustive Euclidean comparison using the learning-based global feature [26] (EE), and the proposed image retrieval based on the learning-based global feature and vocabulary tree (Ours). Fig. 10 qualitatively shows the matching results: both Ours and EE can identify the basic skeleton of EM, which means that the most similar images determined by EM are successfully found by Ours and EE.

### _Performance of efficient local bundle adjustment_

To demonstrate the efficacy of the presented local bundle adjustment, different bundle adjustment solutions are compared: first, a global bundle adjustment that enrolls all images (Glo.); second, a combined solution integrating local and global bundle adjustment (Com.), which is successfully applied in Colmap [5]; third, our local bundle adjustment with hierarchical weights (Ours). Based on _fr3_st_far_, these three bundle adjustment solutions are tested for BA optimization whenever a new image comes into the block. Fig. 11 shows the time cost of the different BA methods, recording the optimization time for each new fly-in image. It can be seen that, as the image number grows, the consumed time increases dramatically for the global method, while the time cost of Ours increases the slowest and tends to become stable after some images are added. This can be explained by the fact that, as more images are involved, more time is needed to refine more unknown parameters: the whole block is considered by the global method, whereas Ours only solves a local bundle adjustment for the images in the built hierarchical association tree.

Figure 8: Online image transmission (left - hardware, middle - _SX_, right - _YX_).

Figure 9: Time consumption of various methods on _fr3_st_far_.

Figure 10: Overlapping graph of _SX_. Vertical and horizontal axes are image IDs. The darker red the pixel is, the higher the possibility that the corresponding image pair overlaps.
Tab. I lists the quantitative results, i.e., the averaged mean reprojection error of each BA (AMRE), the mean reprojection error of the final BA (MFRE) and the mean track length (MLT). These results are very similar and of the same order of magnitude. Therefore, the presented BA is a fast yet robust solution and is feasible for our on-the-fly SfM.
2309.04530
Cosmological simulations of a momentum coupling between dark matter and quintessence
Dark energy is frequently modelled as an additional dynamical scalar field component in the Universe, referred to as "quintessence", which drives the late-time acceleration. Furthermore, the quintessence field may be coupled to dark matter and/or baryons, leading to a fifth force. In this paper we explore the consequences for non-linear cosmological structure formation arising from a momentum coupling between the quintessence field and dark matter only. The coupling leads to a modified Euler equation, which we implement in an N-body cosmological simulation. We then analyse the effects of the coupling on the non-linear power spectrum and the properties of the dark matter halos. We find that, for certain quintessence potentials, a positive coupling can lead to significantly reduced structure on small scales and somewhat enhanced structure on large scales, as well as reduced halo density profiles and increased velocity dispersions.
Daniela Palma, Graeme N. Candlish
2023-09-08T18:00:04Z
http://arxiv.org/abs/2309.04530v1
# Cosmological simulations of a momentum coupling between dark matter and quintessence ###### Abstract Dark energy is frequently modelled as an additional dynamical scalar field component in the Universe, referred to as "quintessence," which drives the late-time acceleration. Furthermore, the quintessence field may be coupled to dark matter and/or baryons, leading to a fifth force. In this paper we explore the consequences for non-linear cosmological structure formation arising from a momentum coupling between the quintessence field and dark matter only. The coupling leads to a modified Euler equation, which we implement in an N-body cosmological simulation. We then analyse the effects of the coupling on the non-linear power spectrum and the properties of the dark matter halos. We find that, for certain quintessence potentials, a positive coupling can lead to significantly reduced structure on small scales and somewhat enhanced structure on large scales, as well as reduced halo density profiles and increased velocity dispersions. keywords: dark matter - dark energy - large-scale structure of Universe - methods:numerical ## 1 Introduction Cosmological observations (Planck Collaboration et al., 2018) provide strong evidence that we live in a spatially flat universe presently undergoing an accelerated expansion, with the standard assumption being that this is driven by a cosmological constant, \(\Lambda\). The other components of the standard \(\Lambda\)CDM model are dark matter (DM), whose only cosmologically-relevant interaction is through gravity, and the directly-observable baryonic matter of the visible Universe. This model has proven to be remarkably consistent with observations of the CMB, BAO and Hubble parameter, at least at high redshift (Planck Collaboration et al., 2014, 2016, 2018). There are, however, several problems with the standard model (for a review see Perivolaropoulos & Skara, 2021). Firstly, while the cosmological constant may be interpreted as a homogeneous energy density that fills the Universe, referred to as dark energy (DE), the physical interpretation of this energy density from particle physics as arising from zero-point quantum fluctuations in the vacuum is famously problematic. Thus there are many proposals that the late-time accelerated expansion is caused by some other effect, either a modification of General Relativity or an additional dynamical dark energy component. Among the latter possibilities (see the review Yoo and Watanabe, 2012), the most studied is a scalar field, first postulated in Wetterich (1995) and subsequently referred to as quintessence by Caldwell et al. (1998). Another problem associated with DE in the form of a cosmological constant is the "coincidence problem" (Amendola and Tsujikawa, 2010) which seems to imply that our current epoch is special, given that, in the standard model, the beginning of the DE dominated stage of the cosmological evolution occurred very recently, at a redshift of approximately \(z\approx 0.3\). Dynamical DE theories, in particular those that include a coupling with DM, may help explain this apparent coincidence (Amendola, 1999, although see Lindner, 2006 for an alternative viewpoint). The tension between local and early-Universe measurements of the Hubble parameter is also a potentially serious issue with the standard model (Verde et al., 2019). 
While it may be that systematic errors in the measurements alleviate some of the tension (Bernal et al., 2016), the size of the discrepancies suggests that it cannot be explained by assuming these errors alone. In addition, the cosmological parameter \(\sigma_{8}\), closely connected to the matter density parameter \(\Omega_{M}\), provides us with an excellent tool to constrain matter formation and structure growth. The CMB and large-scale structure (LSS) observations have shown some discrepancies in this value, suggesting a tension between \(\Omega_{M}\) and \(\sigma_{8}\), as discussed in Ade et al. (2014), indicating a lower structure growth rate than expected according to \(\Lambda\)CDM. Furthermore, at small scales, several studies, such as that of Oh et al. (2015), have shown through rotation curves that the density profiles of satellite galaxy halos exhibit an inner core of nearly constant density. This contrasts with predictions from numerical simulations using the \(\Lambda\)CDM model that halos have a universal NFW density profile, which shows a "cuspy" behaviour in the innermost regions, referred to as the _cusp-core_ problem. One possible solution comes from the contribution of feedback from the baryonic content of the halo, causing a redistribution of the dark matter and resulting in a core (Valenzuela et al., 2007). It is not currently clear if such a mechanism is sufficient to resolve the problem, especially in very dark matter-dominated galaxies. Another possible challenge to the model is the _missing satellites_ problem, whereby cosmological simulations suggest the existence of far more satellite dark matter halos than have been detected observationally (via their baryonic content) around the Milky Way (or other local-group galaxies). Various solutions have been proposed, such as reionisation suppressing star formation in low mass halos (Bullock et al., 2000) or tidal interactions stripping the baryonic material (Brooks et al., 2013). Again, a clearer understanding of baryonic processes inside the halos may provide a solution within the context of \(\Lambda\)CDM. Finally, the study of the motion of satellite galaxies around the Milky Way has shown discrepancies concerning \(\Lambda\)CDM predictions. The expectation within the hierarchical structure formation scenario is that there would be an approximately isotropic distribution of these galaxies around their hosts. However, as studied in the Milky Way (Pawlowski et al., 2013), in Andromeda (Koch & Grebel, 2006) and the elliptical galaxy Centaurus A (Muller et al., 2018), it appears that their satellite galaxies are positioned in a planar distribution around the host galaxy, around which they have a coherent rotational motion. This structure is referred to as the _plane of satellites_. There are some indications that this problem may not be as severe for \(\Lambda\)CDM as initially thought, due to the non-isotropic build-up of structure falling in through filaments (Zentner et al., 2005). However, given that observations suggest these satellite planes are ubiquitous (Phillips et al., 2015) it is still not at all clear if this can be generally accommodated within the standard model. To address the large-scale problems associated to \(\Lambda\) discussed above, and in particular the coincidence problem, there have been numerous proposals that the dynamical dark energy component may be coupled to the dark matter component. 
There is some motivation for this from particle physics, given that quantum corrections typically introduce couplings between particle species, and it is generally the _absence_ of a coupling that must be explained, usually via some postulated symmetry. By restricting the coupling to be between the two dark sector components we can also avoid very stringent fifth-force constraints (Will, 2014). The presence of such a coupling can strongly influence both the background evolution and that of the perturbations. In Amendola (2000), a model with linear coupling was studied, considering a quintessence scalar field with exponential potential. It was found that the effect of the coupling on the power spectrum reduced and increased (very slightly) the value of \(\sigma_{8}\) for large and small couplings, respectively. In Valiviita et al. (2008), the authors studied a coupled model where the DM and DE components are treated as fluids. This type of coupling generated instabilities within the model, which, according to the same authors, could be avoided if the DE were considered as quintessence. Salvatelli et al. (2014) proposed that if there is an interaction at the level of the densities, the observational data favour that it is activated in the later stages of the evolution of the Universe, \(z\sim 0.9\). In the majority of studies, the coupling is usually introduced at the level of the continuity equations for DM and DE. This type of coupling modifies the background evolution, so the coupling parameter must be very small (Wang et al., 2016), in order not to deviate excessively from the \(\Lambda\)CDM background predictions. Maccio et al. (2004) studied such models of coupled dark matter-dark energy using N-body simulations, with a simplified treatment of the coupling. It was found that, for strong coupling, the DM density profiles tended towards higher concentrations, exacerbating the cusp core problem. However, the study of Baldi (2009) found conflicting results for a similar coupling, showing a change in the slope of the density profile in the other direction, with a decrease in the central densities of the innermost regions of the DM halos. A thorough study by Li & Barrow (2011, 2012) undertook a complete analysis of the consequences of these density-coupled DM-DE models. To begin with, the linear power spectrum was analysed where the contribution of baryons and DM was separated, observing that the presence of the coupling in the power spectrum at small scales shows an increase in the number of structures compared to \(\Lambda\)CDM, with this increase starting at a very early stage, even being relevant as early as \(z=49\). Thus to be consistent, for these kinds of models, it is necessary to use initial conditions for the N-body simulations that differ from those of \(\Lambda\)CDM. In addition it was found that the coupling effect leads to a modified non-linear matter power spectrum and mass function. As regards halo profiles it was shown that it is possible to see a reduction in the inner density profile, as compared to \(\Lambda\)CDM, although this suppression of the inner density is reduced for large couplings. In Li & Barrow (2011) the contributions of various effects in the coupled model were examined: the modified background expansion, varying particle mass, fifth force effects and finally the presence of a velocity dependent force. It was found that the first effect, the modified background expansion, is by far the most consequential for structure formation in these models. 
In Baldi (2012) the CODECs project is discussed, which significantly extended the explored parameter space of such models, with both large-scale (L-CoDECS) and small-scale (H-CoDECS) models of dark matter density-coupled to dark energy, with the scalar field \(\phi\) evolving according to a potential \(V(\phi)\) of the exponential form. For all cases, they normalized all the models based on the same CMB amplitude, so for each simulation, they used different initial conditions. It was found that the coupling effects could break the degeneracy between DE and \(\sigma_{8}\) at linear scales, given that the amplitude of the linear power spectrum exhibits a faster time evolution compared to \(\Lambda\)CDM. It was further found that for many coupled DM-DE models there is significant enhancement in structure formation in the non-linear regime leading to a modification of the halo mass function (HMF). While there is degeneracy, again, with \(\sigma_{8}\), this can again be broken by the redshift dependence of the HMF. In Simpson (2010) "dark scattering" models are discussed, which consider a momentum exchange between DM and DE. In this model, where the dark energy is treated as a fluid, a drag term in the velocity perturbation arises. In Bose et al. (2018), they analysed such models using various DE equations of state and various interaction cross-sections. They found that the effect of the interaction on linear perturbations acts efficiently to suppress/increase the amplitude of the power spectrum for large/small scales respectively. In Baldi & Simpson (2015), they developed N-body simulations for the dark scattering models proposed by Simpson (2010), using a dark energy fluid with equation of state parameter \(w\), for \(w>-1\) and \(w<-1\). They found that the effect of DE-DM scattering on the linear power spectrum suppresses the power for \(w>-1\) and increases it for \(w<-1\). While in the nonlinear case, the effect is reversed, showing an increase for \(w>-1\) and a suppression for \(w<-1\) at z = 0. They further analysed the HMF, finding that the effect of scattering results in a significant increase (decrease) of the halo abundance over the whole mass range for \(w>-1\) (\(w<-1\)). In addition, they analysed the velocity dispersion of the halos, finding that an increase in the scattering parameter leaves an increase in the velocity dispersion for all mass ranges when \(w>-1\), while for \(w<-1\), they did not find significant deviations. In Baldi & Simpson (2017) the same authors again performed N-body simulations, this time considering a time-evolving equation of state for the dark energy \(w_{DE}\), the results of which were compared with Baldi & Simpson (2015), with the time-dependent equation of state leading to a weaker impact of the coupling at non-linear scales. Thus the amplification found in the power spectrum in the previous study is significantly suppressed in this case. These results suggest a possible avenue to reconcile low and high redshift observables. The dark scattering model of Simpson (2010) has some resemblance to the _Type-3_ model proposed in Pourtsidou et al. (2013). This momentum transfer model considers the dark energy component as a scalar field, rather than a fluid, and leads to a significant modification to the equation of motion for the dark matter. Interestingly, the coupling is absent at the background level, unlike density coupled models. 
Furthermore, some of the modifications to the DM equation of motion are proportional to the DE density contrast, these latter modifications being absent in the dark scattering models. Given the presence of these additional terms the Type-3 model of Pourtsidou et al. (2013) is not reducible to the dark scattering model of Simpson (2010), as discussed in Skordis et al. (2015), and constitutes a new class of coupled DE-DM models. This class of models was subsequently studied in Pourtsidou and Tram (2016); Chamings et al. (2020), with a negative coupling constant, where it was found that the interaction suppressed structure growth, again possibly reconciling some of the tensions in CMB and LSS observations. These models have been further confronted with observations very recently in Spurio Mancini and Pourtsidou (2021). In this paper, we will focus on the study of the Type-3 model given in Pourtsidou et al. (2013). We will analyze, using N-body simulations, the impact of this coupling on the growth of structures as well as the influence (if any) on the shape of the power spectrum and halo properties by comparing with simulations of uncoupled models. We also briefly consider a \(\Lambda\)CDM model for reference. We are primarily interested in the implications for DM halos and the small-scale problems of the standard model discussed above. ## 2 Theory and simulations We now discuss the theoretical background of the model we are considering and show how this is implemented in a cosmological N-body code. For full details of the theory we refer the reader to Pourtsidou et al. (2013) and Skordis et al. (2015), which we closely follow in this section. Note that we work in units of \(G=c=1\). ### Equations of motion Dark matter is treated as a perfect fluid, in the usual manner, while the energy-momentum tensor of the scalar field for Type-3 models is written as \[T^{(\phi)}_{\mu\nu}=F_{Y}\phi_{\mu}\phi_{\nu}-Fg_{\mu\nu}-ZF_{Z}u_{\mu}\mu_{ \nu}. \tag{1}\] where \[Y=\frac{1}{2}\phi_{\mu}\phi^{\mu} \tag{2}\] \[Z=u^{\mu}\phi_{\mu},\] and \(F=F(Y,Z,\phi)\) is some function. \(F_{Y}\) and \(F_{Z}\) denote derivatives of this function with respect to \(Y\) and \(Z\) respectively. The dark matter fluid 4-velocity is given by \(u^{\mu}\) and \(\phi_{\mu}=\partial_{\mu}\phi\). The equations of motion for the scalar field and the dark matter fluid are given by \[\nabla_{\mu}(F_{Y}\phi^{\mu}+F_{Z}u^{\mu})-F_{\phi}=0, \tag{3}\] and \[u^{\nu}\nabla_{\nu\rho}+\rho\nabla_{\nu}u^{\nu}=0. \tag{4}\] Note that the latter equation is just the standard equation of motion for an uncoupled pressureless perfect fluid. Thus the coupling has no direct effect on the DM density continuity equation. The momentum transfer equation is given by \[(\rho-ZF_{Z})u^{\beta}\nabla_{\beta}u_{\mu}=\nabla_{\beta}(F_{Z}u^{\beta}) \dot{\phi}_{\mu}+F_{Z}D_{\mu}Z, \tag{5}\] where \(D_{\mu}=q_{\mu}^{\nu}\nabla_{\nu}\) is the spatial derivative operator given in terms of the projection operator \(q_{\mu}^{\nu}\equiv u_{\mu}u^{\nu}+\phi_{\mu}^{\nu}\), and \(\ddot{\phi}_{\mu}=q_{\mu}^{\nu}\nabla_{\nu}\phi=D_{\mu}\phi=\partial_{\mu}\phi +u^{\nu}u_{\mu}\partial_{\nu}\phi\) is the spatial projection of the derivative of the scalar field. Note that this equation, in the absence of a coupling, is simply the standard geodesic equation for the dark matter fluid which reduces to the standard (pressureless) Euler equation in the Newtonian limit. Coupled quintessence is given by the choice \(F=Y+V(\phi)+\gamma(Z)\), which we assume from now on. 
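For concreteness, with this choice we have \(F_{Y}=1\) and \(F_{Z}=\gamma_{Z}\), so that the energy-momentum tensor (1) reduces, by direct substitution, to

\[T^{(\phi)}_{\mu\nu}=\phi_{\mu}\phi_{\nu}-\left[Y+V(\phi)+\gamma(Z)\right]g_{\mu\nu}-Z\gamma_{Z}\,u_{\mu}u_{\nu},\]

and the coupling to the dark matter frame enters only through the terms involving \(\gamma_{Z}\).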
To more easily connect with the Newtonian limit, as is relevant for our N-body simulations, we switch to the Newtonian gauge (in Pourtsidou et al. 2013 the synchronous gauge is used) described by the following line element: \[ds^{2}=a^{2}(\tau)[-(1+2\Psi)d\tau^{2}+(1+2\Phi)\delta_{ij}dx^{i}dx^{j}], \tag{6}\] where \(\Phi\) and \(\Psi\) are spatial scalars and \(\delta_{ij}\) is the 3-dimensional Kronecker delta (we always assume flat space), and the perturbed fluid 4-velocities (to linear order) in this gauge are, \[u_{0} = -a(1+\Psi),\] \[u_{i} = av_{i},\] where \(v_{i}\) is the velocity perturbation of the fluid and \(\Psi\) is one of the previously defined scalar components of the perturbed metric. The evolution of the cold dark matter fluid at the background level is given by the standard equation of motion in (4): \[\dot{\rho}+3\mathcal{H}\bar{\rho}=0, \tag{7}\] and the evolution of the CDM fluid perturbations are \[\dot{\delta}+\theta+3\dot{\Phi}=0 \tag{8}\] where the dot denotes a derivative with respect to conformal time, and we define \(\theta\equiv\nabla_{i}v_{i}\). The CDM density, including first order perturbations, has been expressed as \(\rho=\bar{\rho}(1+\delta)\). Again, we see that the density evolution is unaffected by the presence of the coupling. To simplify our notation we will from now on refer to the background CDM density with \(\rho\). Turning to the scalar field, the background evolution is given by equation (3) as \[\ddot{\phi}-\gamma_{ZZ}\ddot{\phi}+2\mathcal{H}\dot{\phi}+\gamma_{ZZ}\mathcal{ H}\dot{\phi}-3\alpha\mathcal{H}\gamma_{Z}+a^{2}V_{\phi}=0. \tag{9}\] From equation (3) at first order in the perturbations we obtain \[V_{\phi\phi}\phi a^{2} +3\gamma_{Z}\Psi a\mathcal{H}+3\gamma_{Z}a\dot{\Phi}-\gamma_{Z}a \nabla^{2}\theta-\gamma_{ZZZ}\frac{\ddot{\phi}}{a}\dot{\phi}\] \[+\gamma_{ZZZ}\frac{\ddot{\phi}}{a}\dot{\phi}\mathcal{H}+2\gamma_{ ZZ}\Psi_{\phi}^{\phi}-2\gamma_{ZZ}\Psi\ddot{\phi}\mathcal{H}+\gamma_{ZZ}\dot{\Psi}\dot{\phi}\] \[-\gamma_{ZZZ}\ddot{\varphi}-2\gamma_{ZZ}\dot{\varphi}H-2\Psi\ddot{ \phi}-4\Psi\ddot{\phi}\mathcal{H}-3\dot{\phi}\dot{\phi}-\ddot{\Psi}\ddot{\phi}+ \ddot{\varphi}\] \[-\nabla^{2}\varphi+2\dot{\varphi}\mathcal{H}=0 \tag{10}\] Since we want to take our analysis to small scales, it is useful to pass the equations to Fourier space, as is standard. Therefore each perturbed quantity \(\chi\) and its derivatives can be substituted as follows, \[\chi(x,\tau) \rightarrow\chi(\tau), \tag{11}\] \[\nabla_{\chi}(x,\tau) \rightarrow\chi_{K}(\tau),\] (12) \[\nabla^{2}\chi(x,\tau) \equiv\nabla_{i}\nabla^{i}\chi(x,\tau) \rightarrow k^{2}\chi(\tau). \tag{13}\] Applying this to equation (10) and taking the Newtonian limit1 of \(k\gg\mathcal{H}\) (i.e. modes well within the horizon) the equation simplifies enormously to Footnote 1: The requirement of non-relativistic velocities is implicit in the gauge choice, whereby the DM fluid velocity perturbation is assumed to satisfy \(|v|\ll 1\). \[\dot{\varphi}=a\gamma_{2}\theta. \tag{14}\] We leave a more careful treatment of the scalar field perturbation, where this would be explicitly calculated using a numerical lattice field theory approach, to future work. Using equation (14) we can now write the momentum transfer equation (5) as \[\dot{\theta}+\mathcal{H}\theta+\Psi= \frac{1}{a\tilde{\rho}-\gamma_{Z}\tilde{\phi}}\left[2\gamma_{ZZ} \dot{\tilde{\phi}}\mathcal{H}+3a\gamma_{Z}^{2}\rho\mathcal{H}-\gamma_{Z}\Psi \dot{\tilde{\phi}}\right. 
\tag{15}\] \[+\gamma_{Z}\tilde{\phi}\theta+\sigma_{Z}^{2}\mathcal{H}\theta+a \gamma_{ZYZZ}\dot{\tilde{\phi}}\theta\] \[\left.-\frac{1}{a^{2}\tilde{\rho}-a\gamma_{Z}\dot{\tilde{\phi}}} \left[\gamma_{ZZ}\dot{\tilde{\phi}}^{2}\theta\mathcal{H}-\mathcal{H}\gamma_{ Z}\gamma_{ZZ}\dot{\tilde{\phi}}\theta\right.\right.\] \[\left.\left.+\gamma_{ZZ}\dot{\tilde{\phi}}\dot{\tilde{\phi}} \theta+\gamma_{Z}\gamma_{ZZZ}\dot{\tilde{\phi}}\theta\right]\right.\] where \(\dot{\tilde{Z}}=1/a(\tilde{\phi}+\mathcal{H}\dot{\phi})\). This is the modified Euler equation for momentum-coupled quintessence in a general form, where we have not yet selected the precise form of the coupling. In the absence of the coupling i.e. for \(\gamma=0\), the entire right hand side of equation (15) is zero and we recover the standard Euler equation for the dark matter fluid. This equation must now be implemented in our N-body simulations. ### Modified Euler equation We follow Pourtsidou et al. (2013) and define the coupling as \[\gamma(Z)=\gamma_{0}Z^{2} \tag{16}\] where \(\gamma_{0}\) is a constant whose value is assumed to be in the range \(0\leq\gamma_{0}<1/2\). Note that a negative value for \(\gamma_{0}\) may in fact lead to more favourable observational consequences due to a reduction in structure at the linear level compared to the standard model, as discussed in Pourtsidou and Tram (2016). As we will see, however, a large positive coupling can lead to reduced structure at non-linear scales. The equation (15) becomes \[(1+h_{1})\dot{\psi}_{i}+(1+h_{2})\mathcal{H}\dot{\psi}_{i}+(1+h_{3})\nabla_{i} \Psi=0 \tag{17}\] where the coefficients \(h_{1}\), \(h_{2}\) and \(h_{3}\) are \[h_{1} =\frac{4\gamma_{0}^{2}\dot{\phi}^{2}}{a^{2}\rho-2\gamma_{0}\dot{ \phi}^{2}}, \tag{18}\] \[h_{2} =\frac{(8\gamma_{0}^{2}-2\gamma_{0})\dot{\phi}^{2}+(8\gamma_{0}^ {2}-4\gamma_{0})\dot{\phi}\dot{\phi}\frac{1}{\mathcal{H}}}{a^{2}\rho-2\gamma_{ 0}\dot{\phi}^{2}},\] \[h_{3} =\frac{2\gamma_{0}\dot{\phi}^{2}}{a^{2}\rho-2\gamma_{0}\dot{\phi} ^{2}}.\] In the \(h_{2}\) term, we can replace \(\tilde{\phi}\) using the evolution equation of the background field2, given by equation (9), with Footnote 2: Here we can see the presence of a strong coupling problem, as discussed in Pourtsidou et al. (2013), when \(\gamma_{0}=1/2\). The largest value of \(\gamma_{0}\) that we consider is \(\gamma_{0}=0.3\). \[\ddot{\phi}(1-2\gamma_{0})+2\mathcal{H}\dot{\phi}(1-2\gamma_{0})+a^{2}V_{\phi }=0., \tag{19}\] where we have used equation (16). Note that the coupling constant appears in this equation for the background evolution of the scalar field as an effective rescaling of \(\phi\). Thus, \(h_{2}\) can be written as \[h_{2}=\frac{4\gamma_{0}(\frac{3}{2}-2\gamma_{0})\dot{\phi}^{2}+4\gamma_{0} \phi a^{2}V_{\phi}/\mathcal{H}}{a^{2}\rho-2\gamma_{0}\dot{\phi}^{2}}. \tag{20}\] From this, we can see that in the absence of the coupling we have \(h_{1}=h_{2}=h_{3}=0\), and the Euler equation reduces to its standard form. In the presence of the coupling, however, we see that both the cosmological friction as well as the effective gravitational force acting on the DM are modified. We can also now immediately see in what circumstance we would have modified dynamics, as compared with the uncoupled case. One might expect, given that the \(h_{i}\) are all proportional to \(\dot{\phi}^{2}\), that they would be negligible at late times. If the denominators in equation (18) approach zero, however, then the values of \(h_{i}\) will grow without bound. 
Due to the positivity of all quantities in both terms in the denominator we can therefore state that there will be a significant modification to the dynamics when \[a^{2}\rho\approx 2\gamma_{0}\dot{\phi}^{2}. \tag{21}\] Given that the dark matter density evolves as for the standard case, we can write the condition for large deviations from the standard dynamics as \[2a\gamma_{0}\dot{\phi}^{2}\approx\rho_{0} \tag{22}\] where \(\rho_{0}\) is the present-day dark matter density. As we will see later, this condition is satisfied at late times for all of our models. ### Modification of the N-body solver For our cosmological N-body simulations we use the well-known RAMSES code (Teyssier, 2002), which is a grid-based AMR code, using a particle-mesh (PM) scheme to evolve the dark matter particle distribution. To write our equation in the code, we must first take into consideration the so-called supercomoving coordinates (Martel and Shapiro, 1998) used in RAMSES, which are defined as \[\vec{v} =H_{0}L\frac{1}{a\vec{u}}, \tag{23}\] \[\vec{x} =\frac{1}{a}\frac{\vec{x}}{L},\] \[dt =a^{2}\frac{d\vec{t}}{H_{0}},\] \[\Psi =\frac{L^{2}H_{0}^{2}}{a^{2}}\tilde{\Phi},\] where \(L\) is the length of the simulation box. The coordinates denoted with a tilde are the supercomoving coordinates. To simplify the notation, we will apply the transformation and then remove the tildes. Thus, using equation (23) in equation (17) we get \[\frac{d\vec{u}}{dt}=-\frac{h_{2}-h_{1}}{1+h_{1}}a^{2}\frac{H}{H_{0}}\vec{u}- \frac{1+h_{3}}{1+h_{1}}\vec{\nabla}\Phi. \tag{24}\] Thus when \(h_{1}=h_{2}=h_{3}=0\), we return to the standard form \(\frac{d\vec{u}}{dt}=-\vec{\nabla}_{x}\Phi\), which is simply Newton's second law for a conservative force given by a potential \(\Phi\). Transforming the Euler equation to supercomoving coordinates, in the uncoupled case, eliminates the cosmological friction term, simplifying the calculations in the code. In the presence of the coupling, however, the cosmological friction term is explicitly present, even in supercomoving coordinates. Note that equation (24) uses the Hubble parameter with respect to physical time. We now have the modified Euler equation (24) in a form in which it may be discretised and solved numerically. In RAMSES, a finite difference approximation is used to resolve the equations of motion, using a Leapfrog scheme. Given an acceleration \(-\nabla\phi^{n}\) at a time \(t^{n}\), with particle positions \(x_{p}^{n}\) and velocities \(v_{p}^{n}\), the velocities are updated by a half timestep using the potential at \(t^{n}\) and then the positions are updated using these updated velocities, according to \[v_{p}^{n+1/2} =v_{p}^{n}-\nabla\phi^{n}\Delta t^{n}/2, \tag{25}\] \[x_{p}^{n+1} =x_{p}^{n}+v_{p}^{n+1/2}\Delta t^{n}/2,\] which is then followed by a full update of the velocity using the updated gravitational potential: \[v_{p}^{n+1} =v_{p}^{n+1/2}-\nabla\phi^{n+1}\Delta t^{n}/2. \tag{26}\] Note that the time-step in RAMSES is adaptive, thus we write \(\Delta t^{n}\). To connect with the implementation of the modified Euler equation in the code, we write the finite difference update of the velocity as \[\frac{v_{p}^{n+1/2}-v_{p}^{n}}{(1/2)\Delta t^{n}}=F, \tag{27}\] where \(F\) is the force acting on the particle. This is simply a finite difference approximation to the differential equation (24) in the absence of coupling. 
Thus, we can easily modify the velocity update as required to implement equation (24) in the following way: \[v_{p}^{n+1/2}=v_{p}^{n}-\frac{h_{2}-h_{1}}{1+h_{1}}a^{2}\frac{H}{H_{0}}v_{p}^{ n}\Delta t^{n}/2+\frac{1+h_{3}}{1+h_{1}}F\Delta t^{n}/2. \tag{28}\] We now define two new coefficients \(\epsilon_{1}\) and \(\epsilon_{2}\) to simplify the expression, \[\epsilon_{1} =1-\frac{h_{2}-h_{1}}{1+h_{1}}a^{2}\frac{H}{H_{0}}\Delta t^{n}/2 \tag{29}\] \[\epsilon_{2} =\frac{1+h_{3}}{1+h_{1}}\] so finally equation (28) becomes \[v_{p}^{n+1}=\epsilon_{1}v_{p}^{n}+\epsilon_{2}F\Delta t^{n}/2. \tag{30}\] This is the equation we have implemented in RAMSES. The standard dynamics is recovered by setting \(\epsilon_{1}=\epsilon_{2}=1\) which is equivalent to having all the \(h_{i}\) equal to zero. ### Obtaining the background values Going back to the modified Euler equation (17), we can see that the \(h_{i}\) values (or, equivalently, the \(\epsilon_{1}\) and \(\epsilon_{2}\) coefficients in RAMSES) depend on background quantities such as \(\rho\), \(\phi\), and \(\mathcal{H}\). To solve the evolution of these values we used a modified version of the CLASS code (Lesgourgues, 2011) which calculates the evolution of linear cosmological perturbations. CLASS includes the option to add a quintessence field to the matter-energy components of the Universe. Our modifications of the code were to include the coupling term in the Klein-Gordon equation (9), the scalar field perturbation equation (10) and the momentum transfer equation (15), although we have used only the modified background Klein-Gordon equation (thus including the field rescaling) to calculate the evolution of the background quantities. We will, however, use the full perturbation equations momentarily to confirm that, for our specific models, there is a minimal impact on the CMB power spectrum. The scalar field potential used in our study is that of Albrecht & Skordis (2000), given by \[V(\phi)=((\phi-\beta)^{\alpha}+\Gamma)e^{-\lambda\phi} \tag{31}\] which is already included in CLASS. The parameter values for the potentials we have considered are given in Table 1. The CLASS code adjusts one parameter, chosen by the user, in order to satisfy the closure condition of the density parameters: \(\sum_{i}\Omega_{i}=1\). In our case this parameter was chosen to be \(\lambda\). Note that the values of the other parameters have been based on those used in Albrecht & Skordis (2000), although we have considered rather large initial conditions for the scalar field and its derivative. The resulting evolution is not sensitive to the latter. CLASS calculates the background evolution equations for all components of the Universe (including baryons and radiation), giving us tabulated values for all background quantities at a large number of redshifts. Obtaining these values, we then transform the conformal time given in the table calculated by CLASS to the superconformal time (see equation 23) used in RAMSES. For each model we obtain the background evolution (shown in SS3.1). We have added a small routine to RAMSES to read the table of background values generated by CLASS in order to calculate the values of the \(\epsilon_{1}\) and \(\epsilon_{2}\) coefficients in the modified Euler equation. The values are determined at the appropriate redshift by linear interpolation of neighbouring values in the table. 
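To illustrate how this enters the particle update in practice, the following Python sketch mirrors the routine described above: it linearly interpolates the tabulated background quantities at the current scale factor, evaluates the coefficients of equations (18), (20) and (29), and applies the modified kick of equation (30). It is a schematic stand-alone reimplementation (in Python rather than in the RAMSES source); the dictionary keys for the tabulated quantities are illustrative, conformal-time units are assumed for \(\dot{\phi}\) and \(\mathcal{H}\), and conversions to supercomoving variables are omitted for brevity.

```python
import numpy as np

def interp_background(table, a):
    """Linear interpolation of the CLASS background table at scale factor a.
    `table` is a dict of equal-length arrays (illustrative keys), all
    tabulated on the same grid table['a']."""
    return {key: np.interp(a, table["a"], table[key])
            for key in ("rho_cdm", "phi_dot", "V_phi", "H_conf")}

def euler_coefficients(bg, a, gamma0):
    """h1, h2, h3 of equations (18) and (20); conformal-time quantities."""
    rho, phid = bg["rho_cdm"], bg["phi_dot"]
    Vphi, Hc = bg["V_phi"], bg["H_conf"]
    denom = a**2 * rho - 2.0 * gamma0 * phid**2
    h1 = 4.0 * gamma0**2 * phid**2 / denom
    h2 = (4.0 * gamma0 * (1.5 - 2.0 * gamma0) * phid**2
          + 4.0 * gamma0 * phid * a**2 * Vphi / Hc) / denom
    h3 = 2.0 * gamma0 * phid**2 / denom
    return h1, h2, h3

def modified_kick(v, F, dt, a, H_over_H0, h1, h2, h3):
    """Half-step velocity update of equation (30): v <- eps1*v + eps2*F*dt/2,
    with eps1, eps2 from equation (29). With h1 = h2 = h3 = 0 this reduces
    to the standard kick v <- v + F*dt/2."""
    eps1 = 1.0 - (h2 - h1) / (1.0 + h1) * a**2 * H_over_H0 * dt / 2.0
    eps2 = (1.0 + h3) / (1.0 + h1)
    return eps1 * v + eps2 * F * dt / 2.0
```

The same coefficients are reused for the second half-kick of the leapfrog step, with the background quantities re-interpolated at the updated expansion factor.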
### Initial conditions and parameters for the N-body simulation The initial conditions for the simulations were generated using MUSIC (Hahn & Abel, 2011), with a transfer function generated from CAMB (Lewis & Challinor, 2011), assuming a standard \(\Lambda\)CDM cosmology, with cosmological parameters given in Table 2. We used CAMB for expediency given that the output files are compatible with MUSIC. Given that the background evolution of the dark matter fluid is unaffected by the coupling in our models, the transfer function at high redshift is effectively identical to the uncoupled case and very similar to that of \(\Lambda\)CDM. In addition, leaving the transfer function fixed across all models allows us to generate identical initial conditions, thus simplifying the process of comparing the low-redshift results. We leave for future work the use of fully consistent simulations with appropriately modified initial conditions. The physical box size used for most of our simulations is 32 Mpc \(h^{-1}\), the number of particles \(N_{P}=128^{3}\) and the initial redshift is \(z_{ini}=50\). We have also run some simulations of model C (with \(\gamma_{0}=0\) and \(\gamma_{0}=0.3\)) in larger boxes of sizes 128 Mpc \(h^{-1}\) and 512 Mpc \(h^{-1}\) in order to study the power spectra at larger scales. These simulations use \(256^{3}\) particles, and thus have limited mass resolution and are not used for the halo analysis. Throughout this text when we refer to "large scales" we are referring to scales of order the size of the simulation box. The simulation parameters are summarised in Table 3. All the models studied are summarized in Table 1 with three additional cases given in Table 4, which consider the individual contributions of each Euler coefficient as they are implemented in RAMSES: \(\epsilon_{1}\) and \(\epsilon_{2}\). The parameters for the scalar field potential have been chosen to produce a background evolution that is similar to that of \(\Lambda\)CDM. Our idea is not to deviate excessively from the background evolution of the standard model. For the coupling parameter, we choose the uncoupled case \(\gamma_{0}=0\), an intermediate coupling \(\gamma_{0}=0.15\) and a strongly coupled model \(\gamma_{0}=0.3\). Note that the \(\gamma_{0}=1/2\) case is theoretically excluded (Pourtsidou et al., 2013). It is also worth pointing out that the uncoupled case is not equivalent to \(\Lambda\)CDM, due to the presence of the (uncoupled) quintessence field, rather than a cosmological constant. ## 3 Results We now present the results obtained from our simulations using the CLASS and RAMSES codes. ### Background evolution From CLASS we can obtain the evolution of the density parameters \(\Omega_{i}\) in our models, with \(i\) corresponding to DM, DE, baryons and radiation. Their evolution is broadly consistent with that of \(\Lambda\)CDM. In Fig. 1 we show the evolution of \(H(z)\) normalised by the Hubble parameter of \(\Lambda\)CDM, as well as the values of \(H(z)\) for model C with \(\gamma_{0}=0\) normalised by \(H(z)\) for model C with \(\gamma_{0}=0.3\) to show the effect of the coupling on the background (due to the presence of \(\gamma_{0}\) in equation 19). As we can see in the figure, the values of \(H(z)\) for our quintessence models at high redshift are a constant \(\sim 1\%\) lower than those of \(\Lambda\)CDM. 
At low redshift, however, models A (blue lines) and B (green lines) deviate to a maximum of \(\sim 3\%\) lower values of the Hubble parameter by \(z=0\), independent of the value of the coupling constant. For model C, we see a more complex behaviour, where the Hubble parameter increases towards the \(\Lambda\)CDM value, before then decreasing again to very similar final values as seen for models A and B. In the case of model C with \(\gamma_{0}=0.3\) (orange dotted line) \(H(z)\) peaks at a value that slightly exceeds the standard cosmology, before dropping rapidly. For model C with \(\gamma_{0}\) normalised by the Hubble parameter for the same model but with coupling \(\gamma_{0}=0.3\) (purple dot-dashed line) we see that for large redshift the background evolution is identical in the two models, but at small redshifts the Hubble parameter for the coupled model takes smaller values than that of the uncoupled model. The consequences of this for the power spectra will be explored in Section 3.2. The equation of state parameter \(w_{\phi}\) for a quintessence model is given by \[w_{\phi}\equiv\frac{p_{\phi}}{\rho_{\phi}}=\frac{\dot{\phi}^{2}/2-V(\phi)}{ \dot{\phi}^{2}/2+V(\phi)}. \tag{32}\] and is shown in Fig. 2. For all models, \(w_{\phi}\) begins with a value equal to 1, which then decays rapidly to values close to \(-1\). This transition occurs at very high redshift, well before the starting redshift of our N-body simulations, and occurs because of the form of the chosen potential. If we observe the evolution for each value of \(\gamma_{0}\), we see an almost identical behaviour, with separation of the models as we approach \(z=0\). The values of \(w_{\phi}(z=0)\) are given in Table 5. The evolution is given by a dynamical equation of state whose final value \(w(z=0)\), for all our models, is \(w>-1\). To check that our models are consistent with CMB observations, we now compare the CMB temperature fluctuation power spectrum of our fiducial \(\Lambda\)CDM model with our coupled quintessence models in Fig. 3. As we can see, there are only very minor deviations in the peaks of the power spectra, when comparing with \(\Lambda\)CDM, due to the \begin{table} \begin{tabular}{c|c c c c c c} \hline Model & Potential & \(\Gamma\) & \(B\) & \(\lambda\) & \(\alpha\) & \(\phi\) & \(\phi\) \\ \hline \hline A & \(\Gamma e^{-1\phi}\) & 1.0 & - & 1.597723e-1 & - & 100 & 10 \\ B & \([(\phi-\beta)^{\alpha}+\Gamma]e^{-1\phi}\) & 0.001 & 34.8 & 2.432815e-1 & 2.0 & 100 & 10 \\ C & \([(\phi-\beta)^{\alpha}+\Gamma]e^{-1\phi}\) & 20.0 & 3.8 & 9.347720e-1 & 17.0 & 100 & 10 \\ \hline \end{tabular} \end{table} Table 1: Parameters of the scalar field potential used in our models. The scalar field values are in units of the reduced Planck mass \(m_{P}=\sqrt{8\pi G}\), and the dot indicates a derivative with respect to conformal time. For all models we consider three values of the coupling: \(\gamma_{0}=0\), \(0.15\) and \(0.3\). \begin{table} \begin{tabular}{c|c c c} \hline Model & \(\epsilon_{1}\) & \(\epsilon_{2}\) \\ \hline \hline C* & 1 & 1 \\ C*1 & \(1-\frac{h_{2}-h_{1}}{7+h_{1}}a^{2}\frac{H_{2}}{7+h_{1}}\Delta r^{n}/2\) & 1 \\ C*2 & 1 & \(\frac{1+h_{2}}{1+h_{1}}\) \\ \hline \end{tabular} \end{table} Table 4: The two additional simulations C*1 and C*2 consider each coefficient in the Euler equation separately, using the background evolution of model C for \(\gamma_{0}=0.3\). The model C* has the background evolution of model C with \(\gamma_{0}=0.3\) and \(\epsilon_{1}=\epsilon_{2}=1\). 
\begin{table} \begin{tabular}{c|c c c} \hline Parameter & Value \\ \hline \hline \(H_{0}\) & \(70\) km s\({}^{-1}\) Mpc\({}^{-1}\) \\ \(\Omega_{m}\) & 0.3 \\ \(\Omega_{\Lambda}\) & 0.7 \\ \(\Omega_{b}\) & 0.04 \\ \(\sigma_{8}\) & 0.88 \\ \(n_{s}\) & 0.96 \\ \hline \end{tabular} \end{table} Table 2: The cosmological parameters used for generation of the initial conditions. Figure 1: Evolution of \(H(z)/H_{\rm ref}(z)\) for all models summarised in Table 1. In almost all cases \(H_{\rm ref}=H_{\rm LCDM}(z)\). The exception is the purple dot-dashed line, which is for \(H(z)\) of model C without coupling normalised by \(H(z)\) of model C with \(\gamma_{0}=0.3\). \begin{table} \begin{tabular}{c|c c c} \hline \hline \(N_{P}\) & \(M_{P}\) [\(M_{\odot}\) h\({}^{-1}\)] & \(L\) [Mpc h\({}^{-1}\)] & \(\Delta_{x}\) [kpc h\({}^{-1}\)] \\ \hline \hline \(128^{3}\) & \(\sim 1.3\times 10^{9}\) & 32 & 1.95 \\ \(256^{3}\) & \(\sim 1.0\times 10^{10}\) & 128 & 7.8 \\ \(256^{3}\) & \(\sim 6.7\times 10^{11}\) & 512 & 31.2 \\ \hline \end{tabular} \end{table} Table 3: Technical properties of all our simulations. \(\Delta_{x}\) refers to the maximum spatial resolution. slightly modified background evolution. It is worth noting, however, that the coupled quintessence models that we consider lead to CMB power spectra that are essentially identical, regardless of the potential or the coupling. In order to quantitatively understand the deviation in our modified Euler equation from the uncoupled case, we focus on equation (17). From this equation we can directly estimate the magnitude of our modifications and how these might affect the movement of the particles. Dividing equation (17) throughout by the coefficient of the acceleration term, we can refer to the coefficient of the cosmological friction term as \(c_{1}\) and the coefficient of the gravitational force term as \(c_{2}\), that is: \[c_{1}=\frac{1+h_{2}}{1+h_{1}} \tag{33}\] \[c_{2}=\frac{1+h_{3}}{1+h_{1}}.\] Fig. 4 shows the evolution of these coefficients for all our models, which we compare with the uncoupled case (i.e., with \(c_{1}=c_{2}=1\)). We plot models A (blue lines), B (green lines) and C (orange lines), distinguishing for each value of \(\gamma_{0}\). We summarize the deviations of our models from the standard case in Table 6. Considered as a percentage deviation from the standard case, we can see that the modified cosmological friction term always dominates over that of the modified gravitational force term, at least for the models considered in this study. This would suggest that, in the limit of weak coupling, all of our models would correspond to the dark scattering case. The cosmological friction and effective gravitational force in our models remains the same as the uncoupled case until z \(\sim\) 2, indicating that the presence of the coupling only becomes relevant at low redshift. As we approach z = 0, we see a reduction in the cosmological friction, which is particularly pronounced for \(\gamma_{0}\) = 0.3 in model C (orange dotted line) with a change to negative values and a deviation of over 180%. It is well known that the cosmological friction term in the standard model acts to slow down the formation of structure, given that it is a force directed anti-parallel to the particle velocities. A reduction in the coefficient of this term thus implies a reduction in the effectiveness of the cosmological friction, meaning the particles will be less decelerated by the cosmological expansion as compared to the standard case. 
If this coefficient is equal to zero the cosmological friction is entirely cancelled out leading to unconstrained growth of the gravitational instability. This situation would only arise for a brief period of time in our models, however, due to the time-variation of the coefficients. In the extreme case of a negative coefficient the frictional force then acts parallel to the particle velocities and thus the cosmological friction term in this case becomes a kind of forcing (we will refer to this as the "cosmological push" throughout the rest of the paper). For model A we see that the deviation is much smaller for all \(\gamma_{0}\) values, being 7% for \(\gamma_{0}\) = 0.3, while for model B the deviation reaches 12% at the present time for \(\gamma_{0}\) = 0.3. As for the evolution of \(c_{2}\), we see that the general behavior is Figure 4: Variation of the cosmological friction (\(c_{1}\)) and gravitational force (\(c_{2}\)) coefficients. Figure 3: The angular power spectrum of the CMB temperature fluctuations for LCDM (black line) and our models (coloured lines). Figure 2: The equation of state parameter \(w_{\phi}\) for models A (blue lines), B (green lines) and C (orange lines). \begin{table} \begin{tabular}{l|c c c} \hline \(w_{\phi}(z=0)\) & A & B & C \\ \hline \hline \(\gamma_{0}=0\) & -0.996 & -0.993 & -0.913 \\ \(\gamma_{0}=0.15\) & -0.994 & -0.990 & -0.875 \\ \(\gamma_{0}=0.3\) & -0.990 & -0.983 & -0.779 \\ \hline \end{tabular} \end{table} Table 5: Equation of state \(w_{\phi}(z=0)\) for our models A, B and C with different values of \(\gamma_{0}\). an increase in the coefficient of the gravitational force term. The variation for A reaches 0.4% for \(\gamma_{0}=0.15\) and 1.5% for \(\gamma_{0}=0.3\), while for model B we see that the gravitational force increases by 0.8% and 2.7% for \(\gamma_{0}=0.15\) and 0.3, respectively. For model C, we see that the deviation rises significantly, reaching 53% for \(\gamma_{0}=0.3\). We will see later that both the modified gravitational force term and the modified cosmological friction term have an impact in modifying the evolution of structure, particularly in model C with the largest coupling. It is worth pointing out that, in the extreme case of model C where \(c_{1}<0\), due to the combined effects of the enhanced effective gravitational force and the cosmological push, we would expect to see increasingly "hot" gravitationally bound systems, i.e. the velocity dispersions of bound halos will increase with increasing \(c_{1}\) and \(c_{2}\), presumably leading to some kind of instability in the absence of a mechanism to stop the increase of these coefficients. In this paper we will not address the details of this instability or the virialisation process of the halos. We leave this for future work. ### Density distribution and power spectrum We now turn to the results of our N-body simulations. For all of our results we will consider the final simulation snapshot at \(z=0\). The projected particle density distributions for models A, B and C are shown in Fig. 5 for the small box of 32 Mpc \(h^{-1}\) and in Fig. 6 for the larger boxes of 128 Mpc \(h^{-1}\) and 512 Mpc \(h^{-1}\). The density distribution is visually very similar across all models in Fig. 5 given that we have used identical initial conditions for all runs. Similarly, the structure produced by \(z=0\) in the large box simulations of model C, as shown in Fig. 6, is visually very similar, regardless of whether the coupling is present. 
To analyse the power spectrum we use POWMES (Colombi et al., 2009). In Figs. 7 and 8, we have plotted the power spectra for all models in the 32 Mpc h\({}^{-1}\) box, normalised by the power spectra of \(\Lambda\)CDM (Fig. 7) and the uncoupled models (Fig. 8), i.e. those with \(\gamma_{0}=0\), to analyse the effects of the coupling over a wide range of scales. In Fig. 9 we plot the power spectra for model C with \(\gamma_{0}=0.3\) normalised by the same model with no coupling, for all three box sizes considered in this study: 32 Mpc \(h^{-1}\), 128 Mpc \(h^{-1}\) and 512 Mpc \(h^{-1}\). We also plot in Fig. 10 the power spectra of the models C*1, C*2 and C (all with \(\gamma_{0}=0.3\)) normalised by the model C* which has \(\epsilon_{1}=\epsilon_{2}=1\) (i.e. enforcing a standard Euler equation) but the same background evolution as the coupled model. We can see in Fig. 7 that, compared to our \(\Lambda\)CDM model, the quintessence models all have reduced power on all scales, with the difference being of order \(15-20\%\) on the largest scales. This arises from the differing background evolution in our quintessence models as compared to that of the \(\Lambda\)CDM model, as would be expected given the differing Hubble parameters shown in Fig. 1. The fact that this is seen for all values of the coupling tells us that the difference in power spectra is indeed almost entirely due to this modified background evolution: it is not strongly dependent on the coupling. At smaller scales, however, we see a much stronger coupling dependence, especially for model C with large coupling. If we compare this behaviour with Fig. 8, where we normalise the power spectrum of each model with respect to the uncoupled model of that type, we notice that the power on large scales is enhanced for model C when compared to the uncoupled case, with effectively no change for the other models. Considering the final values of the coefficients of the modified Euler equation given in Table 6 we would not expect to see large deviations in Fig. 8 for models A and B, but would expect to see much more significant deviations for model C. At smaller scales there is again a clear suppression of structure due to the presence of the coupling. The transition from an enhancement of the structure at larger scales to a reduction at smaller scales seen in Figs. 8 and 9 for model C appears to be related to the transition from linear to non-linear scales, as it occurs at a value of \(k\sim 0.6\) [h/Mpc]. From Fig. 9 we can also see that this transition is not dependent on the box size or the resolution. Physically this transition is caused by the reduction of the cosmological friction term as compared to the uncoupled model. For the most extreme case of model C with \(\gamma_{0}=0.3\) this is no longer simply a reduction of the cosmological friction but an inversion of the direction of action of the force. Within the linear regime (before shell crossing) the cosmological push is directed parallel to the gravitational accelerations, leading to an enhancement of power at larger scales, whereas in the non-linear regime (after shell crossing) this is no longer the case, causing the cosmological push to generally reduce power on small scales. These results are consistent with the results of Baldi & Simpson (2015) regarding the dark scattering model, as we will discuss in more detail at the end of this section. 
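The spectra above are measured with POWMES; purely to illustrate the procedure of forming such ratios, the following Python sketch estimates a matter power spectrum from a snapshot's particle positions with a cloud-in-cell deposit and an FFT, and divides the result of a coupled run by that of the corresponding uncoupled run. It is a minimal estimator with illustrative function names, without the shot-noise correction and window deconvolution that POWMES performs.

```python
import numpy as np

def cic_density(pos, boxsize, ngrid):
    """Cloud-in-cell deposit of particle positions (N, 3) onto an ngrid^3 mesh;
    returns the density contrast delta = rho/<rho> - 1."""
    delta = np.zeros((ngrid,) * 3)
    x = pos / boxsize * ngrid
    i = np.floor(x).astype(int)
    f = x - i
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.abs(1 - dx - f[:, 0]) * np.abs(1 - dy - f[:, 1])
                     * np.abs(1 - dz - f[:, 2]))
                np.add.at(delta, ((i[:, 0] + dx) % ngrid,
                                  (i[:, 1] + dy) % ngrid,
                                  (i[:, 2] + dz) % ngrid), w)
    return delta / delta.mean() - 1.0

def power_spectrum(delta, boxsize, nbins=30):
    """Spherically averaged P(k) of a density contrast field."""
    ngrid = delta.shape[0]
    dk = np.fft.rfftn(delta) * (boxsize / ngrid) ** 3
    pk = np.abs(dk) ** 2 / boxsize ** 3
    kx = 2 * np.pi * np.fft.fftfreq(ngrid, d=boxsize / ngrid)
    kz = 2 * np.pi * np.fft.rfftfreq(ngrid, d=boxsize / ngrid)
    kgrid = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2
                    + kz[None, None, :] ** 2)
    bins = np.linspace(kgrid[kgrid > 0].min(), kgrid.max(), nbins + 1)
    idx = np.digitize(kgrid.ravel(), bins)
    kmean, pmean = [], []
    for b in range(1, nbins + 1):
        sel = idx == b
        if sel.any():
            kmean.append(kgrid.ravel()[sel].mean())
            pmean.append(pk.ravel()[sel].mean())
    return np.array(kmean), np.array(pmean)

# Ratio between a coupled and an uncoupled snapshot (positions in Mpc/h):
# k, p_c = power_spectrum(cic_density(pos_coupled, L, 128), L)
# _, p_u = power_spectrum(cic_density(pos_uncoupled, L, 128), L)
# ratio = p_c / p_u
```

Because the two snapshots share the same box, mesh and binning, the estimator's window and shot-noise effects largely cancel in the ratio, which is why such ratios are the natural quantity to compare across models.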
It is important to note, however, that for all our models the coupling parameter \(\gamma_{0}\) appears in the equation of motion for the background quintessence field (19) and so the background evolution is modified when the coupling takes different values. This is also clear from the evolution of \(w_{\phi}\) (see Fig. 2). Therefore, we fully separate the consequences of the coupling in the Euler equation from the background evolution in Fig. 10. In this figure we include model C (orange line), which includes the coupling in both the gravitational force term and the cosmological friction term (with \(\gamma_{0}=0.3\)), model C*1 (blue dotted line) which includes only the modification to the cosmological friction and model C*2 (green dashed line) which includes only the modification of the gravitational force term. We also include model C without coupling (i.e. with \(\gamma_{0}=0\), the purple dot-dashed line). All models are normalised with respect to the model C*, which has the same background evolution as models C (\(\gamma_{0}=0.3\)), C*1 and C*2, but where the coupling is not included in the Euler equation. We first note that the enhanced power at larger scales present for models C*1, C*2 and C (\(\gamma_{0}=0.3\)) is here due entirely to the modified Euler equation, given that the Hubble parameters of all these models are identical. There is enhancement of power at larger scales due to both coefficients \(c_{1}\) and \(c_{2}\), as seen for models C*1 and C*2. The combination of these two effects then leads to a larger total enhancement of power on larger scales, as seen for model C (\(\gamma_{0}=0.3\)). The comparison of model C (\(\gamma_{0}=0\)) with model C* (the purple dot-dashed line) allows us to probe the effect of the modified background only. This is because in both of these models the equation of motion of the dark matter is unmodified, but the Hubble parameters differ. This was discussed earlier in the context of Fig. 1. Referencing that figure, we can see that the Hubble parameter for model C* takes _larger_ values than those for model C without coupling. We would thus expect a larger contribution of cosmological friction in model C* as \begin{table} \begin{tabular}{l|c c} \hline Model & \(c_{1}\) (\(x=0\)) & \(c_{2}\) (\(x=0\)) \\ \hline \hline A, \(\gamma_{0}=0.15\) & 0.981 & 1.004 \\ A, \(\gamma_{0}=0.3\) & 0.933 & 1.015 \\ B, \(\gamma_{0}=0.15\) & 0.966 & 1.008 \\ B, \(\gamma_{0}=0.3\) & 0.881 & 1.027 \\ C, \(\gamma_{0}=0.15\) & 0.576 & 1.108 \\ C, \(\gamma_{0}=0.3\) & -0.859 & 1.530 \\ \hline \end{tabular} \end{table} Table 6: Values of the coefficients of the modified Euler equation at z = 0. compared to model C without coupling, leading to _less_ structure at \(z=0\) in model C*. Normalising with the power spectrum of this model we thus expect values above unity in Fig. 10 for the purple dot-dashed line, which is indeed what we see. Thus we have two competing effects in our models: the Hubble parameter is increased (briefly, at late times) when we include the coupling, leading to a reduction in structure as compared to an uncoupled model. The behaviour of the coefficient \(c_{1}\) in the modified Euler equation, however, means that the _effective_ Hubble parameter for the dark matter is reduced, which would cause an enhancement of structure. In Fig. 8 we have both effects included. 
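To make the competition between the modified friction and the modified gravitational force more concrete, the toy integration below evolves a single velocity under the schematic equation \(\dot{v}=-c_{1}Hv+c_{2}g\). This is only a dimensionless stand-in for the actual modified Euler equation implemented in RAMSES: \(H=1\), the fixed acceleration \(g\), the initial velocity and the time span are toy assumptions, while the values of \(c_{1}\) and \(c_{2}\) are the \(z=0\) entries of Table 6 for model C with \(\gamma_{0}=0.3\).

```python
import numpy as np

# Dimensionless toy integration (not the RAMSES implementation) of the
# schematic equation  dv/dt = -c1*H*v + c2*g  with H = 1 and a constant
# acceleration g, illustrating the effect of the Table 6 coefficients.

def integrate_velocity(c1, c2, g=0.1, v0=1.0, t_end=3.0, n_steps=3000):
    dt = t_end / n_steps
    v = v0
    for _ in range(n_steps):
        v += (-c1 * v + c2 * g) * dt   # Hubble drag (or push) plus gravity kick
    return v

print("standard friction, c1 = c2 = 1          :", integrate_velocity(1.0, 1.0))
print("model C, gamma_0 = 0.3 (Table 6 values) :", integrate_velocity(-0.859, 1.530))
# With c1 < 0 the 'friction' acts along the velocity, so the speed grows
# instead of being damped: this is the cosmological push described above.
```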
The fact that, on larger scales, the coupled models (for model C at least) exhibit enhanced power as compared to the uncoupled models, demonstrates that the effect of the coupling in the modified Euler equation dominates over the modified background evolution. At smaller scales we see an interesting behaviour: the modified cosmological friction term alone _decreases_ structure, while the modified gravitational force term alone _increases_ structure. The physical reason for the latter is clear: the effective gravitational force is stronger thus more structure is formed. It should be noted that the scale-dependence of this effect is due to the short timescale over which the gravitational force term becomes modified (see Fig. 4). Larger scales have simply not had time to be strongly affected. Turning to the modified cosmological friction term (model C*1), the effect of the very strong coupling in model C (\(\gamma_{0}=0.3\)) is to change this term from a friction which acts antiparallel to the particle velocities to a force which acts parallel to the velocity vector of the dark matter (due to the sign change in the coefficient \(c_{1}\)). Thus the velocities in model C*1 are _enhanced_ as compared to the model C*, but the effective gravitational force remains as normal. Dark matter particles are then pushed out of overdensities, resulting in less structure on smaller scales. Interestingly, the combination of these two effects (in model C) still results in a significant reduction of structure at small scales, as the cosmological push overcomes the increased effective gravitational force.

Figure 5: Final projected particle distribution for models A (left column), B (middle column) and C (right column) at \(z=0\) (box size 32 Mpc \(h^{-1}\), with \(128^{3}\) particles). The coupling is \(\gamma_{0}=0,0.15\) and \(0.3\) in the top, middle and bottom rows.

Our results appear to be consistent with the study of Baldi & Simpson (2015) of the dark scattering model, where a momentum transfer between DM and DE leads to a modified cosmological friction term (without a modified gravitational force term). Specifically, we can compare with their case of a constant dark energy equation of state \(w=-1.1\). Although the background evolution differs between our models and those of Baldi & Simpson (2015), the effect on the cosmological friction term is comparable. In their model with \(w=-1.1\) the cosmological friction term is reduced as compared to the standard model. They similarly observe an _enhancement_ of power as compared to their uncoupled models at large scales with a _reduction_ of power at small scales arising purely from a modified cosmological friction term and the modified background evolution. Furthermore, the scale-dependence at small scales and the scale-independence at larger scales is very similar to that seen in our models.

### Halo properties

For the analysis of the matter distribution in our simulations, we used the Amiga Halo Finder (AHF) code (Gill et al., 2004; Knollmann & Knebe, 2009). A velocity criterion is also applied to particles which have been associated to the halo by the density criterion. Since we are modifying the gravitational force in our models this velocity criterion must also be modified. Thus in AHF we use an effective gravitational constant \(\tilde{G}\), which comes from the multiplication of \(G\) with the appropriate value of \(c_{2}\) given the redshift under consideration.
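A minimal sketch of this modified unbinding step, described here and made explicit in Eqs. (34)-(35) below, is the following: rescale \(G\) by \(c_{2}\), integrate the enclosed-mass profile to obtain the potential, and keep particles with \(v<1.5\,v_{esc}\). The function name, the input arrays and the outward-to-inward integration scheme are our own illustrative assumptions and not the AHF implementation.

```python
import numpy as np

G_NEWTON = 4.300917270e-9  # Mpc (km/s)^2 / Msun

def unbinding_mask(r, v, m_enclosed, c2):
    """Keep particles with v < 1.5 * v_esc, using an effective G = c2 * G.

    r          : particle radii from the halo centre [Mpc]
    v          : particle speeds relative to the halo bulk motion [km/s]
    m_enclosed : enclosed mass M(<r) evaluated at each particle radius [Msun]
    c2         : coefficient of the gravitational force term at this redshift
    Illustrative sketch only; not the AHF algorithm.
    """
    g_eff = c2 * G_NEWTON
    order = np.argsort(r)[::-1]                 # integrate the potential inward
    r_s, m_s = r[order], m_enclosed[order]
    phi_sorted = np.empty_like(r_s)
    phi_sorted[0] = -g_eff * m_s[0] / r_s[0]    # boundary value at the halo edge
    for i in range(1, len(r_s)):
        dr = r_s[i - 1] - r_s[i]
        # dPhi/dr = G_eff * M(<r) / r^2, so Phi decreases moving inward
        phi_sorted[i] = phi_sorted[i - 1] - g_eff * m_s[i] / r_s[i] ** 2 * dr
    phi = np.empty_like(r)
    phi[order] = phi_sorted
    v_esc = np.sqrt(2.0 * np.abs(phi))          # escape velocity from |Phi|
    return v < 1.5 * v_esc
```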
In Table 7 we give this value for \(z=0\), the redshift at which all of our results are determined. Particles associated to a halo due to the density criterion will then be assigned to that halo if their velocity is lower than 1.5 times the escape velocity, i.e. \(v<1.5v_{esc}\), where \[v_{esc}=\sqrt{2|\Phi|} \tag{34}\] and \[\frac{d\Phi}{dr}=\frac{\tilde{G}M(r)}{r^{2}} \tag{35}\] with \(r\) the radius from the halo center, \(\Phi\) the peculiar gravitational potential, \(M(r)\) the mass inside the halo and \(\tilde{G}\) the rescaled effective gravitational constant. The total number of halos found at \(z=0\) for each type of model, for differing couplings, is given in Table 8.

Figure 6: Final projected particle distribution for model C with \(\gamma_{0}=0\) (top row) and \(\gamma_{0}=0.3\) (bottom row) at \(z=0\). _Left column_: box size 128 Mpc \(h^{-1}\), _right column_: box size 512 Mpc \(h^{-1}\). In all cases the particle number is \(256^{3}\).

\begin{table} \begin{tabular}{c|c c c} \hline Model & A & B & C \\ \hline \hline \(\gamma_{0}=0.15\) & \(4.319\times 10^{-9}\) & \(4.333\times 10^{-9}\) & \(4.765\times 10^{-9}\) \\ \(\gamma_{0}=0.3\) & \(4.366\times 10^{-9}\) & \(4.419\times 10^{-9}\) & \(6.594\times 10^{-9}\) \\ \hline \end{tabular} \end{table} Table 7: The effective gravitational constant \(\tilde{G}\) of our models. All these quantities are expressed in Mpc km\({}^{2}\)/\(M_{\odot}\)s\({}^{2}\).

#### 3.3.1 Mass functions

We show the halo mass function for all of our models in Fig. 11. We have used the COLOSSUS package (Diemer, 2018) to compare our results with an analytical HMF fitting function defined in Tinker et al. (2008). This definition is derived from halos identified in simulations using the spherical overdensity (SO) method, which agrees with the method used by the AHF code. We have added Poissonian error bars to Fig. 11, where the halo number count in each bin is given by \(N\pm\sqrt{N}\). Due to the limited volume and mass resolution of our simulations, we can only effectively resolve halos in the mass range of \(10^{11}\) - \(10^{14}\)\(M_{\odot}\) in our smallest box size of \(L=32\) Mpc h\({}^{-1}\). This leaves us without information about structures with smaller masses (\(<10^{11}M_{\odot}\)). At higher mass scales our results are comparable to the reference halo mass function within the error bars. We should caution, however, that we have only a small sample of massive halos present in the models using the smallest box size. We also show the halo mass function for two models with \(L=128\) Mpc h\({}^{-1}\). The mass resolution in these models is poor, thus we cannot meaningfully resolve halos below \(\sim 10^{12}\)\(M_{\odot}\). The error bar in our most massive mass bin is reduced for these models. An increase in the mass resolution of these models would allow us to resolve halos for a wider range of masses. The disagreements with the halo mass function at low and high masses appear to be largely unrelated to the coupling, with reasonable agreement in the HMF for all models in the intermediate mass range.

Figure 7: Ratio of the power spectrum of models A (blue lines), B (green lines) and C (orange lines) with respect to \(\Lambda\)CDM.

Figure 8: Ratio of the power spectrum of models A (blue lines), B (green lines) and C (orange lines) with respect to the uncoupled case.

Figure 9: Ratio of the power spectra of model C (for \(\gamma_{0}=0.3\)) for all three box sizes considered in this study, with respect to the uncoupled case.

Figure 10: Ratio of the power spectrum of models C*1 (blue dotted line), C*2 (green dashed line), C with \(\gamma_{0}=0\) (orange solid line) and C with \(\gamma_{0}=0.3\) (purple dot-dashed line) with respect to model C*.

#### 3.3.2 Density profiles

To analyse the change in the halo density profile due to the coupling we consider the most massive halo in each of our models, which is of the order of \(10^{14}\) M\({}_{\odot}\). Note that, due to the overall similarity in structure across all of our models, the selected halos have similar positions within the computational volume. Specifically, we have verified that the distribution of \(x\), \(y\) and \(z\) coordinates of the centre-of-mass of the most massive halo across all models has a maximum standard deviation of \(\sim 75\) kpc h\({}^{-1}\), which is well within the virial radii of the halos (the minimum \(R_{\rm vir}\) for the most massive halo across all models is \(\sim 690\) kpc h\({}^{-1}\)). The profiles are shown in Fig. 12 separated into three panels: in the left panel we have model A with \(\gamma_{0}=0,0.15\) and \(0.3\); and the same for model B (middle), and model C (right panel). The black vertical line indicates the resolution limit of our small box simulations, which we have chosen to be given by 4 grid cells at the highest level of refinement, i.e. \(4\Delta x\), where \(\Delta x=1.95\) kpc h\({}^{-1}\). Models A and B do not show any difference in their density profiles with a change in the coupling constant. In fact, at radii greater than \(\sim 50\) kpc h\({}^{-1}\), the density profiles for each value of \(\gamma_{0}\) seem to be almost identical. For model C we see that an increase in the coupling constant causes a _decrease_ in the inner density of the halos, with this effect extending beyond the very inner regions, such that the profiles are similar only in the outermost parts of the halo. Although there is some variation in mass in the most massive halo in the simulations due to the change in coupling (see Table 9), this is not sufficient to explain the changed density profile. In Fig. 13 we show the individual effect of each coefficient: the density profiles in the left panel, the cumulative mass distribution in the middle panel and the virial ratio in the right panel. We define the virial ratio to be \(2E_{\rm kin}/|E_{\rm pot}|\), where \(E_{\rm kin}\) is the average kinetic energy within the given radius and \(E_{\rm pot}\) is the average potential energy (as determined by AHF). For this plot we have again selected the most massive halos, this time from the simulations described in Table 4. The density profile for the model C*2 (orange solid line) is significantly enhanced in the inner region as compared with the other models. For model C*1 (blue dotted line), the opposite behavior occurs, where a reduction in the innermost regions of the profile is observed. We can also see that there is a lack of particles in the very inner regions of the most massive halo of this model, as compared to the others. In the case of model C* (no coupling) the profile is between these two extremes. We also show in the same figure the fitted NFW halo profiles using the parameters determined by AHF. This is consistent with other works (see e.g.
Baldi et al., 2010) that show that a time-dependent enhancement of the gravitational force modifies the virial equilibrium of the halos and leads to an increase in the halo density profile (model C*2). The cosmological push term removes enough dark matter to substantially reduce the halo profile, as compared to model C*. Thus the reduced profiles seen in Fig. 12 are caused by the cosmological push overcoming the effective gravitational attraction within the halo, resulting in a reduction of the inner density. From the middle panel in Fig. 13 we can see that the mass distribution is more centrally concentrated in the halo of model C*2, and again we see the reduction of particles in the halo of model C*1, with more of the total mass of the halo being found at larger radii. Finally, in the right panel, the virial ratios for the halos in models C*2 and C* tend towards values reasonably close to unity at large radii indicating that they are close to virialised systems. The halo of model C*1, however, is significantly "over-virialised" at large radii, indicating the presence of high kinetic energy particles in the outer region of the halo, deposited there by the effect of the cosmological push. It should be noted that these particles have been identified as being bound to the halo, according to the binding criterion used in Eq. 34, taking into account the enhanced effective gravitational constant of this model. In summary, for the model which exhibits a significant effect on the halo density profile (model C) we see a _decrease_ in the central density. Unfortunately, given the mass resolution of our simulations, we are unable to investigate the density profiles of lower mass halos that would correspond to dwarf galaxies, where the cusp-core problem presents itself. While we hope to improve this in future work, the underlying physical reason for the decreased slope of the density profile seen here should apply equally to all halos, regardless of Figure 11: Halo mass functions for models A (blue symbols), B (green symbols) and C (orange and purple symbols). The green line is the fitting function of Tinker et al. (2008). Poissonian error bars (see text) are also shown. \begin{table} \begin{tabular}{c|c c c} \hline \(\gamma_{0}\) & Model A & Model B & Model C \\ \hline \hline 0 & 1766 & 1755 & 1777 \\ 0.15 & 1766 & 1746 & 1762 \\ 0.3 & 1745 & 1755 & 1764 \\ \hline \end{tabular} \end{table} Table 8: Total number of halos at z = 0 obtained with AHF. \begin{table} \begin{tabular}{c|c c c} \hline Model & \(\gamma_{0}\) & \(N_{subs}\) & \(N_{part}\) & \(M_{halo}\) [\(M_{\odot}/h\)] \\ \hline \hline \multirow{3}{*}{A} & 0 & 21 & 101880 & \(1.319\times 10^{14}\) \\ & 0.15 & 21 & 101583 & \(1.315\times 10^{14}\) \\ & 0.3 & 21 & 102019 & \(1.321\times 10^{14}\) \\ \hline \multirow{3}{*}{B} & 0 & 21 & 101163 & \(1.310\times 10^{14}\) \\ & 0.15 & 21 & 101594 & \(1.315\times 10^{14}\) \\ & 0.3 & 20 & 100927 & \(1.306\times 10^{14}\) \\ \hline \multirow{3}{*}{C} & 0 & 19 & 93438 & \(1.210\times 10^{14}\) \\ & 0.15 & 19 & 96313 & \(1.247\times 10^{14}\) \\ \cline{1-1} & 0.3 & 23 & 107877 & \(1.396\times 10^{14}\) \\ \hline \end{tabular} \end{table} Table 9: Most massive halos selected for density profile analysis. mass. Thus we would expect less cuspy profiles in these models also for low mass halos. 
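The profiles of Figs. 12 and 13 are taken directly from the AHF outputs; the sketch below shows one way the same quantities (a binned density profile, the cumulative mass profile and the virial ratio \(2E_{\rm kin}/|E_{\rm pot}|\)) could be recomputed from the raw particle data of a single halo. The logarithmic binning, the bulk-velocity frame and, in particular, the crude monopole estimate of the potential energy are illustrative assumptions and not the AHF definitions.

```python
import numpy as np

def halo_profiles(pos, vel, m_part, centre, v_bulk, g_eff, n_bins=25):
    """Binned density, cumulative-mass and virial-ratio profiles of one halo.

    pos, vel : (N, 3) particle positions [Mpc/h] and velocities [km/s]
    m_part   : particle mass [Msun/h]
    g_eff    : effective gravitational constant c2 * G [Mpc (km/s)^2 / Msun]
    Illustrative sketch only; not the AHF definitions.
    """
    r = np.linalg.norm(pos - centre, axis=1)
    v2 = np.sum((vel - v_bulk) ** 2, axis=1)
    edges = np.logspace(np.log10(max(r.min(), 1e-3)), np.log10(r.max()), n_bins + 1)
    counts, _ = np.histogram(r, bins=edges)
    shell_vol = 4.0 * np.pi / 3.0 * np.diff(edges ** 3)
    rho = counts * m_part / shell_vol                      # density profile rho(r)
    m_cum = np.cumsum(counts) * m_part                     # M(<r) at the outer edges
    ekin = np.array([0.5 * m_part * v2[r < R].sum() for R in edges[1:]])
    # crude monopole estimate |Epot(<R)| ~ G_eff * M(<R)^2 / R; AHF uses the
    # actual particle potentials, this is only meant to illustrate the ratio
    epot = g_eff * m_cum ** 2 / edges[1:]
    virial = 2.0 * ekin / np.where(epot > 0, epot, np.nan)
    r_mid = np.sqrt(edges[1:] * edges[:-1])
    return r_mid, rho, m_cum, virial

def nfw(r, rho_s, r_s):
    """NFW comparison profile, rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)
```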
#### 3.3.3 Velocity dispersions Given our previous results, showing minimal effects of the coupling for models A and B, we will concentrate mostly on model C for the remainder of our study, except where otherwise indicated. Fig. 14 shows the (logarithm of the) velocity dispersions \(\sigma_{v}\) for all the halos of model C (as determined by AHF) for all values of the coupling. The solid lines in each panel of the figure represents a 4th-degree polynomial best-fit line. We compare these best fit lines for different values of \(\gamma_{0}\) in the right panel of Fig. 14. We find that the largest velocity dispersions occur for \(\gamma_{0}=0.3\) (orange dotted line). The vertical shift compared with the uncoupled case is effectively independent of mass. Given that this is a logarithmic scale, this implies a roughly constant percentage increase in the velocity dispersions as a function of mass. Taking the difference in the best fit lines, we find that the velocity dispersion for halos of mass \(10^{11}\), \(10^{12}\), \(10^{13}\) and \(10^{14}\) is increased by 20%, 17%, 28% and 20% respectively, when comparing the uncoupled model C with the same model and \(\gamma_{0}=0.3\). Although the cosmological push (the inverted cosmological friction seen in model C with the strongest coupling) appears to reduce the amount of structure at smaller scales (see Fig. 8) and reduce the inner density profile of the halos (see Fig. 12) the remaining bound material within the halo has a higher velocity dispersion due to the modified virial equilibrium of the halos resulting from the time-dependent enhancement of the effective gravitational force. These results are again consistent with previous studies of coupled dark matter-dark energy models Baldi & Simpson (2015), where a similar enhancement of halo velocity dispersion profiles was seen. Figure 12: Density profiles for the most massive halo in each model (left panel: model A; middle panel: model B; right panel: model C). Figure 13: Profiles for the most massive halos in models C* (no coupling), C*1 (only modified cosmological friction) and C*2 (only modified gravity term). _Left panel:_ density profiles. The dotted lines are the simulation halo profiles, while the solid lines are analytic NFW profiles using the virial mass and concentration parameter determined by AHF for each halo. _Middle panel:_ cumulative mass profiles. _Right panel:_ virial ratio profiles. #### 3.3.4 Particle velocity distributions We have thus far considered the velocity dispersions of the population of dark matter halos in our models, as determined by AHF, and seen the consequences of the coupling in terms of an increased velocity dispersion across all masses, at least for strong coupling in model C. This can be studied in more detail by analysing the full velocity distribution of the particles within the halos. For this, we have selected three halos from model C (considering \(\gamma_{0}=0\) and \(0.3\)) with masses of \(\sim 10^{12}\), \(10^{13}\) and \(10^{14}\) M\({}_{\odot}\) to see the consequences of the coupling on their constituent particle velocity distributions. Our selection criteria was the following: we first choose model C with \(\gamma_{0}=0\) as our "base" model, and identified all halos within this simulation whose masses lay within the intervals \([1\times 10^{n},1.5\times 10^{n}]\) where \(n\) is chosen to be \(n=12\), \(13\) or \(14\). 
The most massive halo found within this range is chosen as the "base" halo for the comparison (note that the most massive halo in the models under consideration here has a mass of \(\sim 1.4\times 10^{14}\) M\({}_{\odot}\)). We then find the halo in the comparison model (model C with \(\gamma_{0}=0.3\)) whose centre-of-mass coordinates are closest to the "base" halo. We finally verify that the chosen halo has a mass that lies within the given mass interval. As we can see in Fig. 15, the velocity distributions for the low mass (left panel), intermediate mass (middle panel) and high mass (right panel) halos are all shifted to higher velocities in the presence of the coupling, with the effect more notable in the high mass halo. We have already seen that the velocity dispersions are increased by the coupling, here we see that the mean velocities are also shifted. The high mass halo appears to have a mean velocity significantly shifted, leading to a skewed distribution. To quantify the differences between the velocity distributions for different models and different values of the coupling constant, we have fitted Maxwell-Boltzmann (MB) distribution functions to our histograms. While the MB distribution is known to be a rather poor fit to the velocity distributions of gravitational systems, it nevertheless allows us to provide an approximate measure of the change in the distributions due to the coupling. The MB distribution function is given by \[f(x)=\sqrt{\frac{2}{\pi}}\frac{x^{2}}{a^{3}}\exp\left(-\frac{x^{2}}{2a^{2}}\right) \tag{36}\] where the scale parameter \(a\), for an ideal gas, would be given by \(a=\sqrt{kT/m}\), where \(m\) is the particle mass, \(T\) is the temperature and \(k\) is the Boltzmann constant. Note that this is a one-parameter distribution function. We can interpret a larger value of \(a\) as a "hotter" distribution, at least when applying this analysis to the simulation particles that comprise a dark matter halo as the masses are identical. In later sections we will apply a similar approach to the halos themselves. In that case the masses differ, but we will assume that the halo populations in each model are sufficiently similar that the differences in the scale parameter are primarily driven by the effective temperature of the halo distribution. We have therefore fitted a Maxwell-Boltzmann distribution to each velocity distribution in Fig. 15 and we compare the best-fit values of the scale parameter \(a\) in Table 10. The increase in this parameter for the coupled case in all three mass ranges is evident, indicating that the coupling leads to "hotter" velocity distributions. In selecting these halos we have considered those with similar masses and similar locations within the halo distribution, in order to be able to make a better comparison between each simulation. For the case of the velocity distributions in the left panel of Fig. 15, the coupled model halo has a mass \(10.7\%\) lower than that in the uncoupled model. For the intermediate mass case, the coupled model halo has a mass \(9.4\%\) lower, while the high mass halo in the coupled model has a mass \(15.7\%\) higher than the uncoupled case. Although these differences in mass will also lead to differences in the velocity distribution, the effects we see here are considerably larger than expected from the mass difference alone. 
For the low mass and intermediate mass halos the coupled model halo masses are smaller than those of the uncoupled models, thus we would expect _lower_ mean velocities and dispersions for the coupled model halos if these differences arose purely because of mass differences. In the case of the high mass halo, from dimensional analysis one would expect the moments of the velocity distribution to scale as \(\sqrt{M}\), so \(\sim 33\%\) mass difference would translate to a \(\sim 12\%\) difference in the moments. We see shifts in the mean velocity in Fig. 15 that are substantially larger than this.

\begin{table} \begin{tabular}{l|c c c} \hline Model & \(10^{12}M_{\odot}\) & \(10^{13}M_{\odot}\) & \(10^{14}M_{\odot}\) \\ \hline \hline C, \(\gamma_{0}=0\) & 133.4 & 251.3 & 564.8 \\ C, \(\gamma_{0}=0.3\) & 152.1 & 299.0 & 702.7 \\ \hline \end{tabular} \end{table} Table 10: Scale parameter of the Maxwell-Boltzmann distribution functions fitted to the particle velocity distributions of three halos of given mass from model C (see Fig. 15)

Figure 14: Velocity dispersions of all halos identified in model C for \(\gamma_{0}=0\) (_left-most panel_), \(\gamma_{0}=0.15\) (_second panel_) and \(\gamma_{0}=0.3\) (_third panel_). In all cases a fourth-order polynomial fit line has been added. Each fit line is then directly compared in the plot in the right-most panel.

As a final analysis of the effect of the modifications to the Euler equation, we have determined the particle velocity distributions for the most massive halo (\(\sim 10^{14}\) M\({}_{\odot}\)) from each of the simulations that explore the consequences of each modification separately (and maintain an equivalent background evolution in the uncoupled case). These are the C*, C*1 and C*2 simulations referred to in Table 4. The distributions are shown in Fig. 16, compared with the distribution from model C at strong coupling that was already shown in the right panel of Fig. 15. We find that the velocity distributions of each model show quite considerable differences. Firstly, the distribution for model C* (equivalent background evolution but no coupling at the level of the equations of motion) is peaked at a far lower velocity and has a smaller dispersion than the full model C, as expected. In the case of model C*2 (modified gravitational force term only) the dispersion of the distribution is substantially enhanced, as is the mean velocity (although the distribution is not sharply peaked), compared to model C*. The model containing only the effect of the cosmological push (model C*1) shows the most interesting behaviour, being a bimodal distribution, suggesting the presence of a subgroup of particles with larger velocities that is bound within the halo. The increase in the velocity dispersion in this case is not as pronounced as for model C*2. Thus we can conclude that the distribution seen for the highest mass halo in Fig. 15 (right panel) has an increased mean and dispersion primarily due to the enhancement in the effective gravitational force, and is somewhat skewed (with a very weak bimodality) due to the cosmological push. The masses of the selected halos from models C*1 and C*2 are 44% lower and 15% higher when compared to the halo selected from model C for \(\gamma_{0}=0.3\). Again these mass differences alone are not sufficient to explain the differences in the velocity distributions shown here, as discussed earlier.
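For reference, fitting Eq. (36) to a set of particle speeds can be done either by maximum likelihood, using scipy.stats.maxwell (whose standard form coincides with Eq. (36), with the scipy scale parameter playing the role of \(a\)), or by a least-squares fit to the normalised speed histogram, as sketched below. The binning, the initial guess and the synthetic test speeds are illustrative assumptions, not the procedure actually used for Table 10.

```python
import numpy as np
from scipy.stats import maxwell
from scipy.optimize import curve_fit

def mb_pdf(x, a):
    """Maxwell-Boltzmann distribution of Eq. (36) with scale parameter a."""
    return np.sqrt(2.0 / np.pi) * x ** 2 / a ** 3 * np.exp(-x ** 2 / (2.0 * a ** 2))

def fit_scale_parameter(speeds, n_bins=40):
    """Return the MB scale parameter from an MLE fit and from a histogram fit."""
    # maximum-likelihood fit; scipy's 'maxwell' reduces to Eq. (36) for loc = 0
    _, a_mle = maxwell.fit(speeds, floc=0.0)
    # least-squares fit of Eq. (36) to the normalised speed histogram
    counts, edges = np.histogram(speeds, bins=n_bins, density=True)
    centres = 0.5 * (edges[1:] + edges[:-1])
    (a_lsq,), _ = curve_fit(mb_pdf, centres, counts, p0=[speeds.std()])
    return a_mle, a_lsq

# quick self-check with synthetic speeds drawn from a = 565 km/s (comparable
# to the 10^14 Msun halo of Table 10); both estimates should recover ~565
test_speeds = maxwell.rvs(scale=565.0, size=20000, random_state=0)
print(fit_scale_parameter(test_speeds))
```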
#### 3.3.5 Bimodality We now investigate further the bimodality exhibited in the particle velocity distribution for model C*1 in Fig. 16. In Fig. 17 we show the particle content in the most massive halos of models C*, C*1 and C*2 (projected onto the \(xy\) plane), with all plots on the same scale. In the bottom row of this figure we show the velocity distributions of the subhalos within each host. The particles belonging to these subhalos are indicated in the particle plots in the upper row. Those subhalos with velocities (relative to the centre-of-mass velocity of the host) greater than 1000 km/s are indicated in red, while those with lower velocities are indicated in green. Note that this is simply to aid in the identification of the subhalo populations, we have not applied a velocity cut. Comparing amongst the models, we can see that the host halo in the models C* and C*1 is less spatially extended than that of model C*2, which is to be expected given the enhanced effective gravitational interaction in the latter model. It is this enhancement which has apparently also led to an increase in the number of substructures within this host, as compared to the other models. Further evidence for this is shown in Fig. 18 where we plot the mass-concentration relation for all subhalos within models C* and C*2, clearly demonstrating generally higher concentrations for the same mass in the Figure 16: Velocity distribution for the most massive halo selected from the models C with \(\gamma_{0}\) = 0.3, C*, C*1 and C*2. Figure 15: Particle velocity distributions for the low (\(\sim 10^{12}\) M\({}_{\odot}\)), intermediate (\(\sim 10^{13}\) M\({}_{\odot}\)) and high mass (\(\sim 10^{14}\) M\({}_{\odot}\)) halos selected from model C, with \(\gamma_{0}\) = 0 and \(\gamma_{0}\) = 0.3. The curves represent the Maxwell-Boltzmann fit for the uncoupled (solid line) and coupled (dot-dashed line) cases. latter model. Thus the subhalos in model C*2 are likely to be less susceptible to tidal stripping within their hosts. Turning our attention to the halo velocity distributions, we can see that both models C* and C*2 show continuous distributions across the range of velocities, whereas model C*1 shows a very clear separation between subhalo populations, with one group of fast-moving halos clearly distinct from the other slow-moving group. It is the presence of this group of subhalos that gives rise to the bimodality in Fig. 16. It is clear that structure formation proceeds differently in each model (as evidenced by the differing subhalo populations). In the case of model C*1, we hypothesise that there is an accretion of subhalos whose centre-of-mass velocities diverge from that of their host, presumably due to the influence of the cosmological push term, which is dynamically independent of the accelerations induced from the matter distribution. It may be interesting to look for this dynamical separation of groups within host halos in these kinds of models, perhaps using phase-space analyses. We leave further study of the source of this bimodality for future work. #### 3.3.6 Halo velocity distribution We have so far analysed the velocity dispersions and velocity distributions of the dark matter within the halos. Now we examine the velocity distributions of the halos themselves. To do so, we have selected all halos from each model, for each value of \(\gamma_{0}\), over the whole mass range. The measured velocity distributions and the fitted Maxwell-Boltzmann distributions for the halos are shown in Figs. 
19 for models A, B and C, and in Fig. 20 for models C*, C*1 and C*2. As we can see, models A and B show very similar velocity distributions for all halos for all values of the coupling. The same can be seen by looking at the values obtained from the Maxwell-Boltzmann fits (Table 11), where we do not find significant variations for different couplings for models A and B. For our extreme case, model C with \(\gamma_{0}=0.3\), we see again a shift in the distribution towards higher mean velocities and larger dispersions. This may be quantified by looking at the Maxwell-Boltzmann fits, where the scale parameter increases significantly for the case of stronger coupling by 35%, as compared to the uncoupled model, again showing "hotter" velocity distributions. We now turn to the models that isolate the cosmological push (model C*1) and the modified gravitational force (model C*2). Fig. 20 shows the velocity distributions and their Maxwell-Boltzmann fits for all halos in models C*, C*1, and C*2. We again see that the cosmological push, model C*1, leads to a shift towards higher velocities and a larger dispersion compared to model C*, as reflected by the values in Table 11. The enhanced gravitational force in model C*2, however, has less effect upon the halo velocities as compared to the cosmological push effect. Although the MB scale parameter for the halos of model C*2 is larger than that of C*1, this appears to be due to a very small number of very high velocity halos shifting the tail of the distribution. The highest velocity halo in model C*1 has \(v\sim 1600\) km/s, whereas in model C*2 there are 4 halos with velocities higher than 2000 km/s. The main peak of the distribution, however, is broadly comparable with that of model C*, whereas there is clear shift in the peak for model C*1. This suggests that the halo velocities are more affected by the modified cosmological friction, than by the modified gravitational force term, although this latter effect is certainly very relevant for the internal particle distributions within the halos (see Fig. 16). \begin{table} \begin{tabular}{l|c} \hline Model & Scale parameter \\ \hline A, \(\gamma_{0}=0\) & 206.0 \\ A, \(\gamma_{0}=0.15\) & 206.5 \\ A, \(\gamma_{0}=0.3\) & 207.8 \\ \hline B, \(\gamma_{0}=0\) & 207.4 \\ B, \(\gamma_{0}=0.15\) & 209.1 \\ B, \(\gamma_{0}=0.3\) & 209.3 \\ \hline C, \(\gamma_{0}=0\) & 205.4 \\ C, \(\gamma_{0}=0.15\) & 221.7 \\ C, \(\gamma_{0}=0.3\) & 277.4 \\ \hline C* & 194.7 \\ C*1 & 254.0 \\ C*2 & 230.3 \\ \hline \end{tabular} \end{table} Table 11: Scale parameter \(a\) obtained from fitting the halo velocity distributions with a Maxwell-Boltzmann distribution for Models A, B, C and the C* models (from Figs. 19 and 20). Figure 17: Subhalos within the most massive host halo of models C with \(\gamma_{0}=0.3\), C*, C*1 and C*2, with histograms of their velocity magnitudes. Velocities above 1000 km/s are indicated in red, those below are indicated in green, with the associated halo particles coloured accordingly. ## 4 Discussion and Conclusions In this paper we have analysed structure formation in a coupled DM/DE model, where dark energy arises from a quintessence field, and the coupling is purely at the level of a momentum transfer. Our analysis has been based on numerical N-body simulations using a modified version of the RAMSES cosmological simulations code. We have determined the form of the modified Euler equation in the Newtonian gauge, and then considered the small-scale Newtonian limit. 
We have shown that the coefficients of the cosmological friction and gravitational force terms in the resulting Euler equation are time-dependent, being functions of the coupling parameter \(\gamma_{0}\), the time derivative of the background quintessence field \(\dot{\phi}\), the derivative of the potential \(V_{\phi}\) and the background dark matter density \(\rho\). We have considered exclusively the case where \(\gamma_{0}>0\), resulting in a suppression of the cosmological friction term (relative to the standard case) and an enhancement of the effective gravitational force. After implementation of the modified Euler equation into RAMSES, we have then investigated the consequences for structure formation at recent times, focussing on the power spectra and dark matter halo properties, for three choices of scalar field potential. For two of these potentials (referred to as models A and B) the background evolution is very similar to that of \(\Lambda\)CDM, with only a small deviation from \(-1\) at very late times in the value of the dark energy equation of state parameter \(w\). In one of our models, referred to as model C, we have a more pronounced deviation in the background evolution, corresponding to a larger deviation from \(w=-1\). This corresponds to a larger contribution from the scalar field kinetic term, leading to non-negligible deviations from unity in the Euler equation coefficients. Our results demonstrate that, for these models, both the modification of the cosmological friction term and the modification of the gravitational force have an impact in modifying the evolution of structure, especially in model C with the largest coupling. Our specific results are:

* The power spectrum shows a reduction of structure on small scales for all models, and an increase in structure at large scales for model C.
* For strong coupling the cosmological push leads to a reduced inner density profile in at least our most massive halos.
* The mean velocity and velocity dispersion (in model C) of all halos are increased due to the combined effects of the enhanced gravitational interaction and the cosmological push.
* For this same reason, the particle velocities within the massive halos are also substantially higher with the coupling (in model C) than without, caused by the combination of the effective gravitational force and the cosmological push.
* We have found a bimodality in the velocity distribution of the most massive halo in our model C*1 that isolates the effect of the cosmological push. We leave for future work further exploration of the exact source of this bimodality. In addition, with higher resolution simulations and improved statistics we hope to study the frequency of this effect in these models.
* The halo velocity distribution in our models with coupling appears to be primarily affected by the modified cosmological friction term, rather than by the modified gravitational force term.

Figure 18: Mass-concentration relation for all halos in models C*, C*1 and C*2. The horizontal lines indicate the median concentration values for each model.

Figure 19: Velocity distributions of all halos for models A, B and C. The curves represent the Maxwell-Boltzmann fit for \(\gamma_{0}=0\) (green line), \(\gamma_{0}=0.15\) (orange line) and \(\gamma_{0}=0.3\) (red line).

It is worth noting that our study differs from previous work done on simulations of a momentum transfer coupling between dark matter and dark energy (Baldi and Simpson, 2015, 2017).
The models studied here are specific realisations of quintessence scalar field dark energy models and furthermore the modified gravitational force term in our models is non-existent in the dark scattering model. It is also important to note, however, that given the dominant importance of the modified cosmological friction term in our models, our results appear to be very consistent with those reported in (Baldi and Simpson, 2015, 2017). In summary, our results suggest that, as far as non-linear structure formation is concerned, most coupled models differ slightly, in general, from their uncoupled counterparts. In the case where the DE equation of state parameter deviates appreciably from \(w=-1\) we have a sufficient contribution from the kinetic energy of the quintessence field to generate a substantial additional force upon the dark matter, modifying the evolution of the structure. The most notable modification that results is that of less structure on small scales, and an associated reduction in the internal density profile of the halos, due to the interplay between the enhanced gravitational force acting upon the DM and the so-called cosmological push. This implies that our models, with a positive coupling parameter, could in fact help alleviate some of the tensions present at small scales for \(\Lambda\)CDM, such as the cusp-core problem. Given the increase in structure at linear scales in our models, it would appear that the \(\sigma_{8}\) tension would not be alleviated in our models, although structure formation at smaller scales can be suppressed. Such a possibility was explored at the linear level in Pourtsidou and Tram (2016) with a negative value for the coupling constant. In our study, the coupling constant is positive for all models. We should reiterate that we have obtained significant effects in model C because of the similar magnitudes of the two terms in the denominators in equation (18). As discussed in Section 2.2, in cases where \(\rho_{0}\approx 2a\gamma_{0}\phi^{2}\) we expect substantial deviations, as compared to the standard case, in the coefficients of the Euler equation. If we in fact have an equality in this relationship, we will find singular behaviour in these coefficients. This strongly suggest a limitation in the physical viability of these models, at least for positive coupling. An interesting aspect of this work that would benefit from more investigation is the combination of a modified dynamics for dark matter with standard dynamics for the baryons. In particular, it would be of interest to explore the possible consequences for dynamical friction, given that the strength of this effect on an object falling into a dark matter halo depends on the velocities of the dark matter particle field, as well as the effective gravitational force. For the dark matter, the effective gravitational force is enhanced by the coupling (increasing dynamical friction) while the particle velocities are also enhanced (decreasing dynamical friction). The additional contribution from the cosmological push to further accelerate the dark matter particles could possibly result in a net reduction in dynamical friction. For baryons, however, the gravitational force is not enhanced, thus one might expect a reduction of dynamical friction for baryons as compared to the uncoupled case. It would certainly be of interest to explore the consequences of this for galaxy dynamics, such as in galaxy mergers and the evolution of bars. 
This would imply the necessity for hydrodynamical simulations, where the equation of motion of the hydrodynamical material would, of course, be unaffected by the coupling between DM and DE. Alternatively, a population of collisionless star particles could be included, and identified as such within the simulation to allow for their dynamical evolution to be unmodified by the coupling.

A promising avenue for future research in this topic would be to repeat our analysis for \(\gamma_{0}<0\). This has already been shown, at the linear level as mentioned earlier, to reduce some tensions in the standard model, specifically with \(\sigma_{8}\) (Pourtsidou and Tram, 2016). It is straightforward to consider the evolution of the coefficients \(c_{1}\) and \(c_{2}\) for the case of model C with \(\gamma_{0}=-0.3\). In this case the cosmological friction would now be enhanced but the effective gravitational force is reduced. We have confirmed that the amplitudes of these variations, however, are considerably smaller than seen for the \(\gamma_{0}=0.3\) model. This is because, due to the negative \(\gamma_{0}\) in the denominators of equation (18), the singular behaviour discussed earlier cannot arise. For \(\gamma_{0}>0\), a strong coupling problem restricts the range to \(\gamma_{0}<1/2\). There is no known restriction for negative values of \(\gamma_{0}\), thus a larger absolute value of \(\gamma_{0}\) could potentially be considered in that case. Given that the effective gravitational force is weaker for \(\gamma_{0}<0\), this is also likely to reduce the amount of structure formed and perhaps the slopes of the inner densities of that structure. Future studies of these models would also benefit enormously from increased spatial and mass resolution, as well as larger box sizes to explore consequences at very large scales and compare with analytic (linear) perturbation theory results.

Figure 20: Velocity distributions of all halos in models C*, C*1 and C*2. The curves represent the Maxwell-Boltzmann fit for models C* (green line), C*1 (orange line) and C*2 (blue line).

## Acknowledgements

The authors wish to thank the anonymous referee for extremely useful comments which have significantly improved the paper. We also wish to thank Alkistis Pourtsidou for very helpful discussions. The authors acknowledge financial support from FONDECYT Regular No. 1181708. DP thanks Greco Pena for useful discussions and acknowledges the Postgrado en Astrofisica program of the Instituto de Fisica y Astronomia of the Universidad de Valparaiso for funding.

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.